
CEGEG076
Spatio-Temporal Data Mining

Prediction of crime levels in Washington DC, based on seasonality and census data

Duccio Aiazzi and Sarah Hank

April 1, 2016

Abstract

In this study, we critically assess the performance of machine-learning methods with the aim of predicting areas of high and low seasonal burglary rates in Washington, DC. We present two techniques: Random Forest (RF) and Support Vector Machine (SVM). Both are generally regarded as being amongst the best performing classifiers, and both have been used effectively in crime classification. In our experiment, SVM performed better than Random Forest, but only by a small margin which is likely not statistically significant. Random Forest, however, proved the better performer in terms of ease of implementation, speed, and interpretability.

Word count: 4500

Contents

1 Introduction
2 Methods
3 Data
4 Exploratory Data Analysis
5 Random Forest - by Sarah Hank
    Overview
    Experimental Setup
    Analysis and Results
6 SVM - by Duccio Aiazzi
    The algorithm
    Analysis and Results
7 Comparison
    Overview
    Limitations and further studies
8 Appendix
    Random Forest result plots
    SVM result plots
    ACS variables

1 Introduction

This experiment compares two classification algorithms for predicting levels of crime in Washington, DC: Support Vector Machines (SVM) (Vapnik & Chervonenkis 1974) and Random Forest (Breiman 2001). We use various demographic, economic, and housing factors to classify census tracts into either high or low crime categories. Although our first idea was to develop a model predicting crime at a very fine temporal scale using variables that are easy to monitor daily, the project later turned to predicting crime levels from one year to the next using census data. The scope of the model therefore changed from daily prediction to a resource allocation tool operating on a yearly or seasonal basis.

2 Methods

We chose SVM and Random Forest because they rank very high in terms of classifier performance (Fernandez-Delgado et al. 2014) and because we are interested in spotting the interaction of crime with other measurable variables. Other methods would also have been appropriate: for example, given that crime is spatially and temporally autocorrelated (Anselin et al. 2000), a Space Time AutoRegressive Integrated Moving Average (STARIMA) or Space Time Scan Statistics (STSS) model would have been effective in spotting hotspots and patterns (Olligschlaeger & Gorr 1997). However, these methods are limited to space and time factors and do not take other variables into account.

The use of Random Forest as a predictor for crime is relatively new, and thus there is not a large literature exploring its effectiveness. The literature that does exist, however, seems to tout the method's success. Bogomolov et al. (2014) compared ANN, SVM, and Random Forest, using human behaviour from aggregated mobile data in addition to demographic factors to predict crime hotspots in London. They concluded that Random Forest was the most successful of these methods, with a successful prediction rate of 70%. Another study by Breitenbach et al. (2009) compared random forests, support vector machines, gradient descent, neural networks, ADTree, and logistic regression methods to predict violent arrests after prison release, and also found Random Forest to perform best. Because SVM was proposed much earlier, more literature on crime prediction is available for it. As a classifier, it is mainly used to find hotspots, or areas with relatively higher rates of crime. Kianmehr & Alhajj (2008) compare the success of one-class and two-class SVMs against neural networks and Structure Activity Relationships (SAR) in classifying crime hotspots in Columbus, Ohio and St. Louis, Missouri, and find that two-class SVMs work best.

3 Data

Figure 1: Average seasonal count of burglary per census tract, training set and test set

The crime data we investigate comes from the District of Columbia's Metropolitan Police Department's Crime Map tool (Metropolitan Police Department 2008). The data contains crimes related to theft, robbery, and burglary, as well as other violent crimes. Each incident comes with an associated date and time of occurrence, the type of crime (offence), and the location of the crime as latitude and longitude, as well as the census tract in which it occurred. We use census tracts as our areal unit of analysis since this gives us the ability to analyse census data, which is also aggregated by census tract. From Fig. 1 we can see that the spatial distribution of burglary offences is quite different between the average for the three years of the training set and 2014. Fig. 2 shows that the main source of variation in the occurrence of burglary is the census tract: a few census tracts have a low level of burglaries throughout the four years, while the others are divided between those which experience a steady high rate and those where there is more variation.

Figure 2: Heatmap of burglary occurrences by month and by census tract. The horizontal axis shows the month and the vertical axis the census tract. The yellow lines represent areas where the burglary rate is consistently low; some other areas are always red, while others show a temporal pattern.

As mentioned earlier, we originally intended to use weather data as a variable to partially predict crime rate variations on an almost daily basis. The idea came from the literature review: the link between crime and weather is well documented, and the literature generally concludes that warmer temperatures result in more crime, specifically crimes of aggression (Cohn 1990). When we looked at our data, though, the relationship did not hold. There is a clear seasonal pattern in the count of crimes, and therefore a correlation between crime counts and raw temperature. We normalised temperature in two ways: first by taking the absolute deviation from the 30-year historical average for that day, and second by calculating a Z-score for that day's average temperature relative to the average temperatures of the preceding 14 days. Once we eliminated the seasonality by normalising the temperature, we found that relative variation in temperature has no correlation at all with crime rates. We therefore decided to move our attention to a longer-term prediction based on census data.
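The two normalisations described above can be sketched in a few lines; the function names and the toy temperature series below are illustrative, not taken from our actual pipeline.

```python
from statistics import mean, stdev

def deviation_from_normal(temp, historical_avg):
    """Absolute deviation of a day's temperature from its 30-year average."""
    return abs(temp - historical_avg)

def rolling_zscore(temps, i, window=14):
    """Z-score of day i's temperature relative to the preceding `window` days."""
    prev = temps[i - window:i]
    mu, sd = mean(prev), stdev(prev)
    return (temps[i] - mu) / sd if sd > 0 else 0.0

# toy series: stable temperatures followed by one anomalously warm day
temps = [10, 11, 10, 12, 11, 10, 11, 12, 11, 10, 11, 12, 11, 10, 25]
z = rolling_zscore(temps, 14)  # day 15 scored against the previous 14 days
```

Either transform removes the seasonal trend, which is exactly what made the apparent crime-temperature correlation disappear.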

Figure 3: Levels of income. The map indicates clear segregation between wealthy and deprived areas.

Violent crime is, of course, associated with deprived conditions, and there is a vast body of literature on the subject: Ehrlich (1975) on the relation with educational attainment, Ellis et al. (2009) on unemployment, Patterson (1991) on income and deprivation. For our study, we extracted a set of indicators from the American Community Survey (ACS) 5-year estimates for each year from 2011 to 2014. The indicators relate to household types, educational attainment, unemployment levels, income, house occupancy, house values, and age classes. For the full list of variables see Fig. 8 in Section 8. Our goal is to spot non-linear dynamics between these variables and the rates of crime per area. One example is age class: it is reasonable to expect more burglaries to be committed by younger persons. However, offenders do not necessarily commit crimes in the places where they live, but might rather choose to commit their crime in a residential area with a higher housing value or income level. Crime patterns will therefore likely interact with income data both in terms of where offenders are and where they commit a crime.

Below is a visualisation of four of the census input variables: Median Income, Poverty Level, Median Age, and Unemployment Level. Each map visualises the data for the year 2014, while each line chart shows the change over the four years for the tracts with the highest and lowest values. These graphs provide context on both the variability of the data and the landscape of the city.

Figure 4: Percentage of families below the poverty level. Concentration of poverty is mostly seen in the southern part of the city.

The map of median income (Fig. 3) clearly displays a key feature of the landscape of DC: inequality. Tracts west of 16th St (the road which divides DC from the top point of the diamond) and tracts around Capitol Hill (the chunk of darker tracts in the eastern part of the city) are fairly distinctly separated from poorer tracts. The max and min tracts show little variability over time except for a slight dip in the values of the max tract in 2013.

The map in Fig. 4 shows high spatial clustering of high-poverty tracts in the south and south east, the poorest part of the city. Some of the darker tracts in the north west quadrant contain college campuses. The min tract changes very little over time, which is expected since the wealthier parts of the city tend to stay wealthy.


Figure 5: Median age. Median age is highly variable across the city.

Figure 6: Unemployment rate. Unemployment levels mirror the segregation seen in the maps of poverty level and median income.


As expected, median age shows little variability from 2011 to 2014 in the max and min tracts (Fig. 5). Given that the median age in the min tract stays below 25, it is likely the location of a college campus, where the yearly turnover of students keeps the age consistent. A look at the map shows that the tracts with the highest median age are isolated long-term residential areas which have not been affected by gentrification, which explains the lack of variability over time.

There is obvious spatial segregation in the unemployment levels displayed in Fig. 6. A large chunk of contiguous tracts in the north west has the lowest unemployment rates. A large tract in the south east also displays very low unemployment while the tracts around it have some of the highest rates; this is due to the presence of Bolling Air Force Base in that tract. Interestingly, this variable has the highest variability over time, with the min and max tracts heading slightly toward convergence.

4 Exploratory Data Analysis

Figure 7: Seasonal variability of crime counts. The count of crime offences is clearly related to the seasons.

In this section, we examine the temporal and spatial autocorrelation properties of the crime dataset. As we can see in Fig. 7, the count of crime offences is clearly related to the seasons. To quantify the extent to which near observations are more similar than distant observations in time, we plot the autocorrelation function (ACF) (Fig. 8): values near 1 indicate close to perfect positive correlation and values near -1 close to perfect negative correlation. At lag 0 the correlation is 1, because we are comparing the same point in time.

Figure 8: Autocorrelation plot over four years and a close-up over three months. The ACF shows a clear seasonal effect which diminishes over the years; a zoom into one area of the plot shows an interesting effect at weekends.

The graph on the right in Fig. 8 shows the ACF of the daily count of offences over a lag of four years: there is a clear seasonal pattern, as the positive peaks all fall at the yearly marks. This means that every season is strongly correlated from year to year, although the correlation fades over further years. A closer view also reveals an interesting autocorrelation by day of the week.
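The sample ACF can be sketched as follows; this is a generic illustration of the statistic plotted in Fig. 8, not the code used to produce it.

```python
from statistics import mean

def acf(series, max_lag):
    """Sample autocorrelation of a series for lags 0..max_lag."""
    n = len(series)
    mu = mean(series)
    var = sum((x - mu) ** 2 for x in series)
    out = []
    for lag in range(max_lag + 1):
        # covariance of the series with itself shifted by `lag`
        cov = sum((series[t] - mu) * (series[t + lag] - mu) for t in range(n - lag))
        out.append(cov / var)
    return out

# toy daily counts with a period-7 "weekly" cycle
counts = [5, 6, 7, 9, 10, 12, 8] * 8
r = acf(counts, 7)
# r[0] is 1 by construction; r[7] peaks again because the cycle repeats every 7 days
```

The same mechanism produces the yearly peaks in Fig. 8 when applied at lags of around 365 days.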

Figure 9: Local Moran's I for total burglary occurrences from 2011 to 2014, with associated p-values. Local Moran's I values seem strong in some areas, but only three areas are statistically significant.

The spatial correlation is less obvious: some areas show high spatial correlation (Fig. 9), but only three spots show statistical significance. The significant areas include an affluent area near the western border containing neighbourhoods like Cleveland Park and McClean Gardens, while the significant area in the far eastern corner of the city includes a notorious chunk of neighbourhoods which have experienced high levels of violence in recent years. The significant area in the middle includes the quickly gentrifying neighbourhoods of Eckington and H Street NE. In this experiment we consider burglary offences aggregated by season, with winter starting in December and each season lasting three months. The experiment could have been run with counts aggregated by month (predicting by month), but once subsetted by type of offence and by census tract the data would have been too sporadic, and we would have ended up with counts showing too much variation and little statistical significance.


5 Random Forest - by Sarah Hank

Overview

Figure 10: Maps of real classification vs. Random Forest prediction. The model classifies high crime areas as low crime areas more often than it makes the opposite error.

Random Forest, first proposed by Breiman (2001), is based on the classification tree method. Many classification trees are grown, and each input vector is sent down through every tree, receiving a classification; in this way each tree "votes" for a classification, and the classification voted for by the most trees wins. The parameters available for tuning are the number of trees (n) to be grown and the number of variables (m) used to split each node (ntree and mtry respectively in the randomForest package in R). At each node of a tree, m variables are chosen at random from the input variables, and the best split among them is taken at that node. The parameter m must of course be less than the total number of variables. Two factors determine the error rate of a random forest: the correlation between trees (more correlation means higher error) and the strength of each individual tree in the forest (stronger classifiers mean lower error) (Breiman 2001). While the number of trees grown does not significantly impact the error rate, decreasing the number of split variables lowers both the correlation and the strength (Breiman 2001). Thus, m is the only parameter with a significant effect on the error rate of Random Forest. Unlike other tree classification methods, there is no pruning.

Random Forest has several advantages over other classification algorithms, including the fact that overfitting is not an issue (Breiman & Cutler 2015). It can handle thousands of input variables, runs relatively quickly, and has a high rate of accuracy relative to other methods. Of special note is that it can provide estimates of which variables are most important in determining the classification output.
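The voting scheme described above can be sketched as follows; the stump "trees" and their thresholds are invented for illustration and are not part of the randomForest package.

```python
from collections import Counter

def forest_predict(trees, x):
    """Each tree casts a vote on x; the majority class wins."""
    votes = [tree(x) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# three hypothetical decision stumps voting on one census-tract feature vector
trees = [
    lambda x: 1 if x["income"] < 40000 else -1,       # low income -> high crime
    lambda x: 1 if x["unemployment"] > 0.10 else -1,  # high unemployment -> high crime
    lambda x: -1,                                     # a stump that always votes low
]
label = forest_predict(trees, {"income": 35000, "unemployment": 0.12})  # two votes for 1
```

In a real forest each tree is grown on a bootstrap sample with m randomly chosen split candidates per node, which is what decorrelates the votes.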

Experimental Setup

Figure 11: Classification errors with Random Forest

One advantage of Random Forest is that it can accept categorical variable inputs; however, in order to have comparable results, we used the same initial data inputs for both SVM and Random Forest. As discussed in the SVM section, the seasonal variable was converted from a categorical value (SEASON = "Summer") to a binary numerical value (SUMMER = 1). We chose to focus on the crime of burglary, and so aggregated the count of incidents of burglary by year, season, and census tract. We then calculated a label for each observation, considering any value below the median of the count a "low" crime area and anything above a "high" crime area, labelled -1 and 1 respectively. This formed our initial input data.
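The labelling step can be sketched as follows (an illustrative outline; here, counts equal to the median are assigned to the low class, an assumption the report leaves unstated):

```python
from statistics import median

def label_counts(counts):
    """Label each seasonal burglary count -1 (low) or 1 (high) vs. the median."""
    m = median(counts)
    # assumption: ties at the median fall into the "low" class
    return [1 if c > m else -1 for c in counts]

labels = label_counts([3, 7, 2, 9, 5])  # median is 5
```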


The initial training data covered the years 2011, 2012, and 2013, with the test data being summer 2014. We chose this time span because the demographic data from the ACS was last updated for 2014, and thus we could not test 2015. We tested against only one season at a time since the crime count was aggregated by season, so it makes sense for our output labels to apply to a single season. This is also how the tool would be used to forecast police resources.

Analysis and Results

Figure 12: Variable Importance Plot table

I trained the model on this data using various combinations of parameters. I varied the number of trees (ntree) between 200, 500, and 1000, and the number of split variables (mtry) between 2 and 4. I ran each configuration ten times, predicted values for summer 2014, and then averaged the error. The full set of resulting errors for each configuration can be viewed in Fig. 14. The best performing configuration of this set was ntree = 1000, mtry = 2, with an error rate of 31.8%. This error rate, while the best, was not much better than other configurations using mtry = 2; increasing mtry consistently increased the error rate. This outcome is consistent with the idea that the number of trees does not greatly affect the error rate, whereas the model is sensitive to the number of split variables (Breiman 2001). Next, I narrowed the training data to only the first year, 2011, and reran the model. The benefit of using only one year of training would be decreased run time and general simplification. The optimal configuration for this set was ntree = 500, mtry = 2, with an error of 31.8%, the same as when using all years of data.
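The parameter sweep can be outlined as below; `evaluate` is a hypothetical stand-in for training one Random Forest configuration and returning its test error, not the actual randomForest call.

```python
from statistics import mean

def sweep(evaluate, ntrees=(200, 500, 1000), mtrys=(2, 4), repeats=10):
    """Average the test error of each (ntree, mtry) configuration over repeats."""
    results = {}
    for ntree in ntrees:
        for mtry in mtrys:
            errors = [evaluate(ntree, mtry) for _ in range(repeats)]
            results[(ntree, mtry)] = mean(errors)
    best = min(results, key=results.get)
    return best, results

# stand-in evaluator mimicking the observed pattern: mtry = 2 beats mtry = 4
best, table = sweep(lambda ntree, mtry: 0.32 if mtry == 2 else 0.36)
```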

During the first set of configurations, I recorded the Variable Importance Plot values in a table and averaged the values for mean decrease in node impurity (Fig. 12). This tells us which variables contribute most to the model, a capability unique to Random Forest. The top four variables are Household Type: Female householder, no husband present, family - With own children under 18 years; Household Type: Female householder, no husband present, family; Educational Attainment: High school graduate or equivalent; and Age: 15 to 19 years old (Fig. 13).

Figure 13: Variable Importance Plot chart

To see how accurately a simplified model would perform, I trained the model on these four top variables for the years 2011 - 2013, predicted for summer 2014, and averaged the errors for the various configurations. The minimum error increased noticeably (to 37.1%), but considering that 87% of the variables were removed, this is actually impressive. Given that one of the benefits of machine learning is its ability to handle a large number of input variables, and that the processing time was not vastly improved, it seems reasonable to keep all of the original variables.

Fig. 10 and Fig. 11 show the results of running the classification for burglary with the optimal parameters ntree = 200, mtry = 2, training on the years 2011 - 2013 and testing on the summer of 2014. The resulting error was 0.327, or 32.7%, on this trial. Random Forest predicts with a similar error rate (32.7%) in both summer and spring, but performs best in winter with an error of 29.6%, and worst in autumn (see the Random Forest result plots in Section 8). In all seasons, the method tends to falsely predict high crime areas as low crime areas, at an average rate of 23.3%. This effect can be seen in Fig. 24, which shows the rate at which the method misclassifies each type.


Figure 14: Table of error rates for the tested Random Forest parameter configurations. Highlighted rows indicate the highest performing configuration for each set of training data.


6 SVM - by Duccio Aiazzi

The algorithm

Support Vector Machines (SVM) are a set of supervised machine learning models used for classification and regression analysis. Given a set of training data with each observation assigned to a binary category, SVM builds a model which can take a new set of data and return labels for it. The basic version of SVM is a non-probabilistic binary linear classifier: data is labelled with one of two categories and the classification is obtained by linear separation. The input is represented in the feature space as a set of points to be divided by a margin which is as wide as possible. New examples are represented in the same space and labels are predicted based on which side of the margin they fall. The maximum gap is found by locating the separating hyperplane and maximising the distance of the plane from the points that define the margin (the support vectors). The hyperplane is a subspace of one dimension less than the ambient space; for example, the hyperplane of a two-dimensional space such as the Cartesian plane is a line. When the margin is wide, confidence in the model is high; when the margin is very small, confidence is low. In this case, and when the data is non-separable, it is possible to introduce soft margins (Cortes & Vapnik 1995), which allow a trade-off between the complexity of the model and the error. SVM can solve non-linear classification by mapping the input space into a higher- or infinite-dimensional space using the kernel trick (Boser et al. 1992). Kernels are weighting functions computed from similarity (less different objects receive higher weights), first proposed by Aizerman et al. (1964). They are used in statistics (e.g. Kernel Density Estimation, KDE, for estimating a probability density function) and in spatial and temporal analysis (e.g. spatial and spatio-temporal KDE for modelling spatial and temporal decay). SVM can also be used for non-binary classification using algorithms that reduce multi-class tasks to several binary problems.
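As a concrete illustration of the kernel idea (a minimal sketch, not kernlab's implementation), the Gaussian RBF kernel used in the next subsection scores similar points near 1 and distant points near 0:

```python
from math import exp

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return exp(-sq_dist / (2 * sigma ** 2))

same = rbf_kernel([1.0, 2.0], [1.0, 2.0])   # identical points score 1.0
far = rbf_kernel([1.0, 2.0], [10.0, 20.0])  # distant points score near 0
```

A larger sigma slows the distance decay, which is exactly the role the σ parameter plays in the grid search below.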


Figure 15: Prediction from the optimised model for summer 2014. Although not very clear, it appears that the errors lie at the borders between clusters of high crime level census tracts.


Analysis and Results

For the purpose of this essay, I use SVM classification with the Gaussian Radial Basis Function

k(x, x′) = exp(−||x − x′||² / (2σ²))

as the base kernel, although I also test the results with a polynomial kernel. According to StatSoft (2015), there are two classification methods, C-SVC and nu-SVC, which differ in the error function that they minimise. The C-SVC method is defined by the following parameters: C is the cost parameter, which controls how much the error is penalised (hence the trade-off between complexity and prediction accuracy), and σ is the rate of distance decay of the Gaussian kernel (high values correspond to slow decay). To optimise the two parameters I use the caret package to run k-fold cross-validation with k set to 10. The chosen set of parameters is used to train the model, which is then tested on the data from summer 2014.

The data manipulation consisted of merging multiple years of the ACS dataset into one data frame and selecting the chosen variables (see Fig. 8, Section 8). The crime dataset was subset by selecting only the rows containing burglary incidents; the incidents were then categorised by the season they fell in, and the data aggregated to obtain the count of incidents by season for every year and every census tract. Because SVM does not take categorical variables, the season variable was split into a four-dimensional binary array. These two datasets were joined by census tract, so that for each year/season/census tract we have the count of incidents, the season, and the census data for that tract. The data was finally labelled -1 or 1 depending on whether the count was below or above the median of the training years, and split into a training set (years 2011, 2012, 2013) and a test set (year 2014).
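The 10-fold splitting behind the cross-validation can be sketched generically as follows (an illustration of k-fold index generation, not the caret internals):

```python
def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation over n rows."""
    # distribute any remainder across the first n % k folds
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(kfold_indices(20, k=10))  # 10 folds, each holding out 2 rows
```

Each candidate (σ, C) pair is scored by averaging its accuracy over the k held-out folds, and the pair with the best average is retrained on the full training set.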

As a first test, I am interested in determining whether there is a difference in using one or more years as the training set. The accuracy of the model is highest when training on 2011 only (Table 1), while using all the years performs slightly worse. Given that 2012 and 2013 also perform slightly worse than 2011, the good performance of the latter could be because, with respect to the variables used, 2011 happens to be a similar year to 2014; and using only one year of data would increase the risk of overfitting. For the rest of the tests, I use all three years as the training set.

Training Years   Accuracy   Accuracy SD
2011             0.736      0.029
2012             0.716      0.032
2013             0.670      0.020
All              0.680      0.032

Table 1: Accuracy by number of years used as training set

σ Grid                   C Grid          Optimal σ   Optimal C   Accuracy   95% Conf Int
0.001, 0.01, 0.1         10, 100, 1000   0.001       100         0.721      ±0.042
0.0005, 0.001, 0.0015    50, 100, 150    0.0015      100         0.722      ±0.050
0.0012, 0.0015, 0.0018   80, 100, 120    0.0012      100         0.7202     ±0.056
0.0011, 0.0012, 0.0013   95, 100, 105    0.0013      105         0.725      ±0.057

Table 2: Parameter optimisation

A first grid search with parameters σ = 0.001, 0.01, 0.1 and C = 10, 100, 1000 sug-

gests that the value of σ = 0.01 and C = 10 gives the best model performance with an

accuracy of 0.71. I can refine the search grid by using values around the best perform-

ers.

Table 2 shows the results of a sequential refinement of the parameters (see Figure 16): σ = 0.0013 and C = 105 give the best performance, but the resulting accuracy is not statistically different from the other scenarios. Using a higher k for the cross-validation would reduce the standard deviation and increase the precision of the grid search, but since the performances are all quite similar it would bring little improvement. A polynomial kernel gives rather similar results in terms of accuracy, with a best accuracy of 0.71 achieved with degree = 2 and scale = 0.05. In terms of time, all the configurations complete the training in between 25 and 35 seconds, with the exception of the polynomial kernel optimisation, which takes over an hour without any improvement in prediction accuracy.

Figure 16: C plotted against accuracy by σ values. In this case, for any σ, the best C value appears to be 105. Based on the previous optimisation, the optimal value appears to be between 105 and 120.
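The coarse-then-fine search described above can be sketched as follows, assuming a scikit-learn equivalent of the setup (not the authors' implementation). Note that the σ of the Gaussian kernel corresponds to scikit-learn's `gamma` under the parametrisation k(x, x') = exp(-γ‖x - x'‖²); the data here are synthetic.

```python
# Sketch of sequential grid-search refinement: run a coarse grid, then
# narrow the grid around the best performers, as in Table 2.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

coarse = {"gamma": [0.001, 0.01, 0.1], "C": [10, 100, 1000]}
search = GridSearchCV(SVC(kernel="rbf"), coarse, cv=10).fit(X, y)
best_gamma = search.best_params_["gamma"]
best_C = search.best_params_["C"]

# Refine around the coarse optimum; repeat until the grid stabilises.
fine = {"gamma": [best_gamma / 2, best_gamma, best_gamma * 2],
        "C": [best_C / 2, best_C, best_C * 2]}
search = GridSearchCV(SVC(kernel="rbf"), fine, cv=10).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```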

Figure 17: Number of support vectors

An attempt at simplifying the model was made by narrowing down the variables to the four most important ones selected with RF in Section 5. This slightly reduces the accuracy, to about 66%, but it also reduces the training time to about 10 s in the case of the Gaussian kernel and drops it drastically in the case of the polynomial one, down to about 40 s.
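The variable-reduction step can be sketched as below. This is an assumed scikit-learn stand-in for the procedure (rank variables by Random Forest importance, keep the four best, retrain the SVM), run here on synthetic data rather than the census features.

```python
# Sketch: select the four most important variables with Random Forest,
# then compare SVM cross-validated accuracy on the full and reduced sets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, n_informative=4,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top4 = np.argsort(rf.feature_importances_)[-4:]  # indices of the 4 best variables

full = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
reduced = cross_val_score(SVC(kernel="rbf"), X[:, top4], y, cv=5).mean()
print(f"all features: {full:.3f}, top 4: {reduced:.3f}")
```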

Figure 15 shows the results of the model when predicting the rate of crime in summer 2014, compared to the real data. In this example the model predicts the crime level correctly in about 70% of the census tracts, i.e. with an error rate of about 30%. Although this could be considered a decent prediction, the problem is that, as is clear in Figure 17, there is a high number of support vectors. This is a sign of overfitting, which is generally addressed with parameter optimisation as above or with more data points. In this case the problem (and the limitation of this whole exercise) might lie in the data itself and in the small variation of the census data from one year to another. In Section 8, the results of the prediction for the other three seasons are presented. The best results are obtained in winter, with an error rate of 29% (Fig. 25), and the worst in autumn, with 36% (Fig. 29). An interesting aspect that would require further investigation is that there seems to be a correlation (see the related barplots) between the number of census tracts labelled above average and the error rates, suggesting that the model fails most where crime is high.
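The overfitting symptom mentioned above, a large fraction of training points retained as support vectors, can be inspected directly. A minimal sketch, assuming scikit-learn's `SVC` as a stand-in for the fitted model and synthetic data in place of the census features:

```python
# Sketch: fit an RBF SVM and report what fraction of the training points
# are support vectors; a fraction close to 1 is one sign of overfitting.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
clf = SVC(kernel="rbf", gamma=0.001, C=100).fit(X, y)

n_sv = clf.n_support_.sum()  # support vectors per class, summed
frac = n_sv / len(X)
print(f"{n_sv} support vectors ({frac:.0%} of training points)")
```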


7 Comparison

Overview

Figure 18: Error comparison between RF and SVM

In grey, the census tracts correctly labelled; in black, the errors.

It is generally agreed that Random Forest and SVM are amongst the best performing classifiers: Fernandez-Delgado et al. (2014) found that they consistently perform better than all the other algorithms in a long list and that, although Random Forest is ranked first and SVM second, there is no statistically significant difference between the two. In the present experiment, SVM performed better than Random Forest by only a small margin (Fig. 18), about 2-3%. This difference is quite small and might not be statistically significant. In Figure 19, we can see that most of the time the mislabelled census tracts overlap: the two algorithms perform almost the same, with the same behaviour. Amongst the qualities of Random Forest, the simplicity of implementation and the relatively small number of parameters to set are generally mentioned as advantages.

Figure 19: Overlap of the results.

Furthermore, Random Forest can take categorical variables and is usually a fast performer. SVM performs well when there is a high number of dimensions (it is used, for example, in text classification) and is very adaptable because it can use a vast number of kernel functions. On the other hand, the many parameters and the different functions to choose from make the implementation more complex. SVM also requires categorical variables to be split into multiple binary variables, which can be daunting when there are more than two or three categories. On the plus side, SVM can also be used in regression analysis. For the purpose of this experiment, the running time turned out not to be an issue and the training time was almost always within a minute. Both models have to be run several times: RF to average the results, and SVM to calibrate the parameters by means of a grid search. With SVM, when a polynomial kernel is used, the training time grows from a few minutes to hours without much improvement in the results. Overall, in the case considered here, if the accuracy of the results is the priority, SVM is the best choice. Where simplicity and speed are the priorities, Random Forest would probably be a better solution.
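The trade-off described above can be made concrete with a side-by-side run of the two classifiers, recording accuracy and wall time. This is a sketch on synthetic data with assumed scikit-learn stand-ins, not a reproduction of the experiment:

```python
# Sketch: fit RF and SVM on the same split and compare accuracy and
# training time, as in the comparison discussed above.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, model in [("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
                    ("SVM", SVC(kernel="rbf"))]:
    t0 = time.perf_counter()
    acc = model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: accuracy {acc:.3f}, trained in {time.perf_counter() - t0:.2f}s")
```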


Limitations and further studies

The experiment was initially designed as a tool to predict crime in the short term using variables that can be monitored on a daily or hourly basis. Based on the literature review and on data analysis suggesting seasonality, temperature and precipitation were considered as possible candidates. It turned out that there is little or no direct connection between crime and weather in our datasets, so we redesigned the experiment as a tool to predict crime levels in the long term, for resource allocation from one year to the next. We decided to aggregate the data by season, but it would be interesting to predict by month. As a first step to improve the model, the training data should be extended to cover more years than the three used here. This would also reduce the risk of overfitting, which we suspect is at work in our case. It would also be interesting to train the model on other American cities, possibly assigning categories to census tracts describing the primary and secondary activity of the area and the presence of other features such as museums, harbours, monuments, etc. The ACS survey contains a high number of variables; further exploration of these variables would likely increase the accuracy. The level of unemployment is monitored on a monthly basis by the US Bureau of Labor Statistics and could therefore be included at a finer scale, say by season rather than yearly. Another interesting addition would be the level of police forces on the ground in past years, to capture the interaction between crime levels and police patrolling. This would make the tool more useful, as it would introduce a feedback loop to the resource allocation and could be used to assess policies. For all these improvements, Random Forest would likely be our choice because of its handling of categorical variables, its ease of implementation and the possibility to determine the relative importance of the variables in the classification process.


8 Appendix

Random Forest result plots

Figure 20:

Figure 21:


Figure 22:

Figure 23:


Figure 24:


SVM result plots

Figure 25:

Figure 26:


Figure 27:

Figure 28:


Figure 29:

Figure 30:


ACS variables

ACS Table DP02

2011-2012 Code   2013-2014 Code   Description
HC01_VC04        HC01_VC04        Estimate; HOUSEHOLDS BY TYPE - Family households (families)
HC01_VC06        HC01_VC05        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - With own children under 18 years
HC01_VC07        HC01_VC06        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - Married-couple family
HC01_VC08        HC01_VC07        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - Married-couple family - With own children under 18 years
HC01_VC09        HC01_VC08        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - Male householder, no wife present, family
HC01_VC10        HC01_VC09        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - Male householder, no wife present, family - With own children under 18 years
HC01_VC11        HC01_VC10        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - Female householder, no husband present, family
HC01_VC12        HC01_VC11        Estimate; HOUSEHOLDS BY TYPE - Family households (families) - Female householder, no husband present, family - With own children under 18 years
HC01_VC75        HC01_VC76        Estimate; SCHOOL ENROLLMENT - Population 3 years and over enrolled in school
HC01_VC85        HC01_VC86        Estimate; EDUCATIONAL ATTAINMENT - Less than 9th grade
HC01_VC86        HC01_VC87        Estimate; EDUCATIONAL ATTAINMENT - 9th to 12th grade, no diploma
HC01_VC87        HC01_VC88        Estimate; EDUCATIONAL ATTAINMENT - High school graduate (includes equivalency)
HC01_VC88        HC01_VC89        Estimate; EDUCATIONAL ATTAINMENT - Some college, no degree
HC01_VC89        HC01_VC90        Estimate; EDUCATIONAL ATTAINMENT - Associate's degree
HC01_VC90        HC01_VC91        Estimate; EDUCATIONAL ATTAINMENT - Bachelor's degree
HC01_VC91        HC01_VC92        Estimate; EDUCATIONAL ATTAINMENT - Graduate or professional degree

ACS Table DP03

2011-2012 Code   2013-2014 Code   Description
HC03_VC13        HC03_VC12        Estimate; EMPLOYMENT STATUS - Percent Unemployed
HC01_VC75        HC01_VC75        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - Less than $10,000
HC01_VC76        HC01_VC76        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $10,000 to $14,999
HC01_VC77        HC01_VC77        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $15,000 to $24,999
HC01_VC78        HC01_VC78        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $25,000 to $34,999
HC01_VC79        HC01_VC79        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $35,000 to $49,999
HC01_VC80        HC01_VC80        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $50,000 to $74,999
HC01_VC81        HC01_VC81        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $75,000 to $99,999
HC01_VC82        HC01_VC82        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $100,000 to $149,999
HC01_VC83        HC01_VC83        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $150,000 to $199,999
HC01_VC84        HC01_VC84        Estimate; INCOME AND BENEFITS (IN 2012 INFLATION-ADJUSTED DOLLARS) - $200,000 or more
HC03_VC166       HC03_VC171       Estimate; PERCENTAGE OF FAMILIES AND PEOPLE WHOSE INCOME IN THE PAST 12 MONTHS IS BELOW THE POVERTY LEVEL


ACS Table DP04

2011-2012 Code   2013-2014 Code   Description
HC01_VC05        HC01_VC05        Estimate; HOUSING OCCUPANCY - Total housing units - Vacant housing units
HC01_VC64        HC01_VC65        Estimate; HOUSING TENURE - Occupied housing units - Renter-occupied
HC01_VC117       HC01_VC119       Estimate; VALUE - Owner-occupied units - Less than $50,000
HC01_VC118       HC01_VC120       Estimate; VALUE - Owner-occupied units - $50,000 to $99,999
HC01_VC119       HC01_VC121       Estimate; VALUE - Owner-occupied units - $100,000 to $149,999
HC01_VC120       HC01_VC122       Estimate; VALUE - Owner-occupied units - $150,000 to $199,999
HC01_VC121       HC01_VC123       Estimate; VALUE - Owner-occupied units - $200,000 to $299,999
HC01_VC122       HC01_VC124       Estimate; VALUE - Owner-occupied units - $300,000 to $499,999
HC01_VC123       HC01_VC125       Estimate; VALUE - Owner-occupied units - $500,000 to $999,999
HC01_VC124       HC01_VC126       Estimate; VALUE - Owner-occupied units - $1,000,000 or more
HC01_VC192       HC01_VC197       Estimate; GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME - Occupied units paying rent - Less than 15.0 percent
HC01_VC193       HC01_VC198       Estimate; GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME - Occupied units paying rent - 15.0 to 19.9 percent
HC01_VC194       HC01_VC199       Estimate; GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME - Occupied units paying rent - 20.0 to 24.9 percent
HC01_VC195       HC01_VC200       Estimate; GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME - Occupied units paying rent - 25.0 to 29.9 percent
HC01_VC196       HC01_VC201       Estimate; GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME - Occupied units paying rent - 30.0 to 34.9 percent
HC01_VC197       HC01_VC202       Estimate; GROSS RENT AS A PERCENTAGE OF HOUSEHOLD INCOME - Occupied units paying rent - 35.0 percent or more

ACS Table DP05

2011-2012 Code   2013-2014 Code   Description
HC01_VC07        HC01_VC08        Estimate; SEX AND AGE - Under 5 years
HC01_VC08        HC01_VC09        Estimate; SEX AND AGE - 5 to 9 years
HC01_VC09        HC01_VC10        Estimate; SEX AND AGE - 10 to 14 years
HC01_VC10        HC01_VC11        Estimate; SEX AND AGE - 15 to 19 years
HC01_VC11        HC01_VC12        Estimate; SEX AND AGE - 20 to 24 years
HC01_VC12        HC01_VC13        Estimate; SEX AND AGE - 25 to 34 years
HC01_VC13        HC01_VC14        Estimate; SEX AND AGE - 35 to 44 years
HC01_VC14        HC01_VC15        Estimate; SEX AND AGE - 45 to 54 years
HC01_VC15        HC01_VC16        Estimate; SEX AND AGE - 55 to 59 years
HC01_VC16        HC01_VC17        Estimate; SEX AND AGE - 60 to 64 years
HC01_VC17        HC01_VC18        Estimate; SEX AND AGE - 65 to 74 years
HC01_VC18        HC01_VC19        Estimate; SEX AND AGE - 75 to 84 years
HC01_VC19        HC01_VC20        Estimate; SEX AND AGE - 85 years and over


References

Aizerman, A., Braverman, E. M. & Rozoner, L. (1964), 'Theoretical foundations of the potential function method in pattern recognition learning', Automation and Remote Control 25, 821–837.

Anselin, L., Cohen, J., Cook, D., Gorr, W. & Tita, G. (2000), 'Spatial analyses of crime', Criminal Justice 4(2), 213–262.

Bogomolov, A., Lepri, B., Staiano, J., Oliver, N., Pianesi, F. & Pentland, A. (2014), Once upon a crime: towards crime prediction from demographics and mobile data, in 'Proceedings of the 16th International Conference on Multimodal Interaction', ACM, pp. 427–434.

Boser, B. E., Guyon, I. M. & Vapnik, V. N. (1992), A training algorithm for optimal margin classifiers, in 'Proceedings of the Fifth Annual Workshop on Computational Learning Theory', ACM, pp. 144–152.

Breiman, L. (2001), 'Random forests', Machine Learning 45(1), 5–32.

Breiman, L. & Cutler, A. (2015).
URL: https://www.stat.berkeley.edu/~breiman/RandomForests/cc_papers.htm

Breitenbach, M., Dieterich, W., Brennan, T. & Fan, A. (2009), 'Creating risk-scores in very imbalanced datasets: predicting extremely violent crime among criminal offenders following release from prison'.

Cohn, E. G. (1990), 'Weather and crime', British Journal of Criminology 30(1), 51–64.

Cortes, C. & Vapnik, V. (1995), 'Support-vector networks', Machine Learning 20(3), 273–297.

Ehrlich, I. (1975), On the relation between education and crime, in 'Education, Income, and Human Behavior', NBER, pp. 313–338.

Ellis, L., Beaver, K. M. & Wright, J. (2009), Handbook of Crime Correlates, Academic Press.

Fernandez-Delgado, M., Cernadas, E., Barro, S. & Amorim, D. (2014), 'Do we need hundreds of classifiers to solve real world classification problems?', The Journal of Machine Learning Research 15(1), 3133–3181.

Kianmehr, K. & Alhajj, R. (2008), 'Effectiveness of support vector machine for crime hot-spots prediction', Applied Artificial Intelligence 22(5), 433–458.

Metropolitan Police Department (2008).
URL: http://crimemap.dc.gov/

Olligschlaeger, A. & Gorr, W. (1997), 'Spatio-temporal forecasting of crime'.

Patterson, E. B. (1991), 'Poverty, income inequality, and community crime rates', Criminology 29(4), 755–776.

StatSoft (2015), 'Support Vector Machine'.
URL: http://www.statsoft.com/textbook/support-vector-machines

Vapnik, V. N. & Chervonenkis, A. J. (1974), 'Theory of pattern recognition'.
