COLLABORATIVE FILTERING USING ORTHOGONAL NONNEGATIVE MATRIX TRI-FACTORIZATION
Presenter: Meng-Lun Wu
Authors: Gang Chen, Fei Wang and Changshui Zhang
Source: Information Processing and Management (2009), pp. 368-379
OUTLINE
Introduction
Related Work
Orthogonal Nonnegative Matrix Tri-Factorization
Framework
Experiments
Conclusion
INTRODUCTION
Collaborative filtering can predict a test user’s rating for new items based on similar users.
Collaborative filtering can be categorized into:
Memory-based (similarity): user-based and item-based
Model-based: establish a model using training examples
INTRODUCTION (CONT.)
This paper applies orthogonal nonnegative matrix tri-factorization (ONMTF) to bridge the two kinds of collaborative filtering.
ONMTF is applied to simultaneously co-cluster the rows and columns of the rating matrix, and to attain individual predictions for an unknown test rating.
The approach has the following advantages: it alleviates the sparsity problem, eases the scalability problem, and fuses the prediction results.
RELATED WORK
Researchers have proposed hybrid approaches that combine the memory-based and model-based approaches. Xue et al. (2005) address sparsity and scalability by using clusters to smooth ratings.
However, Xue et al. consider only the user-based approach; this paper extends the idea to integrate the model-based, user-based, and item-based approaches.
RELATED WORK (CONT.)
Matrix decomposition can be used to solve the co-clustering problem. Ding et al. (2005) proposed co-clustering based on nonnegative matrix factorization (NMF); in 2006, they proposed ONMTF.
Long et al. (2005) provided co-clustering by block value decomposition.
ORTHOGONAL NONNEGATIVE MATRIX TRI-FACTORIZATION
NMF was first brought into the machine learning and data mining fields by Lee et al. (2001).
Ding et al. (2006) proved the equivalence between NMF and K-means, and extended NMF to ONMTF.
The idea is to approximate the original matrix X by the product of three factor matrices; the optimization problem is
$$\min_{U \ge 0,\, S \ge 0,\, V \ge 0} \|X - USV^T\|^2 \quad \text{s.t.}\ U^TU = I,\ V^TV = I$$

where $X \in \mathbb{R}^{p \times n}$, $U \in \mathbb{R}^{p \times k}$, $S \in \mathbb{R}^{k \times l}$, and $V \in \mathbb{R}^{n \times l}$.
ORTHOGONAL NONNEGATIVE MATRIX TRI-FACTORIZATION (CONT.)
The optimization problem can be solved using the following update rules.
After co-clustering, we can obtain the user-cluster centroids $SV^T$ and the item-cluster centroids $US$.
$$U_{ik} \leftarrow U_{ik} \sqrt{\frac{(XVS^T)_{ik}}{(UU^T XVS^T)_{ik}}}, \qquad
S_{ik} \leftarrow S_{ik} \sqrt{\frac{(U^T X V)_{ik}}{(U^T U S V^T V)_{ik}}}, \qquad
V_{jk} \leftarrow V_{jk} \sqrt{\frac{(X^T U S)_{jk}}{(VV^T X^T U S)_{jk}}}$$
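These multiplicative updates (following Ding et al., 2006) admit a direct NumPy implementation. Below is a minimal sketch, assuming a dense rating matrix X and random nonnegative initialization; the function name and defaults are illustrative, not the paper's.

```python
import numpy as np

def onmtf(X, k, l, n_iter=100, eps=1e-9):
    """Orthogonal nonnegative matrix tri-factorization: X (p x n) ~ U S V^T,
    with U (p x k), S (k x l), V (n x l). Multiplicative updates in the
    style of Ding et al. (2006); eps guards against division by zero."""
    p, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((p, k))
    S = rng.random((k, l))
    V = rng.random((n, l))
    for _ in range(n_iter):
        U *= np.sqrt((X @ V @ S.T) / (U @ U.T @ X @ V @ S.T + eps))
        S *= np.sqrt((U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps))
        V *= np.sqrt((X.T @ U @ S) / (V @ V.T @ X.T @ U @ S + eps))
    return U, S, V
```

After convergence, the rows of $SV^T$ give the user-cluster centroids and the columns of $US$ the item-cluster centroids, as noted above.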
FRAMEWORK
Notations:
$X = [u_1, \ldots, u_p]^T$, where $u_j = (x_{j1}, \ldots, x_{jn})^T$, $j \in \{1, \ldots, p\}$ (users as rows)
$X = [i_1, \ldots, i_n]$, where $i_m = (x_{1m}, \ldots, x_{pm})^T$, $m \in \{1, \ldots, n\}$ (items as columns)
MEMORY-BASED APPROACHES
User neighbor selection: compute the similarities between a user and all the user-cluster centroids $SV^T$, then select the top K user clusters as the neighbor set $\{u_h\}$. Item neighbor selection is similar.
The cosine similarity between the $j_1$-th and the $j_2$-th users:

$$sim(u_{j_1}, u_{j_2}) = \frac{\sum_{m=1}^{n} x_{j_1 m}\, x_{j_2 m}}{\sqrt{\sum_{m=1}^{n} x_{j_1 m}^2}\, \sqrt{\sum_{m=1}^{n} x_{j_2 m}^2}}$$

Given a user-item pair $\langle u_j, i_m \rangle$, where $u_h \in \{$the K users most similar to $u_j\}$, the user-based prediction is

$$\tilde{x}^{u}_{jm} = \frac{\sum_{u_h} sim(u_j, u_h)\, x_{hm}}{\sum_{u_h} sim(u_j, u_h)}$$
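A minimal sketch of this user-side computation, assuming the centroids are the rows of $SV^T$ from the factorization above; the function names and the direct top-K selection are mine, not the paper's.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-9):
    """Cosine similarity between two rating vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def user_based_predict(x_j, centroids, K, m):
    """Predict user j's rating on item m as a similarity-weighted average
    over the K most similar user-cluster centroids (rows of S V^T)."""
    sims = np.array([cosine_sim(x_j, c) for c in centroids])
    top = np.argsort(sims)[-K:]  # indices of the K largest similarities
    num = sum(sims[h] * centroids[h, m] for h in top)
    den = sum(sims[h] for h in top)
    return num / den if den > 0 else 0.0
```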
MEMORY-BASED APPROACHES (CONT.)
The adjusted-cosine similarity between the $m_1$-th and $m_2$-th items, where T is the set of users who rated both $m_1$ and $m_2$:

$$sim(i_{m_1}, i_{m_2}) = \frac{\sum_{t \in T} (x_{t m_1} - \bar{x}_{u_t})(x_{t m_2} - \bar{x}_{u_t})}{\sqrt{\sum_{t \in T} (x_{t m_1} - \bar{x}_{u_t})^2}\, \sqrt{\sum_{t \in T} (x_{t m_2} - \bar{x}_{u_t})^2}}$$

Given a user-item pair $\langle u_j, i_m \rangle$, where $i_h \in \{$the K items most similar to $i_m\}$, the item-based prediction is

$$\tilde{x}^{i}_{jm} = \frac{\sum_{i_h} sim(i_m, i_h)\, x_{jh}}{\sum_{i_h} sim(i_m, i_h)}$$

The final result linearly combines the three different types of predictions:

$$\tilde{x}_{jm} = \lambda\, \tilde{x}^{n}_{jm} + (1-\lambda)\,\delta\, \tilde{x}^{u}_{jm} + (1-\lambda)(1-\delta)\, \tilde{x}^{i}_{jm}$$

where $\tilde{x}^{n}_{jm}$, $\tilde{x}^{u}_{jm}$, and $\tilde{x}^{i}_{jm}$ are the ONMTF-based, user-based, and item-based predictions, and $\lambda, \delta \in [0, 1]$ are combination coefficients.
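A sketch of the item-side similarity and the final fusion; the convention that a rating of 0 in X means "unrated", along with the function names, is an assumption for illustration.

```python
import numpy as np

def adjusted_cosine(X, m1, m2, eps=1e-9):
    """Adjusted-cosine similarity between items m1 and m2, computed over
    T = the users who rated both items; 0 in X means 'unrated'."""
    T = (X[:, m1] > 0) & (X[:, m2] > 0)
    if not T.any():
        return 0.0
    # mean rating of each user over the items they actually rated
    user_mean = X.sum(axis=1) / np.maximum((X > 0).sum(axis=1), 1)
    d1 = X[T, m1] - user_mean[T]
    d2 = X[T, m2] - user_mean[T]
    return float(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + eps)

def fuse(x_n, x_u, x_i, lam=0.6, delta=0.3):
    """Linear fusion of the ONMTF (x_n), user-based (x_u), and
    item-based (x_i) predictions with coefficients lambda and delta."""
    return lam * x_n + (1 - lam) * delta * x_u + (1 - lam) * (1 - delta) * x_i
```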
ALGORITHM
1. The user-item matrix X is factorized as USVT by using ONMTF.
2. Calculate the similarities between the test user/item and user/item-cluster centroids.
3. Sort the similarities and select the most similar C user/item clusters as the test user/item neighbor candidate set.
4. Identify the K most similar neighbors of the test user/item by searching the user/item candidate set.
5. Predict the unknown ratings by using user based and item based approaches.
6. Linearly combine the three different predictions. A schematic driver follows these steps below.
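Putting the six steps together, a schematic driver built from the earlier sketch functions. The parameter defaults are illustrative only, steps 3-4 (C candidate clusters, then K neighbors) are collapsed into a direct top-K search, and plain cosine is reused on the item side where the slides use adjusted cosine, all to keep the sketch short.

```python
def predict_rating(X, j, m, k=20, l=20, K=5, lam=0.6, delta=0.3):
    """Schematic end-to-end prediction of the unknown rating (j, m),
    chaining the sketch functions defined above."""
    U, S, V = onmtf(X, k, l)              # step 1: X ~ U S V^T
    x_n = (U @ S @ V.T)[j, m]             # model-based (ONMTF) prediction
    user_centroids = S @ V.T              # k user-cluster centroids (rows)
    x_u = user_based_predict(X[j], user_centroids, K, m)
    item_centroids = (U @ S).T            # l item-cluster centroids (rows)
    x_i = user_based_predict(X[:, m], item_centroids, K, j)
    return fuse(x_n, x_u, x_i, lam, delta)  # step 6: linear combination
```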
EXPERIMENTS
Dataset MovieLens: 500 users and 1000 items (ratings on a 1-5 scale)
Training set: the first 100, 200 and 300 users, called ML_100, ML_200 and ML_300.
Testing set: the last 200 users
For each test user, 5, 10, or 20 rated items are randomly selected as observed, giving the Given5, Given10, and Given20 settings.
Evaluation metric: mean absolute error (MAE), where N is the number of tested ratings:

$$MAE = \frac{\sum_{(j,m)} |x_{jm} - \tilde{x}_{jm}|}{N}$$
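The metric is a one-liner; a minimal sketch:

```python
import numpy as np

def mae(x_true, x_pred):
    """Mean absolute error over the N tested ratings."""
    x_true, x_pred = np.asarray(x_true), np.asarray(x_pred)
    return float(np.abs(x_true - x_pred).mean())
```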
DIFFERENT CLUSTER NUMBERS
The ML_300 dataset is used for training, trying 10 different values of k and l (2, 5, 10, 20, …, 80).
PERCENTAGE OF NEIGHBORS
The best performance is achieved when the percentage of pre-selected neighbors reaches around 30%.
SIZE OF NEIGHBORS
COMBINATION COEFFICIENTS
Fixing δ = 0, the optimal value of λ is approximately between 0.5 and 0.7.
COMBINATION COEFFICIENTS (CONT.)
Fixing λ = 0.6, the optimal value of δ is approximately between 0.2 and 0.4.
PERFORMANCE COMPARISON
Wang et al. (2006): similarity fusion (SF2)
Xue et al. (2005): cluster-based Pearson correlation coefficient (SCBPCC)
Rennie and Srebro (2005): maximum margin matrix factorization (MMMF)
Ungar and Foster (1999): cluster-based collaborative filtering (CBCF)
Hofmann and Puzicha (1999): aspect model (AM)
Pennock et al. (2000): personality diagnosis (PD)
Breese et al. (1998): user-based Pearson correlation coefficient (PCC)
CONCLUSIONS
This paper presented a novel fusion framework for collaborative filtering.
The model-based and memory-based approaches are naturally assembled via ONMTF.
Empirical studies verified that the framework effectively improves prediction accuracy.
Future work is to investigate new co-clustering techniques and develop better fusion models.