ORIGINAL ARTICLE
Robust tensor subspace learning for anomaly detection
Jie Li • Guan Han • Jing Wen • Xinbo Gao
Received: 28 November 2010 / Accepted: 1 March 2011 / Published online: 12 March 2011
� Springer-Verlag 2011
Abstract Background modeling plays an important role
in many applications of computer vision such as anomaly
detection and visual tracking. Most existing algorithms for learning appearance models are vector-based methods that do not maintain the 2D spatial structure information of objects in an image. To this end, a robust tensor subspace
learning algorithm is developed for background modeling
which can capture the appearance changes through adap-
tively updating the tensor subspace. In the tensor frame-
work, the spatial structure information is maintained and
utilized for feature extraction of objects. Then, by incorporating a robust scheme, we can weight individual pixels of an image to reduce the influence of outliers on background modeling. Furthermore, an incremental algorithm
for the robust tensor subspace learning is proposed to adapt
to the variation of appearance model. The experimental
results illustrate the effectiveness of the proposed robust
learning algorithm for anomaly detection.
Keywords Background modeling · Tensor subspace · Robust learning · Incremental learning · Anomaly detection
1 Introduction
Anomaly detection in video surveillance, using stationary cameras to monitor an environment of interest, has gained increasing attention for public security, due to the growing societal threats from terrorism and crime. It can be accomplished by learning a background model representing the normal state and identifying image regions that are anomalous with respect to that background model. In this paper, we focus on detecting drastic changes of the "normal" background model. This is suitable for a wide variety of scenarios, such as outdoor parking lots, airport lounges, and market hall entrances. However, the main challenge
of background representation can be attributed to handling
the appearance variation of the scene over time. Illumination changes and camera shaking are regarded as extrinsic appearance variations, whereas intrinsic changes result from object motion and pose variation in the scene [5]. Therefore, robustly modeling such appearance variation is of great importance to anomaly detection.
In recent years, much work has been done on background modeling. Oliver et al. [4] first proposed eigen-background modeling by performing principal component
analysis (PCA) method. The background model can be
represented by the mean image and linear combination of
the first p significant eigenvectors. However, the traditional
PCA is sensitive to outliers, which can be absorbed into the
background model. To enhance the robustness of PCA
model, Xu and Yuille [14] introduced a binary variable to
consider an entire contaminated image as an outlier and
discard it. Torre and Black [9] described a method of robust
subspace learning based on robust M-estimation, which is
capable of constructing weight for each pixel in every
image. However, these methods are performed in batch
mode, thus requiring all training images to be given in
advance. The model has to be recomputed from scratch if
the subspace is updated sequentially with new images [11].
Another drawback is the intensive computation, since the optimization problem has to be solved iteratively. To tackle
these problems, several methods have been developed for
online learning.

J. Li · G. Han · J. Wen · X. Gao (corresponding author)
Video and Image Processing System Lab, School of Electronic Engineering, Xidian University, Xi'an 710071, China
e-mail: [email protected]; [email protected]

Int. J. Mach. Learn. & Cyber. (2011) 2:89–98
DOI 10.1007/s13042-011-0017-0

Li et al. [3] proposed an incremental and robust subspace learning algorithm. Moreover, a weighted
incremental PCA algorithm for subspace learning is pre-
sented [6], which enables assigning arbitrary temporal and
spatial weights. These aforementioned methods for back-
ground modeling represent an image as a vector in high-
dimensional space, ignoring the underlying local geometric
structure information. Consequently, model representation
with image-as-matrix learning methods has attracted much
more attention in recent years. Multilinear algebra is utilized to analyze image ensembles resulting from
the confluence of different factors related to scene structure,
illumination, and viewpoint [13]. He et al. [1] presented a
new algorithm for image representation taking multilinear
algebra and differential geometry into consideration, called
tensor subspace analysis (TSA). Sun et al. [7] developed the
dynamic and streaming tensor analysis to solve high order
tensor problems. Li et al. [2] employed a tensor subspace
learning for visual tracking. Tao et al. [8] proposed a
Bayesian tensor analysis method and applied it to 3D face
modeling. However, these tensor-based algorithms share an open problem: they do not introduce a robust analysis scheme for subspace learning, although real-world observations may contain outliers.
In this paper, in order to automatically learn and update
a low dimensional subspace representation of background
model in tensor space, a robust tensor subspace learning
(RTSL) is proposed. Firstly, an image is represented as a
second order tensor, or a matrix. Therefore, the relationship
between the row vectors of the matrix as well as that
between the column vectors could be captured for model
representation [1, 12]. Secondly, in combination with
robust analysis, a weighted tensor model is built to
decrease the influence of outliers on background modeling.
Thirdly, an incremental algorithm is developed to adap-
tively reflect the appearance changes of the scene by
updating the mean and covariance when a new observation
arrives, which enables efficient estimation of the tensor
subspace for real-time performance. Finally we apply the
proposed algorithm to video anomaly detection with sim-
ilarity measurement between the reference and current
background model.
The rest of this paper is organized as follows. The
details of the presented robust tensor subspace learning
algorithm are described in Sect. 2. Application of anomaly
detection is introduced in Sect. 3. Experimental results and
analysis are presented in Sect. 4. Finally, Sect. 5 concludes the paper and outlines our future work.
2 Robust tensor subspace learning
Multilinear algebra, or the algebra of high-order tensors, is usually used to analyze the multifactor structure of image ensembles [11, 13]. In this section, we first briefly
introduce mathematical notation and operations for tensors.
Next we describe a weighted tensor representation for new
observations, weighting individual pixel with different
weights. Finally we propose an incremental learning
algorithm for tensor subspace to efficiently update the
eigenspace.
2.1 Tensor analysis
A tensor is a higher-order generalization of a vector (first order tensor) and a matrix (second order tensor) [7]. Tensors
define multilinear operators over a set of vector spaces. In
this paper, we denote scalars by lower case letters (e.g., a),
vectors by bold lower case letters (e.g., a), matrices by bold
upper-case letters (e.g., A), and higher-order tensors by
Euclid math upper-case letters (e.g., A).
We denote an N-order tensor as $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, where the dimensionality of the nth mode (or dimension) of $\mathcal{A}$ is $I_n$ ($1 \le n \le N$). Elements of $\mathcal{A}$ are denoted as $\mathcal{A}(i_1, i_2, \ldots, i_N)$ or $a_{i_1 i_2 \cdots i_N}$. The mode-n matrix unfolding of an N-order tensor $\mathcal{A}$ consists of vectors in $\mathbb{R}^{I_n}$, obtained by keeping index $n$ fixed and varying the other indices. Hence the mode-n unfolding $A_{(n)}$ is in $\mathbb{R}^{I_n \times (\prod_{d \ne n} I_d)}$.

The mode-n product of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and a matrix $U \in \mathbb{R}^{J_n \times I_n}$ is denoted by $\mathcal{A} \times_n U$ and defined as

$(\mathcal{A} \times_n U)(i_1, \ldots, i_{n-1}, j_n, i_{n+1}, \ldots, i_N) = \sum_{i_n=1}^{I_n} \mathcal{A}(i_1, i_2, \ldots, i_N)\, U(j_n, i_n).$  (1)

The result is a tensor in $\mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J_n \times I_{n+1} \times \cdots \times I_N}$. In general, a tensor $\mathcal{A}$ can multiply a sequence of matrices $\{U_n \in \mathbb{R}^{J_n \times I_n}\}_{n=1}^{N}$ as $\mathcal{A} \times_1 U_1 \times_2 \cdots \times_N U_N \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$, which can be written as $\mathcal{A} \prod_{i=1}^{N} \times_i U_i$. An N-order tensor can also be decomposed as the mode-n product of the orthogonal spaces spanned by $\{U_i\}_{i=1}^{N}$, which is defined as

$\mathcal{A} = \mathcal{B} \prod_{i=1}^{N} \times_i U_i,$  (2)

where $\mathcal{B}$ is the core tensor defined as

$\mathcal{B} = \mathcal{A} \prod_{i=1}^{N} \times_i U_i^{T}.$  (3)
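For concreteness, the mode-n unfolding and the mode-n product of Eq. (1) can be sketched in NumPy (a minimal illustration of the definitions above, not code from the paper; the function names are ours):

```python
import numpy as np

def unfold(A, n):
    """Mode-n matrix unfolding: bring axis n to the front, flatten the rest."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_n_product(A, U, n):
    """Mode-n product A x_n U of Eq. (1); U has shape (J_n, I_n)."""
    return np.moveaxis(np.tensordot(U, A, axes=(1, n)), 0, n)

# Small 3rd-order example: multiply mode 1 (0-based) by a 3x5 matrix
A = np.random.rand(4, 5, 6)
U = np.random.rand(3, 5)
B = mode_n_product(A, U, 1)          # B has shape (4, 3, 6)
```

A useful sanity check on the definitions is the unfolded identity $(\mathcal{A} \times_n U)_{(n)} = U A_{(n)}$, which the two functions satisfy.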
2.2 Weighting tensor representation
In general, the difficulty of robust estimation is that the positions of outliers are unknown beforehand. We only know that observations may contain outliers, which are not consistent with the current background model [6]. The reconstruction error of inliers should be smaller than that of outliers, which makes outlier detection easier. This can be accomplished by projecting new observations into the current subspace with a robust scheme. Therefore, weights are constructed to control the influence of each pixel of an image.
As described earlier, an image is actually a matrix,
which can be considered as a second order tensor. The
residual error of a new observation $\mathcal{X}$ is defined by

$\mathcal{R} = \mathcal{X} - \mathcal{X} \prod_{i=1}^{2} \times_i (U_i U_i^T),$  (4)

where $\mathcal{X} \prod_{i=1}^{2} \times_i (U_i U_i^T)$ is the approximation of $\mathcal{X}$, so that the residual error is minimized by

$\min \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} r_{i_1 i_2}^2,$  (5)

where $r_{i_1 i_2}$ is an element of $\mathcal{R}$. Through the robust M-estimation method [3], the minimization problem can be solved with a robust function $\rho(r)$:

$\min \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \rho(r_{i_1 i_2}).$  (6)
Differentiating (6) with respect to $u$, an element of $U_i$, we obtain

$\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \phi(r_{i_1 i_2}) \frac{\partial r_{i_1 i_2}}{\partial u} = 0,$  (7)

where $\phi(x) = d\rho(x)/dx$. A weight function can be defined as $w(x) = \phi(x)/x$, and (7) can be written as

$\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} w(r_{i_1 i_2})\, r_{i_1 i_2} \frac{\partial r_{i_1 i_2}}{\partial u} = 0.$  (8)

Then (8) can be regarded as a new problem:

$\min \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} w(r_{i_1 i_2})\, r_{i_1 i_2}^2.$  (9)
It is crucial to note that weights of each pixel can be
determined with respect to the residual error. We choose
the robust function as
$\rho(x) = \frac{k^2}{2} \log\!\left(1 + \left(\frac{x}{k}\right)^2\right),$  (10)

where $k$ is a scale parameter that controls the convexity of the function. Then we can obtain the weight function

$w(x) = \frac{1}{1 + (x/k)^2}.$  (11)
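As a quick illustration (ours, not the paper's code), the robust function of Eq. (10) and the weight function of Eq. (11) are one-liners in NumPy; the weights decay from 1 toward 0 as the residual grows relative to the scale k:

```python
import numpy as np

def rho(x, k):
    """Robust error function of Eq. (10)."""
    return (k ** 2 / 2.0) * np.log1p((x / k) ** 2)

def weight(x, k):
    """Weight function of Eq. (11): w(x) = 1 / (1 + (x/k)^2)."""
    return 1.0 / (1.0 + (x / k) ** 2)

r = np.array([0.0, 1.0, 5.0, 20.0])  # residuals
w = weight(r, k=5.0)                 # weights shrink as the residual grows
```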
The parameter k controls the sharpness of the robust
function and determines what residual errors are treated as
outliers [6]. Now we introduce a method to estimate the
robust parameters.
First, a new observation $\mathcal{X}$ can be unfolded along mode-n (n = 1, 2) to a matrix $X_n$. Then we define $\sigma_{j(n)}$ as the standard deviation of the jth column vector of $X_n$. Supposing that the current subspace model is a prototype model, we can make an approximate evaluation of $\sigma_{j(n)}$ with

$\sigma_{j(n)} = \max_{i=1}^{p} \left| \sqrt{\lambda_{i(n)}}\, u_{ij(n)} \right|,$  (12)

where $\lambda_{i(n)}$ and $u_{ij(n)}$ are the ith eigenvalue of the current energy matrix and the (i, j)th element of the current projection matrix in mode-n, respectively. Then $k$ can be defined as

$k_{j(n)} = \beta \sigma_{j(n)},$  (13)

where $\beta$ is a fixed robust coefficient. $\beta$ can be set to a high value (e.g., $\beta = 10$) for fast model updating, but at the risk of accepting outliers into the model. In practice there is no fixed value for $\beta$; it is application-dependent and has to be chosen empirically.
When new observations arrive, we can obtain the weights by computing $w(r_{ij})$ and construct the weight matrix in each mode. Considering that outliers should be kept out of the background model as much as possible, we take the element-wise minimum

$W = \min(W_1, W_2),$  (14)

where $W_1$ and $W_2$ are the weight matrices in mode-1 and mode-2, respectively. The weight matrix characterizes the variation between the vectors in each mode. Then the weighted tensor can be defined as

$x^{*}_{ij} = \sqrt{w_{ij}}\, x_{ij}.$  (15)

The weights, ranging from 0 to 1, control the contribution of new observations to the model. If $w_{ij} = 1$, the corresponding pixel is fully reliable; on the contrary, if $w_{ij} = 0$, the corresponding pixel is irrelevant to the model.
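The weighting step can be sketched as follows (our illustration, assuming a single fixed scale k rather than the per-column estimate of Eqs. (12)-(13), and projection matrices with orthonormal columns stored as $I_n \times J_n$):

```python
import numpy as np

def weighted_observation(X, U1, U2, k=5.0):
    """Weight a new observation X (a 2nd-order tensor) against the current
    subspace: residual of Eq. (4), weights of Eq. (11), weighted tensor of
    Eq. (15). With a single fixed scale k, the mode-1 and mode-2 weight
    matrices of Eq. (14) coincide, so their element-wise minimum is trivial;
    the paper estimates k per column via Eqs. (12)-(13)."""
    # Reconstruction X x_1 (U1 U1^T) x_2 (U2 U2^T) and residual, Eq. (4)
    R = X - (U1 @ U1.T) @ X @ (U2 @ U2.T)
    # Per-pixel weights, Eq. (11)
    W = 1.0 / (1.0 + (R / k) ** 2)
    return np.sqrt(W) * X, W         # weighted tensor, Eq. (15)
```

An observation lying exactly in the current subspace has zero residual, so every pixel receives weight 1 and the observation passes through unchanged.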
2.3 Incremental updating for tensor subspace
In this section, we propose an efficient method for incre-
mental subspace learning as new observations arrive by
updating the mean and covariance. It is desirable to focus
more on new images and less on previous observations.
Therefore a forgetting factor a is incorporated to control
the updating rate of tensor subspace [7].
Assuming that the weights on the previous model and
the current observation are a and 1 - a, respectively, we
can update the mean as
$\bar{\mathcal{X}}' = a \bar{\mathcal{X}} + (1 - a) \mathcal{X},$  (16)

where $\bar{\mathcal{X}}$ and $\mathcal{X}$ are the current mean and the new observation, respectively. Then the new covariance matrix of the nth mode is updated sequentially, without storing the previous data, as

$C'_n = a C_n + (1 - a) X_n X_n^T,$  (17)

where $X_n$ is the mode-n matrix unfolding of the mean-normalized tensor. The current mode-n covariance matrix $C_n$ can be represented as

$C_n = U_n S_n U_n^T,$  (18)

where $U_n$ and $S_n$ are the previous projection matrix and energy matrix, respectively. We can obtain the new projection matrix $U'_n$ and energy matrix $S'_n$ by applying the singular value decomposition (SVD) to $C'_n$. The number of basis vectors is chosen by keeping the energy ratio $\|S'_n\|_F / \|X_n\|_F^2$ larger than 0.92, i.e., preserving 92% of the energy for the reconstruction. The pseudo code of the robust tensor subspace learning algorithm is listed in Table 1.
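The update of Eqs. (16)-(18) for one mode can be sketched as below (our illustration; the paper runs this per mode on the unfoldings of the weighted observation, and its energy criterion is a Frobenius-norm ratio, for which a cumulative-eigenvalue variant is substituted here):

```python
import numpy as np

def update_subspace(X_n, mean, U, S, alpha=0.95, energy=0.92):
    """One incremental update of Eqs. (16)-(18) for a single mode.
    X_n is the mode-n unfolding of the (weighted) new observation;
    U, S are the previous projection and energy matrices of that mode."""
    new_mean = alpha * mean + (1 - alpha) * X_n          # Eq. (16)
    Xc = X_n - new_mean                                  # mean-normalised
    C = U @ S @ U.T                                      # Eq. (18)
    C = alpha * C + (1 - alpha) * Xc @ Xc.T              # Eq. (17)
    Un, s, _ = np.linalg.svd(C)
    # Keep enough basis vectors to preserve `energy` of the total
    # (cumulative-eigenvalue variant of the paper's Frobenius-norm ratio)
    r = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
    return new_mean, Un[:, :r], np.diag(s[:r])
```

Because the eigen-decomposition is recomputed from the small mode covariance (size $I_n \times I_n$) rather than from the full image history, each update has a fixed cost independent of the number of frames seen so far.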
3 Anomaly detection algorithm based on robust tensor
subspace learning
3.1 Overview of anomaly detection
Anomaly detection is an active research topic in video surveillance. With a stationary camera, the background model of the scene is relatively static. If a moving object stays in the scene, it will be incrementally learnt into the reconstruction and become part of the background; if a stationary object in the background moves away, it will incrementally disappear from the reconstruction and be regarded as a foreground object. Both scenarios lead to background changes, i.e., anomalies.
The framework of anomaly detection in this paper includes two modules: robust background modeling and model detection. In the first module, the background model is learned online by the proposed algorithm as new observations arrive. In the second module, reconstructed background images are compared with the reference model to reflect background changes and detect anomalies. The architecture of the proposed anomaly detection framework is shown in Fig. 1.
3.2 Background modeling based on RTSL
Our goal is to apply the robust tensor subspace learning
algorithm to represent the background model of the scene.
Given a training set $\{\mathcal{X}_1, \mathcal{X}_2, \ldots, \mathcal{X}_N\}$ in $\mathbb{R}^{I_1 \times I_2}$ and the initial projection matrices $\{U_n\}_{n=1}^{2}$ constrained by $U_n^T U_n = I$, the mean of the training set can be formulated as

$\bar{\mathcal{X}} = \frac{1}{N} \sum_{i=1}^{N} \mathcal{X}_i.$  (19)
Table 1 The robust tensor subspace learning algorithm

Input:
  New observation $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2}$
  Previous projection matrices $\{U_n\}_{n=1}^{2}$, $U_n \in \mathbb{R}^{J_n \times I_n}$
  Previous energy matrices $\{S_n\}_{n=1}^{2}$, $S_n \in \mathbb{R}^{J_n \times J_n}$
  Previous mean $\bar{\mathcal{X}} \in \mathbb{R}^{I_1 \times I_2}$
Output:
  New projection matrices $\{U'_n\}_{n=1}^{2}$, $U'_n \in \mathbb{R}^{J'_n \times I_n}$
  New energy matrices $\{S'_n\}_{n=1}^{2}$, $S'_n \in \mathbb{R}^{J'_n \times J'_n}$
  New mean $\bar{\mathcal{X}}' \in \mathbb{R}^{I_1 \times I_2}$
Algorithm:
  1. Compute the residual error $\mathcal{R} = \mathcal{X} - \mathcal{X} \prod_{i=1}^{2} \times_i (U_i U_i^T)$
  2. For n = 1 to 2:
       Compute the maximum standard deviation $\sigma_{j(n)} = \max_{i=1}^{p} |\sqrt{\lambda_{i(n)}}\, u_{ij(n)}|$
       Estimate the parameter $k_{j(n)} = \beta \sigma_{j(n)}$, where $\beta$ is a constant
       Build the weight matrix $W_n$, where $w_{ij(n)} = 1 / (1 + (r_{ij}/k_{j(n)})^2)$
  3. Construct the weights for the tensor: $W = \min(W_1, W_2)$
  4. Form the weighted tensor $\mathcal{X}^{*}$, where $x^{*}_{ij} = \sqrt{w_{ij}}\, x_{ij}$
  5. Update the mean $\bar{\mathcal{X}}' = a \bar{\mathcal{X}} + (1 - a) \mathcal{X}^{*}$
  6. For n = 1 to 2:
       Mode-n unfold $\mathcal{X}^{*}$ as $X_n \in \mathbb{R}^{I_n \times (\prod_{d \ne n} I_d)}$
       Construct the covariance matrix $C_n = U_n S_n U_n^T$
       Update the covariance matrix $C'_n = a C_n + (1 - a) X_n X_n^T$
       Decompose $[U'_n, S'_n] = \mathrm{SVD}(C'_n)$ to obtain the new projection and energy matrices
Fig. 1 The architecture of the anomaly detection framework: a robust background modeling module (training set / new observations → tensor subspace learning → weighting and updating) followed by a model detection module (reconstruction → similarity measurement → detection)
With the mode-n (n = 1, 2) matrix unfolding $X_{i(n)}$ of each image sample $\mathcal{X}_i$, the mode-n covariance matrix can be denoted by

$C_n = \frac{1}{N} \sum_{i=1}^{N} X_{i(n)} X_{i(n)}^T.$  (20)
Then the projection matrix Un and the energy matrix Sn
can be obtained by performing SVD on Cn [7].
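The batch initialisation of Eqs. (19)-(20) can be sketched as follows (our illustration; the function name is ours, and the covariance is computed on the mean-normalised unfoldings, which Eq. (20) leaves implicit):

```python
import numpy as np

def init_subspace(samples, n_modes=2):
    """Batch initialisation: mean of the training images, Eq. (19), and the
    eigen-decomposition of each mode-n covariance matrix, Eq. (20),
    computed on the mean-normalised unfoldings."""
    mean = np.mean(samples, axis=0)                      # Eq. (19)
    bases = []
    for n in range(n_modes):
        unfoldings = [np.moveaxis(Xi - mean, n, 0).reshape(Xi.shape[n], -1)
                      for Xi in samples]
        C = sum(Xn @ Xn.T for Xn in unfoldings) / len(samples)  # Eq. (20)
        U, s, _ = np.linalg.svd(C)      # projection and energy matrices
        bases.append((U, np.diag(s)))
    return mean, bases
```

Note that each mode covariance is only $I_n \times I_n$ (e.g., 160 × 160 and 120 × 120 for the videos in Sect. 4), far smaller than the $I_1 I_2 \times I_1 I_2$ covariance a vectorised PCA would require.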
Subsequently, we weight the raw observations and update the mean and covariance incrementally by using (4)–(18). By decomposing $C'_n$ in (17), we obtain the updated projection matrix. The reconstruction can be computed by projecting the observation into the current tensor subspace as in (2) and (3).
From the above analysis, the model can be learnt efficiently through our method, which is crucial for anomaly detection in the next module.
3.3 Anomaly detection based on background modeling
The initial background model can be represented as the normal state or reference model. Whether or not an anomaly occurs in the video can be decided through a similarity measurement between the reference model and the current reconstruction.
The task of similarity measurement can be separated
into three components: luminance, contrast and structure
[10], denoted as
$M = [l(x, y)]^{g} \cdot [c(x, y)]^{h} \cdot [s(x, y)]^{\gamma},$  (21)

where $g > 0$, $h > 0$ and $\gamma > 0$ are parameters used to adjust the importance of the three components; $l(x, y)$, $c(x, y)$ and $s(x, y)$ are the luminance, contrast and structure comparison functions, respectively; and $M$ is the value of the similarity measurement. The higher the value of $M$, the more similar the two models are. The mean intensities $\mu_x$ and $\mu_y$ of the reference model $x$ and the reconstruction $y$ are estimated by
$\mu_x = \frac{1}{N} \sum_{i=1}^{N} x_i,$  (22)

where $N$ is the number of pixels. The standard deviation is utilized as an estimate of contrast, given by

$\sigma_x = \left( \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \mu_x)^2 \right)^{1/2}.$  (23)
Then we define the three comparison functions respectively by

$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1},$  (24)

$c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2},$  (25)

$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3},$  (26)

where $C_1$, $C_2$ and $C_3 = C_2/2$ are non-negative constants, introduced to avoid instability when a denominator is very close to zero. $\sigma_{xy}$ can be estimated as

$\sigma_{xy} = \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y).$  (27)

In particular, we set $g = h = \gamma = 1$ and substitute (22)-(27) into (21) to yield

$M = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}.$  (28)
In this way, consecutive reconstructed background models are compared with the reference model. The criterion for anomaly detection is defined as

$\text{reconstruction} \in \begin{cases} \text{anomaly} & \text{if } (M_n - M_{n+1}) > \text{threshold} \\ \text{normal} & \text{otherwise} \end{cases}$  (29)

where $M_n$ denotes the measurement of the nth frame. If the difference between $M_n$ and $M_{n+1}$ is less than the threshold, the scene is normal; otherwise, we consider that the current background is varying and an anomaly occurs in the scene. Then the background model is incrementally updated to build a new model. When a stable background model has been learnt, the anomaly ends, and the reference model is updated with the new background model.
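The similarity measurement of Eq. (28) and the criterion of Eq. (29) can be sketched as below (our illustration; Eq. (28) is the SSIM index of [10] with $g = h = \gamma = 1$ computed over whole images, and the values of the stabilising constants here are illustrative, not the paper's):

```python
import numpy as np

def similarity(x, y, C1=6.5, C2=58.5):
    """Similarity measurement M of Eq. (28) between two whole images."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    sxy = np.cov(x, y, ddof=1)[0, 1]                     # Eq. (27)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return ((2 * mx * my + C1) * (2 * sxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def is_anomaly(M_n, M_n1, threshold=0.03):
    """Detection criterion of Eq. (29): a sharp drop in similarity between
    consecutive frames signals an anomaly."""
    return (M_n - M_n1) > threshold
```

Identical images yield $M = 1$, and $M$ decreases as the reconstruction drifts from the reference, so the criterion fires on sharp frame-to-frame drops rather than on the absolute similarity level.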
4 Experimental results and analysis
In this section, several experiments are carried out to
evaluate the performance of the proposed algorithm for
anomaly detection. Our experiments demonstrate the contribution of the proposed RTSL by comparing it with the incremental and robust subspace learning algorithm [3], referred to as IRSL. Two video sequences recorded by stationary cameras are used in the experiments. These videos were captured outdoors without drastic illumination changes. Each video consists of 160 × 120 pixel gray-scale images. In video 1, a bicycle enters the scene, stops for a while, and then leaves. There are also a few walking people and swaying trees in the scene.
Video 2 contains a person placing a suitcase on the ground
and leaving. Following this the first person comes back and
stands beside his suitcase. Then a second person arrives
and talks with the first person. The first person leaves the
scene without taking his luggage. The second person takes
the first person’s suitcase and leaves the scene.
In the first experiment, the forgetting factor a, the robust
coefficient b and the threshold for RTSL are chosen as
0.95, 5, and 0.03, respectively. For IRSL, the PCA
dimension p = 20, the update rate a = 0.95, and b = 5.
Figures 2 and 3 show the anomaly detection results by
RTSL and IRSL, respectively. Our proposed algorithm is
able to effectively capture the background changes and
detect the two anomalies when a bicycle stays in and leaves
from the scene, and successfully filter out outliers caused
by some walking people and swaying trees. The anomalous regions of the representative frames (792, 1031) are highlighted by white boxes. In contrast, the IRSL does not perform well: there is a false detection in the 371st frame of video 1. Due to slight camera shaking, the vector-based method fails to build the background model robustly, which is further illustrated in Fig. 4. The sharp declines in the value of the similarity measurement correspond to anomalies, highlighted by black boxes. The false detection is shown in Fig. 4b.
Fig. 2 The anomaly detection result of RTSL on video 1. a The original image, b the reference image when anomaly occurs, c the updated reference image when anomaly ends
Fig. 3 The anomaly detection result of IRSL on video 1. a The original image, b the reference image when anomaly occurs, c the updated reference image when anomaly ends
Fig. 4 The comparison of similarity measurement on video 1 with RTSL (a) and IRSL (b)
Fig. 5 The anomaly detection result of RTSL on video 2. a The original image, b the reference image when anomaly occurs, c the updated reference image when anomaly ends
In the second experiment, the forgetting factor a, the
robust coefficient b and the threshold for RTSL are chosen
as 0.957, 6, and 0.03, respectively. For IRSL, the PCA
dimension p = 20, the update rate a = 0.95, and b = 3.
This experiment compares RTSL and IRSL in a more complex scenario. The anomaly detection results with RTSL are demonstrated in Fig. 5; the algorithm is capable of learning the tensor subspace and performs well in modeling background changes for anomaly detection. This video actually includes five anomalies, all of which are successfully detected by our proposed algorithm. The representative frames (143, 334, 694, 1017, and 1129) are
shown in Fig. 5. As shown in Fig. 6, the IRSL fails to capture some of these anomalies. In the 457th frame, there is a false detection. Furthermore, an anomaly around the 1017th frame goes undetected. Figure 7 shows the superior performance of RTSL over IRSL. It can be observed that the IRSL presents a more fluctuating result, especially when anomalies occur. The undetected anomaly is highlighted by a dotted box in Fig. 7b.
From the results of these experiments, it is noted that the proposed algorithm provides better performance than IRSL. The IRSL is an image-as-vector method for learning a low dimensional linear subspace representation of the background model. Without the local geometric structure information, the appearance model can be substantially changed by extrinsic and intrinsic variations that should not be considered anomalies, making the reconstruction far from the "real" model and causing falsely detected and undetected anomalies. On the contrary, the RTSL maintains the spatial structure information of the 2D appearance thanks to the tensor subspace analysis, and is capable of adaptively reflecting the background variation. The image-as-matrix method is more robust to outliers, with a strong disturbance-tolerance ability. From the similarity measurements of the two methods, we find that the results of RTSL show narrower fluctuation than those of IRSL, demonstrating the reliability and stability of RTSL.

Fig. 6 The anomaly detection result of IRSL on video 2. a The original image, b the reference image when anomaly occurs, c the updated reference image when anomaly ends
5 Conclusion and future work
We have developed a robust tensor subspace learning
algorithm for anomaly detection, which can incrementally
update the tensor subspace and robustly reflect the variation of the background model over time. Most subspace learning methods regard an image as a vector in a high dimensional space, ignoring the geometric relationship between pixels in an image. Compared with
these methods, the proposed algorithm is able to make full
use of the spatial structure information in the manner of
tensor representation, and capture the characteristics of
each mode to weight the new data with a robust approach
and filter out the outliers for background modeling. We
detect anomalies by measuring the similarity between
the reference model and the reconstruction frames. The
experimental results demonstrate the effectiveness of the
proposed algorithm.
It should be noted that the proposed algorithm performs well under the condition of constant illumination. The tensor formulation is computed on intensities, which are sensitive to drastic illumination changes. Reducing the effect of illumination variations on the appearance model to enhance robustness is left to our future work.
Acknowledgements We thank the anonymous reviewers for their helpful comments and suggestions. This research was supported partially by the National Natural Science Foundation of China under Grant 60832005; by the Ph.D. Programs Foundation of Ministry of Education of China under Grant 20090203110002; by the Key Science and Technology Program of Shaanxi Province of China under Grant 2010K06-12; and by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2009JM8004.
References
1. He X, Cai D, Niyogi P (2005) Tensor subspace analysis. In:
Proceedings of the Conference on Advance in Neural Information
Processing Systems, pp 1–8
2. Li X, Hu W, Zhang Z, Zhang X, Luo G (2007) Robust visual
tracking based on incremental tensor subspace learning. In:
Proceedings of the 11th International Conference on Computer
Vision, pp 1–8
3. Li Y, Xu L, Morphett J, Jacobs R (2004) On incremental and
robust subspace learning. Pattern Recognit 37(7):1509–1518
4. Oliver N, Rosario B, Pentland AP (2000) A Bayesian computer
vision system for modeling human interactions. IEEE Trans
Pattern Anal Mach Intell 22(8):831–841
5. Ross D, Lim J, Lin R, Yang M (2008) Incremental learning for
robust visual tracking. Int J Comput Vis 77(1–3):125–141
6. Skocaj D, Leonardis A (2003) Weighted and robust incremental
method for subspace learning. In: Proceedings of the 9th IEEE
International Conference on Computer Vision, pp 1494–1501
7. Sun J, Tao D, Faloutsos C (2006) Beyond streams and graphs:
dynamic tensor analysis. In: Proceedings of the 12th ACM
SIGKDD International Conference on Knowledge Discovery and
Data Mining, pp 374–383
8. Tao D, Song M, Li X, Shen J, Sun J, Wu X, Faloutsos C,
Maybank SJ (2008) Bayesian tensor approach for 3-D face
modeling. IEEE Trans Circuits Syst Video Technol
18(10):1397–1410
9. Torre F, Black M (2003) A framework for robust subspace learning. Int J Comput Vis 54(1–3):117–142
10. Wang Z, Bovik A, Sheikh H, Simoncelli E (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
11. Wen J, Gao X, Li X, Tao D (2009) Incremental learning of
weighted tensor subspace for visual tracking. In: Proceedings of
the 2009 IEEE International Conference on Systems, Man, and
Cybernetics, pp 3788–3793
12. Wen J, Gao X, Yuan Y, Tao D, Li J (2010) Incremental tensor
biased discriminant analysis: a new color-based visual tracking
method. Neurocomput 3(4–6):827–839
Fig. 7 The comparison of anomaly detection with RTSL (a) and IRSL (b) on video 2
13. Vasilescu M, Terzopoulos D (2003) Multilinear subspace anal-
ysis of image ensembles. In: Proceedings of the 2003 IEEE
International Conference on Computer Vision and Pattern Rec-
ognition, pp 93–99
14. Xu L, Yuille A (1995) Robust principal analysis by self-orga-
nizing rules based on statistical physics approach. IEEE Trans
Neural Netw 6(1):131–143