Facial Expression Recognition using KCCA with Combining Correlation
Kernels and Kansei Information
Yo Horikawa
Kagawa University, Japan
1. Purpose of this study
Apply kernel canonical correlation analysis (kCCA) with correlation kernels to facial expression recognition
・ Combining multi-order correlation kernels
・ Use of Kansei information
2. Facial expression recognition
Facial images → Expressions
The six basic expressions:
happiness, sadness, surprise, anger, disgust, fear
[Sample faces: Happy, Sad, Surprised, Angry, Disgusted, Fearful, Neutral]
3. Kernel canonical correlation analysis (KCCA)
Pairs of feature vectors of sample objects: (xi, yi) (1 ≤ i ≤ n)
Canonical variates (u, v): projections with the maximum correlation between (implicit) nonlinear functions h(x) and h'(y):
u = w_φ · h(x),  v = w_θ · h'(y)
Canonical variates are calculated with the kernel functions:
u = Σ_{i=1}^{n} f_i φ(x_i, x),  v = Σ_{i=1}^{n} g_i θ(y_i, y)
Kernel functions: the inner products of the implicit functions of x and y:
φ(x_i, x_j) = h(x_i) · h(x_j),  θ(y_i, y_j) = h'(y_i) · h'(y_j)
f = (f_1, ···, f_n)^T and g = (g_1, ···, g_n)^T are the eigenvectors of the generalized eigenvalue problem:
( 0    ΦΘ ) (f)       ( Φ² + γ_x I       0      ) (f)
( ΘΦ   0  ) (g)  = λ (      0       Θ² + γ_y I ) (g)
Φ = (Φ_ij) = (φ(x_i, x_j)),  Θ = (Θ_ij) = (θ(y_i, y_j))  (1 ≤ i, j ≤ n)
I: identity matrix of n×n
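As an illustrative sketch (not the authors' implementation; the function name, the regularization defaults, and the use of SciPy's `eigh` are assumptions), the generalized eigenvalue problem above can be solved directly from the two Gram matrices:

```python
import numpy as np
from scipy.linalg import eigh

def kcca(Phi, Theta, gamma_x=1e-3, gamma_y=1e-3):
    """Solve the kCCA generalized eigenproblem for the Gram matrices
    Phi (on x) and Theta (on y).  Returns the eigenvalues (canonical
    correlations) and the coefficient vectors f, g as matrix columns."""
    n = Phi.shape[0]
    Z = np.zeros((n, n))
    # Left-hand matrix [[0, Phi Theta], [Theta Phi, 0]] (symmetric).
    A = np.block([[Z, Phi @ Theta], [Theta @ Phi, Z]])
    # Right-hand matrix diag(Phi^2 + gx I, Theta^2 + gy I) (pos. definite).
    B = np.block([[Phi @ Phi + gamma_x * np.eye(n), Z],
                  [Z, Theta @ Theta + gamma_y * np.eye(n)]])
    vals, vecs = eigh(A, B)            # symmetric generalized eigenproblem
    order = np.argsort(vals)[::-1]     # largest correlation first
    vecs = vecs[:, order]
    return vals[order], vecs[:n], vecs[n:]
```

With linear kernels Phi = X Xᵀ and Theta = Y Yᵀ this reduces to regularized linear CCA; the γ_x, γ_y terms keep the right-hand matrix positive definite when the Gram matrices are rank-deficient.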
4. KCCA for classification problems and use of Kansei information
4.1 CCA for classification problems
Use an indicator vector (IV) as the second feature vector y:
y = (y_1, ···, y_nc) corresponding to x, with y_c = 1 if x belongs to class c and y_c = 0 otherwise
(n_c: the number of classes). In the recognition of the 7 facial expressions
(happiness, sadness, surprise, anger, disgust, fear, neutral), e.g. for a happy face:
y = (1, 0, 0, 0, 0, 0, 0)
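For illustration (the function and list names are assumptions, not from the slides), the indicator vector for the 7 classes can be built as:

```python
import numpy as np

# The 6 basic expressions plus the neutral face (7 classes).
CLASSES = ["happiness", "sadness", "surprise", "anger",
           "disgust", "fear", "neutral"]

def indicator_vector(label):
    """One-hot indicator vector y: y_c = 1 if x belongs to class c,
    y_c = 0 otherwise."""
    y = np.zeros(len(CLASSES))
    y[CLASSES.index(label)] = 1.0
    return y
```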
Canonical variates u_r (1 ≤ r ≤ n_c − 1) for a new object (x, ?) are calculated by
u_r = Σ_{i=1}^{n} f_{ri} φ(x_i, x)  (1 ≤ r ≤ n_c − 1)
Standard classification methods are applied in the canonical variate space (u1, …, unc-1).
Pipeline: image x and indicator vector y = (1, 0, 0, 0, 0, 0, 0) → KCCA → canonical variates (u_1, …, u_6)
[Scatter plots of the sample images in the canonical variate planes (u1, u2), (u3, u4), and (u5, u6)]
4.2 Kansei information and its use in KCCA
Semantic ratings of the expressions of facial images by human observers.
Ratings on the six basic expressions (happiness, sadness, surprise, anger, disgust, fear), e.g.
(5, 2, 3, 1, 0, 1),
form a semantic rating vector (SRV).
Pipeline: image x and SRV y, e.g. (4.39, 1.35, 2.29, 1.16, 1.23, 1.26) → KCCA → canonical variates (u_1, …, u_5) → classifiers
5. Correlation kernel
Correlation kernel: the inner product of the autocorrelation functions of the feature vectors.
r_xi(t_1) = ∫ x_i(t) x_i(t+t_1) dt,  r_xj(t_1) = ∫ x_j(t) x_j(t+t_1) dt
φ(x_i, x_j) = ∫ r_xi(t_1) r_xj(t_1) dt_1
[Example signals x_i(t) and x_j(t) and their autocorrelation functions r_xi(t_1), r_xj(t_1)]
The kth-order autocorrelation of data x_i(t):
r_xi(t_1, t_2, ···, t_{k−1}) = ∫ x_i(t) x_i(t+t_1) ··· x_i(t+t_{k−1}) dt
The inner product of r_xi and r_xj is calculated with the kth power of the 2nd-order cross-correlation function:
r_xi · r_xj = ∫ {cc_{xi,xj}(t_1)}^k dt_1,  cc_{xi,xj}(t_1) = ∫ x_i(t) x_j(t+t_1) dt
The calculation of explicit values of the autocorrelations is avoided. → Higher-order autocorrelations are tractable at practical computational cost.
Linear correlation kernel: φ(x_i(t), x_j(t)) = r_xi · r_xj
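The identity above can be checked numerically. The sketch below (my own, assuming discrete circular correlations) compares the explicit inner product of two kth-order autocorrelations with the shortcut Σ cc(t_1)^k:

```python
import numpy as np
from itertools import product

def autocorr_k(x, k):
    """Explicit kth-order circular autocorrelation
    r(t1, ..., t_{k-1}) = sum_t x(t) x(t+t1) ... x(t+t_{k-1})."""
    n = len(x)
    r = []
    for lags in product(range(n), repeat=k - 1):
        prod = x.copy()
        for t1 in lags:
            prod = prod * np.roll(x, -t1)   # x(t + t1) with wrap-around
        r.append(prod.sum())
    return np.array(r)

def cross_corr(x, y):
    """Circular cross-correlation cc(t1) = sum_t x(t) y(t+t1)."""
    return np.array([np.dot(x, np.roll(y, -d)) for d in range(len(x))])

rng = np.random.default_rng(0)
x, y = rng.standard_normal(6), rng.standard_normal(6)
k = 3
lhs = autocorr_k(x, k) @ autocorr_k(y, k)   # explicit inner product
rhs = np.sum(cross_corr(x, y) ** k)         # kernel-trick shortcut
assert np.isclose(lhs, rhs)
```

The explicit side stores O(n^{k−1}) values per signal, while the shortcut needs only the n cross-correlation values, which is why higher orders stay tractable.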
Calculation of correlation kernels r_xi · r_xj for 2-dimensional image data x(l, m) (1 ≤ l ≤ L, 1 ≤ m ≤ M):
・ Calculate the cross-correlations between x_i(l, m) and x_j(l, m):
cc_{xi,xj}(l_1, m_1) = Σ_{l=1}^{L−l_1} Σ_{m=1}^{M−m_1} x_i(l, m) x_j(l+l_1, m+m_1) / (LM)
(0 ≤ l_1 ≤ L_1−1, 0 ≤ m_1 ≤ M_1−1)
・ Sum up the kth powers of the cross-correlations:
r_xi · r_xj = Σ_{l_1=0}^{L_1−1} Σ_{m_1=0}^{M_1−1} {cc_{xi,xj}(l_1, m_1)}^k / (L_1 M_1)
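A minimal sketch of the two sums above for images (the function name, default lag counts, and zero-based lag convention are assumptions):

```python
import numpy as np

def corr_kernel_2d(xi, xj, k=2, L1=10, M1=10):
    """kth-order correlation kernel for 2-D images via the
    cross-correlation shortcut: average of cc(l1, m1)^k over lags."""
    L, M = xi.shape
    cc = np.empty((L1, M1))
    for l1 in range(L1):
        for m1 in range(M1):
            # cc(l1, m1) = sum_{l,m} xi(l, m) xj(l+l1, m+m1) / (L M)
            cc[l1, m1] = np.sum(xi[:L - l1, :M - m1] * xj[l1:, m1:]) / (L * M)
    return np.sum(cc ** k) / (L1 * M1)
```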
Modified correlation kernels: higher-order and odd-order correlation kernels perform worse, so modified kernels taking the kth root and/or absolute values are introduced.
Correlation kernel (C): r_xi·r_xj = Σ_{l_1,m_1} {cc_{xi,xj}(l_1, m_1)}^k (14)
Lp norm kernel (P): r_xi·r_xj = sgn(Σ_{l_1,m_1} {cc_{xi,xj}(l_1, m_1)}^k) |Σ_{l_1,m_1} {cc_{xi,xj}(l_1, m_1)}^k|^{1/k} (15)
Absolute correlation kernel (A): r_xi·r_xj = Σ_{l_1,m_1} |cc_{xi,xj}(l_1, m_1)|^k (16)
Absolute Lp norm kernel (AP): r_xi·r_xj = |Σ_{l_1,m_1} {cc_{xi,xj}(l_1, m_1)}^k|^{1/k} (17)
Absolute Lp norm absolute kernel (APA): r_xi·r_xj = {Σ_{l_1,m_1} |cc_{xi,xj}(l_1, m_1)|^k}^{1/k} (18)
Max norm kernel (Max): r_xi·r_xj = max_{l_1,m_1} cc_{xi,xj}(l_1, m_1) (19)
Max norm absolute kernel (MaxA): r_xi·r_xj = max_{l_1,m_1} |cc_{xi,xj}(l_1, m_1)| (20)
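A sketch of Eqs. (14)–(20) given a precomputed cross-correlation map cc (the dispatch-by-symbol layout and function name are my assumptions):

```python
import numpy as np

def modified_kernel(cc, k, kind):
    """Modified correlation kernels (Eqs. (14)-(20)) from the
    cross-correlation map cc(l1, m1)."""
    s = np.asarray(cc, dtype=float).ravel()
    if kind == "C":        # (14) plain kth-order correlation kernel
        return np.sum(s ** k)
    if kind == "P":        # (15) Lp norm kernel: signed kth root of the sum
        t = np.sum(s ** k)
        return np.sign(t) * np.abs(t) ** (1.0 / k)
    if kind == "A":        # (16) absolute correlation kernel
        return np.sum(np.abs(s) ** k)
    if kind == "AP":       # (17) absolute Lp norm kernel
        return np.abs(np.sum(s ** k)) ** (1.0 / k)
    if kind == "APA":      # (18) absolute Lp norm absolute kernel
        return np.sum(np.abs(s) ** k) ** (1.0 / k)
    if kind == "Max":      # (19) max norm kernel
        return np.max(s)
    if kind == "MaxA":     # (20) max norm absolute kernel
        return np.max(np.abs(s))
    raise ValueError(f"unknown kernel kind: {kind}")
```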
6. Combining correlation kernels
Multiple classifiers may give higher performance than a single classifier.
Use the Cartesian product of the canonical variate spaces obtained with a set of kernel functions, e.g.,
U = (u_1, ···, u_{nc−1}), U' = (u'_1, ···, u'_{nc−1}), U'' = (u''_1, ···, u''_{nc−1})
→ U ⊗ U' ⊗ U'' = (u_1, ···, u_{nc−1}, u'_1, ···, u'_{nc−1}, u''_1, ···, u''_{nc−1})
[Illustration: an ensemble of M classifiers labels an object ('7', '9', '7', '7', ···, '1'); the final decision is '7']
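The combination step plus a nearest-neighbor decision in the product space can be sketched as (function names are my assumptions):

```python
import numpy as np

def combine(*variate_blocks):
    """Concatenate canonical variates from several kernels, giving a
    point in the Cartesian product space U x U' x U''."""
    return np.concatenate(variate_blocks)

def nearest_neighbor(train_U, train_labels, u):
    """1-NN classification in the combined canonical variate space."""
    d = np.linalg.norm(np.asarray(train_U) - u, axis=1)
    return train_labels[int(np.argmin(d))]
```

Concatenation keeps every kernel's variates as extra coordinates, so a single classifier in the product space plays the role of the classifier ensemble.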
7. Facial expression recognition experiment
Object: the JAFFE (Japanese Female Facial Expression) database
213 facial images of 10 Japanese females
3 or 4 examples of each of the 6 basic facial expressions
(happiness, sadness, surprise, anger, disgust, fear)
and a neutral face
8-bit grayscale images of 256×256 pixels
Figure 1. Sample images in JAFFE database. From left to right: happiness, sadness, surprise, anger, disgust, fear, neutral.
All 213 images in the JAFFE database
The center regions of 200×200 pixels are taken.
They are resized to 20×20 pixels by averaging 10×10 pixel blocks.
Preprocessing: linear normalization to mean 0 and SD 1.0.
Figure 2. The 20×20 real-valued matrix data of the images in Fig. 1, used as the first feature x in kCCA.
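The crop, block-average, and normalization steps above can be sketched as (the centered crop offset and the function name are assumptions):

```python
import numpy as np

def preprocess(img):
    """256x256 8-bit image -> 20x20 real-valued matrix:
    center-crop to 200x200, average over 10x10 blocks, then
    normalize linearly to mean 0 and SD 1."""
    crop = img[28:228, 28:228].astype(float)                 # central 200x200
    small = crop.reshape(20, 10, 20, 10).mean(axis=(1, 3))   # 20x20 averages
    return (small - small.mean()) / small.std()
```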
Semantic rating vectors (SRVs) in the JAFFE database:
Averages of semantic ratings on the 6 expressions (HAP, SAD, SUR, ANG, DIS, FEA) on 5-point scales, obtained from 60 Japanese females.

Image       SRV (HAP, SAD, SUR, ANG, DIS, FEA)
Happy       (4.39, 1.35, 2.29, 1.16, 1.23, 1.26)
Happy       (4.77, 1.29, 2.45, 1.26, 1.23, 1.23)
Sad         (1.39, 3.97, 1.68, 2.19, 3.68, 3.61)
Surprised   (2.87, 1.55, 4.68, 1.52, 1.52, 1.65)
Angry       (1.55, 1.90, 2.10, 4.32, 3.90, 1.81)
Neutral     (3.03, 2.16, 2.06, 1.94, 1.84, 1.87)
Experiment (I): facial expression recognition with 2 images of each expression per person.
Sample set: 2 images × 7 expressions × 10 persons = 140 images. Test set: the remaining 73 images.
Experiment (II): facial expression recognition with a leave-one-person-out method.
Sample set: images of 9 persons (about 190 images). Test set: images of the remaining 1 person (about 20 images). Results are averaged over 10 tests.
The kernel function φ and the second feature vector y are denoted by the pair (φ, y), using the following symbols.
For the kernel function φ of the image data: kth-order correlation kernel: Ck; kth-order Lp norm kernel: Pk; kth-order absolute correlation kernel: Ak; max norm kernel: Max; etc.
Total: 44 kinds
For the second feature vector y: indicator vector: IV; semantic rating vector: SRV
E.g., (C2, SRV) denotes the 2nd-order correlation kernel with the semantic rating vectors.
Classifier in the canonical variate space: the nearest neighbor method
8. Results of the experiment
Experiment (I): correct classification rates (CCRs) with single classifiers
[Bar chart: CCR vs. kernel (LG, G1, G10, C2–C10, P2–P10, A3–A9, AP3–AP9, APA3–APA9, M, K, S3–S10, Max), for IV and SRV]
Figure 3. CCRs with single kernel functions of 44 kinds in experiment (I). IV: indicator vector, SRV: semantic rating vector.
The highest CCR is obtained with indicator vectors (IV), not with semantic rating vectors (SRV):
Highest (IV) 94.5%, highest (SRV) 89%
Table 1(I). Highest CCR with single kernel functions and with combining two and three kernel functions in experiment (I).

No. of kernels | 2nd vector | Kernels of highest CCR | CCR (%)
1 | IV | (A5, IV) | 94.5
1 | SRV | (C4, SRV) | 89.0
2 | IV | (C2, IV), (C5, IV) | 95.9
2 | IV + SRV | (L, SRV), (Q3, SRV) | 95.9
3 | IV | (L, IV), (A9, IV), (S4, IV) | 95.9
3 | IV + SRV | (C5, IV), (C10, IV), (C2, SRV) | 97.3

Highest CCR (%) in past studies: 94.6 [5]
Combining correlation kernels increases the CCR.
The highest CCR (97.3%) exceeds that of the past studies (94.6%).
It is obtained with not only indicator vectors (IV) but also semantic rating vectors (SRV).
Experiment (II)
Figure 4. Highest CCRs with kernel functions of 44 kinds in 10 tests in the experiment (II). Single kernel (a), combining two (b) and three (c) kernels.
IV: indicator vector, SRV: semantic rating vector.
[Bar charts of CCR for tests 1–10 and their average: (a) 1 kernel (IV, SRV), (b) 2 kernels (IV, IV + SRV), (c) 3 kernels (IV, IV + SRV)]
The highest average CCR (the rightmost bars in Fig. 4(c)) is again obtained by combining 3 kernels including semantic rating vectors (SRV).
Table 1(II). Highest CCR with single kernel functions and with combining two and three kernel functions in experiment (II).

No. of kernels | 2nd vector | Kernels of highest CCR | CCR (%)
1 | IV | (P10, IV) | 51.1
1 | SRV | (P10, SRV) | 46.8
2 | IV | (L, IV), (P5, IV) | 63.3
2 | IV + SRV | (P5, IV), (G1, SRV) | 61.2
3 | IV | (L, IV), (C9, IV), (M, IV) | 65.2
3 | IV + SRV | (A7, IV), (M, IV), (C9, SRV) | 67.0

Highest CCR (%) in past studies: (77.0 [6])
The highest CCR (67.0%) is again obtained by combining 3 kernels including not only indicator vectors (IV) but also semantic rating vectors (SRV).
9. Conclusion
KCCA with multiple correlation kernels and Kansei information was applied to facial expression recognition in experiments with the JAFFE database.
High correct classification rates (CCRs), comparable to those of past studies, were obtained with correlation kernel CCA without any feature extraction.
Combining multiple correlation kernels and Kansei information in the form of semantic rating vectors contributed to increasing the CCRs.