23. Fig. 1: Are $x_1$ and $x_2$ conditionally independent given the remaining variables? Start from the definition of the conditional:
$$p(x_1, x_2 \mid x_3, \dots, x_M) = \frac{p(x)}{p(x_3, \dots, x_M)}, \qquad p(x_3, \dots, x_M) = \int dx_1\, dx_2\, p(x).$$
24. Fig. 2: Substituting the zero-mean Gaussian $p(x) = (2\pi)^{-M/2} (\det\Lambda)^{1/2} \exp\{-\frac{1}{2} x^\top \Lambda x\}$ into the numerator:
$$p(x_1, x_2 \mid x_3, \dots, x_M) = \frac{(2\pi)^{-M/2} (\det\Lambda)^{1/2}}{p(x_3, \dots, x_M)} \exp\Big\{ -\frac{1}{2} x^\top \Lambda x \Big\}.$$
25. Expanding the quadratic form and using the symmetry $\Lambda_{i,j} = \Lambda_{j,i}$:
$$x^\top \Lambda x = \sum_{i=1}^{M} \sum_{j=1}^{M} x_i \Lambda_{i,j} x_j = \sum_{i=1}^{2} \sum_{j=1}^{2} x_i \Lambda_{i,j} x_j + 2 \sum_{i=1}^{2} \sum_{j=3}^{M} x_i \Lambda_{i,j} x_j + \sum_{i=3}^{M} \sum_{j=3}^{M} x_i \Lambda_{i,j} x_j$$
$$= \Lambda_{1,1} x_1^2 + 2 \Lambda_{1,2} x_1 x_2 + \Lambda_{2,2} x_2^2 + 2 x_1 \sum_{j=3}^{M} \Lambda_{1,j} x_j + 2 x_2 \sum_{j=3}^{M} \Lambda_{2,j} x_j + \text{const.},$$
where "const." collects the terms that involve neither $x_1$ nor $x_2$.
26. Keeping only the $x_1$-dependent (respectively $x_2$-dependent) terms gives the single-variable conditionals:
$$p(x_1 \mid x_3, \dots, x_M) \propto \exp\Big\{ -\frac{1}{2}\Big( \Lambda_{1,1} x_1^2 + 2 x_1 \sum_{j=3}^{M} \Lambda_{1,j} x_j \Big) \Big\},$$
$$p(x_2 \mid x_3, \dots, x_M) \propto \exp\Big\{ -\frac{1}{2}\Big( \Lambda_{2,2} x_2^2 + 2 x_2 \sum_{j=3}^{M} \Lambda_{2,j} x_j \Big) \Big\}.$$
27. Compare the joint conditional with the product of the two single-variable conditionals:
$$p(x_1, x_2 \mid x_3, \dots, x_M) \propto \exp\Big\{ -\frac{1}{2}\Big( \Lambda_{1,1} x_1^2 + 2 \Lambda_{1,2} x_1 x_2 + \Lambda_{2,2} x_2^2 + 2 x_1 \sum_{j=3}^{M} \Lambda_{1,j} x_j + 2 x_2 \sum_{j=3}^{M} \Lambda_{2,j} x_j \Big) \Big\},$$
$$p(x_1 \mid x_3, \dots, x_M) \propto \exp\Big\{ -\frac{1}{2}\Big( \Lambda_{1,1} x_1^2 + 2 x_1 \sum_{j=3}^{M} \Lambda_{1,j} x_j \Big) \Big\},$$
$$p(x_2 \mid x_3, \dots, x_M) \propto \exp\Big\{ -\frac{1}{2}\Big( \Lambda_{2,2} x_2^2 + 2 x_2 \sum_{j=3}^{M} \Lambda_{2,j} x_j \Big) \Big\}.$$
28. The two sides differ only in the cross term $2 \Lambda_{1,2} x_1 x_2$, so
$$p(x_1, x_2 \mid x_3, \dots, x_M) = p(x_1 \mid x_3, \dots, x_M)\, p(x_2 \mid x_3, \dots, x_M) \iff \Lambda_{1,2} = 0.$$
In general, $x_i$ and $x_j$ are conditionally independent given the other variables if and only if $\Lambda_{i,j} = 0$.
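This equivalence can also be seen numerically: a pair with no edge in the precision matrix may still be marginally correlated, yet becomes independent once we condition on the rest. A small sketch with illustrative values (a 5-variable graph in which $x_1$ and $x_2$ share no edge but share the neighbour $x_3$):

```python
import numpy as np

M = 5
Lam = 2.0 * np.eye(M)                             # precision matrix
for (i, j) in [(0, 2), (1, 2), (2, 3), (3, 4)]:   # graph edges; note: no (0, 1) edge,
    Lam[i, j] = Lam[j, i] = -0.5                  # so Lambda_{1,2} = 0

Sigma = np.linalg.inv(Lam)
a, b = [0, 1], [2, 3, 4]

# Marginally, x1 and x2 ARE correlated (path through the shared neighbour x3):
print(abs(Sigma[0, 1]) > 1e-3)                    # True

# Conditional covariance of (x1, x2) given the rest: Schur complement of Sigma
Sig_ab = (Sigma[np.ix_(a, a)]
          - Sigma[np.ix_(a, b)] @ np.linalg.inv(Sigma[np.ix_(b, b)]) @ Sigma[np.ix_(b, a)])

# The conditional precision is exactly the (x1, x2) block of Lambda,
# whose off-diagonal is zero -> conditional independence.
print(np.allclose(np.linalg.inv(Sig_ab), Lam[np.ix_(a, a)]))   # True
print(abs(Sig_ab[0, 1]) < 1e-12)                               # True
```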
29. Fig. 3: block partition used in the graphical-lasso coordinate update. Permute one variable to the last position and partition $\Lambda$, $W = \Lambda^{-1}$, and the sample covariance $S$ conformably:
$$\Lambda = \begin{pmatrix} L & l \\ l^\top & \lambda \end{pmatrix}, \qquad W = \begin{pmatrix} \tilde{W} & w \\ w^\top & \tilde{w} \end{pmatrix}, \qquad S = \begin{pmatrix} R & s \\ s^\top & r \end{pmatrix}.$$
The algorithm cycles through the variables, updating one column at a time via $\hat{w} = \arg\min_w \{ \cdots + \rho \lVert w \rVert_1 \}$.
30. Each column update reduces to a standard Lasso problem:
$$\hat{w} = \arg\min_w \Big\{ \frac{1}{2} \big\lVert \tilde{W}^{1/2} w - b \big\rVert^2 + \rho \lVert w \rVert_1 \Big\}, \qquad \text{where } b = \tilde{W}^{-1/2} s,$$
with $\tilde{W}$ the $(M-1) \times (M-1)$ block of $W = \Lambda^{-1}$ and $s$ the corresponding off-diagonal block of $S$. The whole graphical lasso can therefore be solved by repeatedly calling a Lasso solver.
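In practice one rarely codes these coordinate updates by hand: scikit-learn ships the algorithm as `sklearn.covariance.GraphicalLasso`, whose `alpha` plays the role of the penalty $\rho$. A hedged sketch on synthetic zero-mean data whose true precision matrix is a sparse chain (all sizes and values are illustrative):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground truth: sparse chain-structured precision matrix (M = 5)
M = 5
Lam = 2.0 * np.eye(M)
for i in range(M - 1):
    Lam[i, i + 1] = Lam[i + 1, i] = -0.6
Sigma = np.linalg.inv(Lam)

# Zero-mean Gaussian samples with that covariance
X = rng.multivariate_normal(np.zeros(M), Sigma, size=5000)

model = GraphicalLasso(alpha=0.05).fit(X)   # alpha = l1 penalty strength (rho)
est = model.precision_
print(np.round(est, 2))                     # entries off the chain are shrunk towards zero
```

With enough samples and a moderate `alpha`, the estimated precision recovers the chain: entries adjacent to the diagonal stay large while distant entries shrink towards zero.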
31. Fig. 4: per-variable anomaly score. Write $x_{-i}$ for $x$ with $x_i$ removed, $\lambda_i = \Lambda_{i,i}$, $l_i = (\Lambda_{i,j})_{j \neq i}$, and let $L$ be the precision matrix of the marginal $p(x_{-i})$, so that $\det\Lambda = \lambda_i \det L$. Then
$$a_i(x) = -\ln p(x_i \mid x_{-i}) = -\ln \frac{(2\pi)^{-M/2} (\det\Lambda)^{1/2} \exp\{ -\frac{1}{2} x^\top \Lambda x \}}{(2\pi)^{-(M-1)/2} (\det L)^{1/2} \exp\{ -\frac{1}{2} x_{-i}^\top L x_{-i} \}}$$
$$= \frac{1}{2} \ln 2\pi - \frac{1}{2} \ln \frac{\det\Lambda}{\det L} + \frac{1}{2} \big( x^\top \Lambda x - x_{-i}^\top L x_{-i} \big)$$
$$= \frac{1}{2} \ln \frac{2\pi}{\lambda_i} + \frac{1}{2} \Big( \lambda_i x_i^2 + 2 x_i\, l_i^\top x_{-i} + \frac{(l_i^\top x_{-i})^2}{\lambda_i} \Big) = \frac{1}{2} \ln \frac{2\pi}{\lambda_i} + \frac{\lambda_i}{2} \Big( x_i + \frac{l_i^\top x_{-i}}{\lambda_i} \Big)^2.$$
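The closed form for $a_i(x)$ is easy to check against a direct evaluation of the conditional Gaussian density $N(x_i \mid -\frac{1}{\lambda_i} l_i^\top x_{-i},\, 1/\lambda_i)$. A sketch with illustrative values (zero-mean model; scipy's `norm.logpdf` as the reference):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M))
Lam = A @ A.T + M * np.eye(M)        # positive-definite precision matrix
Sigma = np.linalg.inv(Lam)
x = rng.multivariate_normal(np.zeros(M), Sigma)

i = 2
lam_i = Lam[i, i]
# Closed form, written as (sum_j Lam_ij x_j)^2 / (2 lam_i), which equals
# (lam_i / 2) * (x_i + l_i^T x_{-i} / lam_i)^2
a_i = 0.5 * np.log(2 * np.pi / lam_i) + (Lam[i] @ x) ** 2 / (2 * lam_i)

# Reference: -ln N(x_i | conditional mean, 1/lam_i)
mean_i = -(Lam[i] @ x - lam_i * x[i]) / lam_i    # -(1/lam_i) * sum_{j != i} Lam_ij x_j
ref = -norm.logpdf(x[i], loc=mean_i, scale=np.sqrt(1.0 / lam_i))
print(np.isclose(a_i, ref))                      # True
```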
34. [5] J. Friedman et al. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9:432-441, 2008.
[6] O. Banerjee et al. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. Journal of Machine Learning Research, 9:485-516, 2008.
Both [5] and [6] estimate a sparse precision matrix by l1-penalized (Lasso-type) maximum likelihood; [5] is the graphical lasso.
35. [7] T. Ide et al. Sparse Gaussian Markov Random Field Mixtures for Anomaly Detection. Proceedings of the 2016 IEEE International Conference on Data Mining, 955-960, 2016.
[8] S. Liu et al. Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation. Neural Computation, 26:1169-1197, 2014.
[7] extends the Lasso-based sparse GMRF approach to mixture models for anomaly detection; [8] learns sparse changes between two Markov networks directly by density-ratio estimation.