Wang et al. Journal of Inequalities and Applications (2016) 2016:296 DOI 10.1186/s13660-016-1231-9

RESEARCH  Open Access

Least-squares Hermitian problem of complex matrix equation (AXB, CXD) = (E, F)

Peng Wang1,2*, Shifang Yuan1 and Xiangyun Xie1

*Correspondence: [email protected]
1 School of Mathematics and Computational Science, Wuyi University, Jiangmen, 529020, P.R. China
2 School of Computer Science and Technology, Wuhan University of Technology, Wuhan, 430070, P.R. China

Abstract
In this paper, we present a direct method to solve the least-squares Hermitian problem of the complex matrix equation (AXB, CXD) = (E, F) with arbitrary complex coefficient matrices A, B, C, D and right-hand sides E, F. The method determines the least-squares Hermitian solution with the minimum norm. It relies on a matrix-vector product and the Moore-Penrose generalized inverse. Numerical experiments are presented which demonstrate the efficiency of the proposed method.

Keywords: matrix equation; least-squares solution; Hermitian matrices; Moore-Penrose generalized inverse

1 Introduction

Let A, B, C, D, E, and F be given matrices of suitable sizes over the complex number field. We are interested in the analysis of the linear matrix equation

(AXB, CXD) = (E, F)  (1)

to be solved for X ∈ Cn×n. Here and in the following, Cm×n denotes the set of all m × n complex matrices, while the set of all m × n real matrices is denoted by Rm×n. In particular, we focus on the least-squares Hermitian solution of (1) with the least norm, which can be described as follows.

Problem 1 Given A ∈ Cm×n, B ∈ Cn×s, C ∈ Cm×n, D ∈ Cn×t, E ∈ Cm×s, and F ∈ Cm×t, let

HL = {X | X ∈ HCn×n, ‖AXB − E‖² + ‖CXD − F‖² = min_{X∈HCn×n} (‖AXB − E‖² + ‖CXD − F‖²)}.

Find XH ∈ HL such that

‖XH‖ = min_{X∈HL} ‖X‖,  (2)

where HCn×n denotes the set of all n × n Hermitian matrices.

© Wang et al. 2016. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Correspondingly, the set of all n × n real symmetric matrices and the set of all n × n realanti-symmetric matrices are denoted by SRn×n and ASRn×n, respectively.

Nowadays, matrix equations are very useful in numerous applications such as control theory [1, 2], vibration theory [3], image processing [4, 5], and so on. Therefore it is an active area of research to solve different matrix equations [6–15]. For the real matrix equation (1), least-squares solutions with the minimum norm were obtained in [16] by using the generalized singular value decomposition and the canonical correlation decomposition (CCD) of matrices. In [17], the quaternion matrix equation (1) was considered, and the least-squares solution with the least norm was given through the use of the Kronecker product and the Moore-Penrose generalized inverse. Although matrix equations of the form (1) have been studied in the literature, little or no attention has been paid to the least-squares Hermitian solution of (1) over the complex field, which is studied in this paper. A special vectorization of the matrix equation (1), as defined in [18], is carried out, and Problem 1 is turned into the unconstrained least-squares problem of a system of real linear equations.

The notation used in this paper is summarized as follows. For A ∈ Cm×n, the symbols AT, AH, and A+ denote the transpose, the conjugate transpose, and the Moore-Penrose generalized inverse of the matrix A, respectively. The identity matrix is denoted by I. For A = (aij) ∈ Cm×n and B = (bij) ∈ Cp×q, the Kronecker product of A and B is defined by A ⊗ B = (aijB) ∈ Cmp×nq. For a matrix A ∈ Cm×n, the stretching operator vec(A) is defined by vec(A) = (a1T, a2T, . . . , anT)T, where ai is the ith column of A.

For all A, B ∈ Cm×n, we define the inner product 〈A, B〉 = tr(AHB). Then Cm×n is a Hilbert inner product space, and the norm of a matrix generated by this inner product is the Frobenius norm ‖ · ‖. Further, denote the linear space Cm×n × Cm×t = {[A, B] | A ∈ Cm×n, B ∈ Cm×t}; for matrix pairs [Ai, Bi] ∈ Cm×n × Cm×t (i = 1, 2), we can define their inner product as 〈[A1, B1], [A2, B2]〉 = tr(A1HA2) + tr(B1HB2). Then Cm×n × Cm×t is also a Hilbert inner product space. The Frobenius norm of a matrix pair [A, B] ∈ Cm×n × Cm×t can be derived:

‖[A, B]‖ = √(〈[A, B], [A, B]〉) = √(tr(AHA) + tr(BHB)) = √(‖A‖² + ‖B‖²).

The structure of the paper is the following. In Section 2, we deduce some results in the Hilbert inner product space Cm×n × Cm×t which are important for our main results. In Section 3, we introduce a matrix-vector product for the matrices and vectors of Cm×n, based on which we consider the structure of (AXB, CXD) over X ∈ HCn×n. In Section 4, we derive the explicit expression for the solution of Problem 1. Finally, we present the algorithm for Problem 1 and perform some numerical experiments.

2 Some preliminaries

In this section, we will prove some theorems which are important for the proof of our main result. Given A ∈ Rm×n and b ∈ Rm, the solution of the linear equations Ax = b involves two cases: inconsistent and consistent. The former leads to the solution in the least-squares sense, which can be expressed as argmin_{x∈Rn} ‖Ax − b‖. This problem can be solved by


solving the corresponding normal equations:

ATAx = ATb;  (3)

moreover, we have

{x | x ∈ Rn, ‖Ax − b‖ = min} = {x | x ∈ Rn, ATAx = ATb}.

As for the latter, (3) still holds; further, the solution set of Ax = b and that of (3) are the same, that is,

{x | x ∈ Rn, Ax = b} = {x | x ∈ Rn, ATAx = ATb}.
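The equivalence above is easy to check numerically. The following NumPy sketch (the matrix A and vector b are arbitrary test data, not from the paper) compares a direct least-squares solve with the solution of the normal equations:

```python
import numpy as np

# The least-squares solutions of Ax = b coincide with the solutions of the
# (always consistent) normal equations A^T A x = A^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)       # 6 equations, 3 unknowns: generically inconsistent

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # argmin ||Ax - b||
x_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal-equations solution

assert np.allclose(x_ls, x_ne)
assert np.allclose(A.T @ (A @ x_ne - b), 0)    # residual is orthogonal to range(A)
```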

It should be noticed that (3) is always consistent. Therefore, solving Ax = b is usually translated into solving the corresponding consistent equations (3). In the following, we will extend this conclusion to a more general case. To do this, we first give the following problem.

Problem 2 Given complex matrices A1, A2, . . . , Al ∈ Cm×n, B1, B2, . . . , Bl ∈ Cm×t, C ∈ Cm×n, and D ∈ Cm×t, find k = (k1, k2, . . . , kl)T ∈ Rl such that

∑_{i=1}^{l} ki[Ai, Bi] = [∑_{i=1}^{l} kiAi, ∑_{i=1}^{l} kiBi] = [C, D].  (4)

Theorem 1 Assume that the matrix equation (4) in Problem 2 is consistent. Let

Ei = [Re(Ai); Im(Ai); Re(Bi); Im(Bi)],  F = [Re(C); Im(C); Re(D); Im(D)],

where [M1; M2; . . .] denotes the vertical stacking of the blocks M1, M2, . . . . Then the set of vectors k satisfying (4) is exactly the solution set of the following consistent system:

[〈E1, E1〉 〈E1, E2〉 · · · 〈E1, El〉; 〈E2, E1〉 〈E2, E2〉 · · · 〈E2, El〉; . . . ; 〈El, E1〉 〈El, E2〉 · · · 〈El, El〉] [k1; k2; . . . ; kl] = [〈E1, F〉; 〈E2, F〉; . . . ; 〈El, F〉].  (5)

Proof By (4), we have

[∑ kiAi, ∑ kiBi] = [C, D] ⟺ ∑ ki[Ai, Bi] = [C, D] ⟺ ∑ kiEi = F ⟺ ∑ ki vec(Ei) = vec(F),

where the sums run over i = 1, . . . , l. The last system has coefficient matrix M = (vec(E1), vec(E2), . . . , vec(El)) and right-hand side vec(F); since it is assumed consistent, its solution set coincides with that of the normal equations MTMk = MTvec(F). As (MTM)ij = vec(Ei)Tvec(Ej) = 〈Ei, Ej〉 and (MTvec(F))i = 〈Ei, F〉, the normal equations are exactly the system (5). Thus we have (5). □

We now turn to the case where the matrix equation (4) in Problem 2 is inconsistent; as in the inconsistent case of Ax = b, the related least-squares problem should then be considered.

Problem 3 Given matrices A1, A2, . . . , Al ∈ Cm×n, B1, B2, . . . , Bl ∈ Cs×t, C ∈ Cm×n, and D ∈ Cs×t, find

k = (k1, k2, . . . , kl)T ∈ Rl

such that

‖k1A1 + k2A2 + · · · + klAl − C‖² + ‖k1B1 + k2B2 + · · · + klBl − D‖² = min.  (6)


Based on the results above, we state the following theorem, which shows that solving Problem 3 is equivalent to solving the consistent system (5).

Theorem 2 Suppose that the notation and conditions are the same as in Theorem 1. Then the solution set of Problem 3 is the solution set of system (5).

Proof By (6), we have

‖[∑ kiAi − C, ∑ kiBi − D]‖²
= ‖(∑ ki Re(Ai) − Re(C)) + √−1 (∑ ki Im(Ai) − Im(C))‖² + ‖(∑ ki Re(Bi) − Re(D)) + √−1 (∑ ki Im(Bi) − Im(D))‖²
= ‖∑ ki [Re(Ai); Im(Ai); Re(Bi); Im(Bi)] − [Re(C); Im(C); Re(D); Im(D)]‖²
= ‖∑ kiEi − F‖²,

with sums over i = 1, . . . , l. Hence k minimizes (6) if and only if k is a least-squares solution of ∑ ki vec(Ei) = vec(F), that is, if and only if k solves the corresponding normal equations, which, as shown in the proof of Theorem 1, are exactly the system (5). This implies the conclusion. □
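This equivalence can be verified numerically. The NumPy sketch below (Evec is our illustrative name, and the random matrices Ai, Bi, C, D are placeholder data) builds the Gram system from the stacked real representations Ei and checks that its solution matches a direct least-squares solve of ∑ ki [Ai, Bi] ≈ [C, D]:

```python
import numpy as np

rng = np.random.default_rng(6)
l, m, n = 3, 4, 5
cplx = lambda: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
As = [cplx() for _ in range(l)]
Bs = [cplx() for _ in range(l)]
C, D = cplx(), cplx()

def Evec(Ai, Bi):
    """Vectorized stack [Re Ai; Im Ai; Re Bi; Im Bi]."""
    return np.concatenate([Ai.real.ravel(), Ai.imag.ravel(),
                           Bi.real.ravel(), Bi.imag.ravel()])

Es = [Evec(Ai, Bi) for Ai, Bi in zip(As, Bs)]
Fv = Evec(C, D)
Gram = np.array([[Ei @ Ej for Ej in Es] for Ei in Es])   # entries <Ei, Ej>
g = np.array([Ei @ Fv for Ei in Es])                     # entries <Ei, F>
k = np.linalg.solve(Gram, g)                             # Gram generically nonsingular

M = np.column_stack(Es)
k_ls, *_ = np.linalg.lstsq(M, Fv, rcond=None)            # direct least-squares solve
assert np.allclose(k, k_ls)
```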

3 The structure of (AXB, CXD) over X ∈ HCn×n

Based on the discussion in [18], we first recall a matrix-vector product for vectors and matrices in Cm×n.

Definition 1 Let x = (x1, x2, . . . , xk)T ∈ Ck, y = (y1, y2, . . . , yk)T ∈ Ck, and A = (A1, A2, . . . , Ak), Ai ∈ Cm×n (i = 1, 2, . . . , k). Define
(i) A ∘ x = x1A1 + x2A2 + · · · + xkAk ∈ Cm×n;
(ii) A ∘ (x, y) = (A ∘ x, A ∘ y).

From Definition 1, we list the following facts, which are useful for solving Problem 1. Let y = (y1, y2, . . . , yk)T ∈ Ck, P = (P1, P2, . . . , Pk), Pi ∈ Cm×n (i = 1, 2, . . . , k), and a, b ∈ C.
(i) xH ∘ y = xHy = (x, y);
(ii) A ∘ x + P ∘ x = (A + P) ∘ x;
(iii) A ∘ (ax + by) = a(A ∘ x) + b(A ∘ y);
(iv) (aA + bP) ∘ x = a(A ∘ x) + b(P ∘ x);
(v) (A, P) ∘ [x; y] = A ∘ x + P ∘ y;
(vi) [A; P] ∘ x = [A ∘ x; P ∘ x];
(vii) vec(A ∘ x) = (vec(A1), vec(A2), . . . , vec(Ak))x;
(viii) vec((aA + bP) ∘ x) = a vec(A ∘ x) + b vec(P ∘ x).
Suppose B = (B1, B2, . . . , Bs) ∈ Ck×s, Bi ∈ Ck (i = 1, 2, . . . , s), C = (C1, C2, . . . , Ct) ∈ Ck×t, Ci ∈ Ck (i = 1, 2, . . . , t), D ∈ Cl×m, H ∈ Cn×q. Then


(ix) D(A ∘ x) = (DA) ∘ x;
(x) (A ∘ x)H = (A1H, A2H, . . . , AkH) ∘ x̄.
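A few of these facts can be illustrated directly. The NumPy sketch below (circ is our illustrative name for the ∘ product; the matrices are arbitrary test data) checks the linearity properties (ii), (iii), and (ix):

```python
import numpy as np

def circ(mats, x):
    """(A1, ..., Ak) ∘ x = x1*A1 + ... + xk*Ak."""
    return sum(xi * M for xi, M in zip(x, mats))

rng = np.random.default_rng(1)
A = [rng.standard_normal((2, 3)) for _ in range(4)]
P = [rng.standard_normal((2, 3)) for _ in range(4)]
x = rng.standard_normal(4)
y = rng.standard_normal(4)
D = rng.standard_normal((5, 2))

# (ii)  A ∘ x + P ∘ x = (A + P) ∘ x
assert np.allclose(circ(A, x) + circ(P, x),
                   circ([Ai + Pi for Ai, Pi in zip(A, P)], x))
# (iii) A ∘ (ax + by) = a(A ∘ x) + b(A ∘ y)
assert np.allclose(circ(A, 2 * x + 3 * y), 2 * circ(A, x) + 3 * circ(A, y))
# (ix)  D(A ∘ x) = (DA) ∘ x
assert np.allclose(D @ circ(A, x), circ([D @ Ai for Ai in A], x))
```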

For X = Re X + √−1 Im X ∈ HCn×n, XH = X gives (Re X + √−1 Im X)H = Re X + √−1 Im X. Thus we get (Re X)T = Re X and (Im X)T = −Im X.

Definition 2 For a matrix A ∈ Rn×n, let a1 = (a11, a21, . . . , an1), a2 = (a22, a32, . . . , an2), . . . , an−1 = (a(n−1)(n−1), an(n−1)), an = ann. The operator vecS(A) is defined by

vecS(A) = (a1, a2, . . . , an−1, an)T ∈ Rn(n+1)/2.  (7)

Definition 3 For a matrix B ∈ Rn×n, let b1 = (b21, b31, . . . , bn1), b2 = (b32, b42, . . . , bn2), . . . , bn−2 = (b(n−1)(n−2), bn(n−2)), bn−1 = bn(n−1). The operator vecA(B) is defined by

vecA(B) = (b1, b2, . . . , bn−2, bn−1)T ∈ Rn(n−1)/2.  (8)

Let

Eij = (est) ∈ Rn×n,  (9)

where est = 1 if (s, t) = (i, j), and est = 0 otherwise.

Let

KS = (E11, E21 + E12, . . . , En1 + E1n, E22, E32 + E23, . . . , En2 + E2n, . . . , E(n−1)(n−1), En(n−1) + E(n−1)n, Enn).  (10)

Note that KS ∈ Rn×(n²(n+1)/2).

Let

KA = (E21 − E12, . . . , En1 − E1n, E32 − E23, . . . , En2 − E2n, . . . , En(n−1) − E(n−1)n).  (11)

Note that KA ∈ Rn×(n²(n−1)/2).

Based on Definitions 2 and 3, (10), and (11), we get the following lemmas, which are necessary for our main results.

Lemma 1 Suppose X ∈ Rn×n. Then
(i) X ∈ SRn×n ⟺ X = KS ∘ vecS(X),  (12)
(ii) X ∈ ASRn×n ⟺ X = KA ∘ vecA(X).  (13)


Lemma 2 Suppose X = Re X + √−1 Im X ∈ Cn×n. Then

X ∈ HCn×n ⟺ X = KS ∘ vecS(Re X) + √−1 KA ∘ vecA(Im X).  (14)

Lemma 3 Given A ∈ Cm×n, B ∈ Cn×s, C ∈ Cp×n, D ∈ Cn×q, and X = Re X + √−1 Im X ∈ HCn×n. The complex matrices Fij ∈ Cm×s, Gij ∈ Cm×s, Hij ∈ Cp×q, and Kij ∈ Cp×q (i, j = 1, 2, . . . , n; i ≥ j) are defined by

Fij = AiBj if i = j, and Fij = AiBj + AjBi if i > j;
Gij = 0 if i = j, and Gij = √−1 (AiBj − AjBi) if i > j;
Hij = CiDj if i = j, and Hij = CiDj + CjDi if i > j;
Kij = 0 if i = j, and Kij = √−1 (CiDj − CjDi) if i > j,

where Ai ∈ Cm and Ci ∈ Cp are the ith column vectors of A and C, while Bj ∈ Cs and Dj ∈ Cq are the jth row vectors of B and D, respectively. Then

[AXB, CXD] = [(F11, F21, . . . , Fn1, F22, F32, . . . , Fn2, . . . , F(n−1)(n−1), Fn(n−1), Fnn, G21, G31, . . . , Gn1, G32, . . . , Gn2, . . . , Gn(n−1)) ∘ [vecS(Re X); vecA(Im X)], (H11, H21, . . . , Hn1, H22, H32, . . . , Hn2, . . . , H(n−1)(n−1), Hn(n−1), Hnn, K21, K31, . . . , Kn1, K32, . . . , Kn2, . . . , Kn(n−1)) ∘ [vecS(Re X); vecA(Im X)]].

Proof By Lemma 2, we can get

AXB = A[KS ∘ vecS(Re X) + √−1 (KA ∘ vecA(Im X))]B
= [(AKS) ∘ vecS(Re X) + √−1 (AKA) ∘ vecA(Im X)]B
= [(AKS) ∘ vecS(Re X)]B + √−1 [(AKA) ∘ vecA(Im X)]B
= [(A(E11, E21 + E12, . . . , En(n−1) + E(n−1)n, Enn)) ∘ vecS(Re X)]B + √−1 [(A(E21 − E12, . . . , En1 − E1n, . . . , En(n−1) − E(n−1)n)) ∘ vecA(Im X)]B
= (AE11B, A(E21 + E12)B, . . . , A(En(n−1) + E(n−1)n)B, AEnnB) ∘ vecS(Re X) + √−1 (A(E21 − E12)B, . . . , A(En(n−1) − E(n−1)n)B) ∘ vecA(Im X)
= (A1B1, A2B1 + A1B2, . . . , AnBn−1 + An−1Bn, AnBn) ∘ vecS(Re X) + √−1 (A2B1 − A1B2, . . . , AnB1 − A1Bn, . . . , AnBn−1 − An−1Bn) ∘ vecA(Im X)
= (F11, F21, . . . , Fn(n−1), Fnn) ∘ vecS(Re X) + (G21, G31, . . . , Gn(n−1)) ∘ vecA(Im X)
= (F11, F21, . . . , F(n−1)(n−1), Fn(n−1), Fnn, G21, G31, . . . , Gn(n−1)) ∘ [vecS(Re X); vecA(Im X)].


Similarly, we have

CXD = (H11, H21, . . . , H(n−1)(n−1), Hn(n−1), Hnn, K21, K31, . . . , Kn(n−1)) ∘ [vecS(Re X); vecA(Im X)].

Thus we get the structure of [AXB, CXD] and complete the proof. □
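The structure of AXB can be verified numerically. In the NumPy sketch below (FG_lists, vecS, vecA are our illustrative names), Ai denotes the ith column of A and Bj the jth row of B, and the expansion over the Fij and Gij is compared against a direct product with a random Hermitian X:

```python
import numpy as np

def vecS(A):
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])

def vecA(B):
    return np.concatenate([B[j + 1:, j] for j in range(B.shape[0])])

def FG_lists(A, B):
    """Fij and Gij of the lemma, in vecS / vecA (column-major, i >= j) order."""
    n = A.shape[1]
    col = lambda M, i: M[:, [i]]          # i-th column as a column matrix
    row = lambda M, j: M[[j], :]          # j-th row as a row matrix
    F, G = [], []
    for j in range(n):
        for i in range(j, n):
            AiBj = col(A, i) @ row(B, j)
            F.append(AiBj if i == j else AiBj + col(A, j) @ row(B, i))
    for j in range(n):
        for i in range(j + 1, n):
            G.append(1j * (col(A, i) @ row(B, j) - col(A, j) @ row(B, i)))
    return F, G

rng = np.random.default_rng(3)
m, n, s = 3, 4, 2
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((n, s)) + 1j * rng.standard_normal((n, s))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = X + X.conj().T                                    # Hermitian

F, G = FG_lists(A, B)
coeffs = np.concatenate([vecS(X.real), vecA(X.imag)])
AXB = sum(c * M for c, M in zip(coeffs, F + G))
assert np.allclose(AXB, A @ X @ B)
```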

4 The solution of Problem 1

Based on the above results, in this section we deduce the solution of Problem 1. From [18], the least-squares problem

‖AXB − E‖² + ‖CXD − F‖² = min

with respect to the Hermitian matrix X is equivalent to

‖[P1; P2] [vecS(Re X); vecA(Im X)] − [vec(Re E); vec(Re F); vec(Im E); vec(Im F)]‖ = min,  (15)

where

P = [(BT ⊗ A)LS  √−1 (BT ⊗ A)LA; (DT ⊗ C)LS  √−1 (DT ⊗ C)LA],  P1 = Re(P),  P2 = Im(P),

with LS = (vec(E11), vec(E21 + E12), . . . , vec(Enn)) and LA = (vec(E21 − E12), . . . , vec(En(n−1) − E(n−1)n)), so that vec(Re X) = LS vecS(Re X) and vec(Im X) = LA vecA(Im X). It should be noticed that (15) is an unconstrained problem over the real field and can easily be solved by existing methods; therefore, the original complex constrained Problem 1 is translated into an equivalent real unconstrained problem (15). Since the process has been expressed in [18], we omit it here. Based on Theorems 1, 2, and Lemma 3, we now turn to the least-squares Hermitian problem for the matrix equation (1). The following lemma is necessary for our main results.
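The reformulation rests on the standard Kronecker identity vec(AXB) = (BT ⊗ A) vec(X), where vec stacks columns. A minimal NumPy check with arbitrary complex data:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

vec = lambda M: M.flatten('F')          # column-stacking vec operator
# vec(AXB) = (B^T ⊗ A) vec(X); note the plain (not conjugate) transpose of B
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(A @ X @ B))
```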

Lemma 4 ([19]) Given A ∈ Rm×n and b ∈ Rm, the solution of the equation Ax = b involves two cases:
(i) The equation has a solution x ∈ Rn, and the general solution can be formulated as

x = A+b + (I − A+A)y,  (16)

if and only if AA+b = b, where y ∈ Rn is an arbitrary vector.
(ii) The least-squares solutions of the equation have the same formulation as (16), and the least-squares solution with the minimum norm is x = A+b.
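The lemma can be illustrated with the Moore-Penrose inverse in NumPy (A and b are arbitrary; an underdetermined system is chosen so the solution set is nontrivial):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 6))       # 4 equations, 6 unknowns: consistent, non-unique
b = rng.standard_normal(4)

Aplus = np.linalg.pinv(A)
x0 = Aplus @ b                                 # minimum-norm solution A+ b
y = rng.standard_normal(6)
x1 = x0 + (np.eye(6) - Aplus @ A) @ y          # another solution of Ax = b

assert np.allclose(A @ x0, b)                  # AA+ b = b here, so x0 solves Ax = b
assert np.allclose(A @ x1, b)                  # the whole family solves Ax = b
assert np.linalg.norm(x0) <= np.linalg.norm(x1) + 1e-9   # x0 has minimum norm
```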


For convenience, we introduce the following notation. Given A ∈ Cm×n, B ∈ Cn×s, C ∈ Cp×n, D ∈ Cn×q, E ∈ Cm×s, and F ∈ Cp×q, let

Φij = [Re(Fij); Im(Fij); Re(Hij); Im(Hij)],  Υij = [Re(Gij); Im(Gij); Re(Kij); Im(Kij)],  Ω = [Re(E); Im(E); Re(F); Im(F)],

W = [P U; UT V],  e = [e1; e2],  (17)

where n ≥ i ≥ j ≥ 1 and, with the index pairs (i, j) ordered as in vecS and vecA,

P = [〈Φ11, Φ11〉 〈Φ11, Φ21〉 · · · 〈Φ11, Φnn〉; 〈Φ21, Φ11〉 〈Φ21, Φ21〉 · · · 〈Φ21, Φnn〉; . . . ; 〈Φnn, Φ11〉 〈Φnn, Φ21〉 · · · 〈Φnn, Φnn〉],

U = [〈Φ11, Υ21〉 〈Φ11, Υ31〉 · · · 〈Φ11, Υn(n−1)〉; 〈Φ21, Υ21〉 〈Φ21, Υ31〉 · · · 〈Φ21, Υn(n−1)〉; . . . ; 〈Φnn, Υ21〉 〈Φnn, Υ31〉 · · · 〈Φnn, Υn(n−1)〉],

V = [〈Υ21, Υ21〉 〈Υ21, Υ31〉 · · · 〈Υ21, Υn(n−1)〉; 〈Υ31, Υ21〉 〈Υ31, Υ31〉 · · · 〈Υ31, Υn(n−1)〉; . . . ; 〈Υn(n−1), Υ21〉 〈Υn(n−1), Υ31〉 · · · 〈Υn(n−1), Υn(n−1)〉],

e1 = [〈Φ11, Ω〉; 〈Φ21, Ω〉; . . . ; 〈Φnn, Ω〉],  e2 = [〈Υ21, Ω〉; 〈Υ31, Ω〉; . . . ; 〈Υn(n−1), Ω〉].

Theorem 3 Given A ∈ Cm×n, B ∈ Cn×s, C ∈ Cm×n, D ∈ Cn×t, E ∈ Cm×s, and F ∈ Cm×t, let W and e be as in (17). Then

HL = {X | X = (KS, √−1 KA) ∘ [W+e + (I − W+W)y]},  (18)

where y ∈ Rn² is an arbitrary vector. Problem 1 has a unique solution XH ∈ HL, and this solution satisfies

XH = (KS, √−1 KA) ∘ (W+e).  (19)


Proof By Lemma 3 and Theorem 2, the least-squares problem

‖AXB − E‖² + ‖CXD − F‖² = min

with respect to the Hermitian matrix X can be translated into the equivalent consistent linear system over the real field

W [vecS(Re X); vecA(Im X)] = e.

It follows by Lemma 4 that

[vecS(Re X); vecA(Im X)] = W+e + (I − W+W)y.

Thus

X = (KS, √−1 KA) ∘ (W+e + (I − W+W)y),

and the minimum-norm choice y = 0 gives (19). The proof is completed. □
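The whole method can be sketched compactly in NumPy. Rather than assembling the Gram matrix W and the vector e explicitly, the sketch below builds the real coefficient matrix M of the unconstrained problem; since (MTM)+MT = M+, the vector W+e equals M+ times the right-hand side, so pinv(M) yields the same coordinate vector. The names herm_ls_solution and herm_basis are ours, and the dimensions and data are arbitrary placeholders:

```python
import numpy as np

def herm_basis(n):
    """Real basis of n x n Hermitian matrices: symmetric units E_ij + E_ji
    (diagonal included), then sqrt(-1) times antisymmetric units E_ij - E_ji,
    both in the lower-triangle column-major (vecS / vecA) order."""
    basis = []
    for j in range(n):
        for i in range(j, n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
    for j in range(n):
        for i in range(j + 1, n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1j
            E[j, i] = -1j
            basis.append(E)
    return basis

def realify(M):
    v = M.flatten('F')                        # column-stacking vec
    return np.concatenate([v.real, v.imag])   # real representation

def herm_ls_solution(A, B, C, D, E, F):
    """Least-squares Hermitian solution of (AXB, CXD) = (E, F), with
    minimum-norm coordinates in the vecS / vecA parametrization."""
    basis = herm_basis(A.shape[1])
    M = np.column_stack([np.concatenate([realify(A @ H @ B), realify(C @ H @ D)])
                         for H in basis])
    rhs = np.concatenate([realify(E), realify(F)])
    u = np.linalg.pinv(M) @ rhs               # plays the role of W+ e
    return sum(ui * H for ui, H in zip(u, basis))

rng = np.random.default_rng(7)
m, n, s, t = 5, 3, 4, 4
cplx = lambda *sh: rng.standard_normal(sh) + 1j * rng.standard_normal(sh)
A, B, C, D = cplx(m, n), cplx(n, s), cplx(m, n), cplx(n, t)
X = cplx(n, n); X = X + X.conj().T            # a Hermitian "ground truth"
XH = herm_ls_solution(A, B, C, D, A @ X @ B, C @ X @ D)

assert np.allclose(XH, XH.conj().T)           # result is Hermitian
assert np.allclose(XH, X, atol=1e-6)          # consistent, generically unique case
```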

We now turn to the consistency of the matrix equation (1). Denote

N = [N1; N2; N3; N4],  e = [vec(Re E); vec(Im E); vec(Re F); vec(Im F)],  (20)

where

N1 = [vec(Re(F11)), vec(Re(F21)), . . . , vec(Re(Fn1)), vec(Re(F22)), . . . , vec(Re(Fn2)), . . . , vec(Re(F(n−1)(n−1))), vec(Re(Fn(n−1))), vec(Re(Fnn)), vec(Re(G21)), . . . , vec(Re(Gn1)), vec(Re(G32)), . . . , vec(Re(Gn2)), . . . , vec(Re(Gn(n−1)))],

N2 = [vec(Im(F11)), vec(Im(F21)), . . . , vec(Im(Fn1)), vec(Im(F22)), . . . , vec(Im(Fn2)), . . . , vec(Im(F(n−1)(n−1))), vec(Im(Fn(n−1))), vec(Im(Fnn)), vec(Im(G21)), . . . , vec(Im(Gn1)), vec(Im(G32)), . . . , vec(Im(Gn2)), . . . , vec(Im(Gn(n−1)))],

N3 = [vec(Re(H11)), vec(Re(H21)), . . . , vec(Re(Hn1)), vec(Re(H22)), . . . , vec(Re(Hn2)), . . . , vec(Re(H(n−1)(n−1))), vec(Re(Hn(n−1))), vec(Re(Hnn)), vec(Re(K21)), . . . , vec(Re(Kn1)), vec(Re(K32)), . . . , vec(Re(Kn2)), . . . , vec(Re(Kn(n−1)))],

N4 = [vec(Im(H11)), vec(Im(H21)), . . . , vec(Im(Hn1)), vec(Im(H22)), . . . , vec(Im(Hn2)), . . . , vec(Im(H(n−1)(n−1))), vec(Im(Hn(n−1))), vec(Im(Hnn)), vec(Im(K21)), . . . , vec(Im(Kn1)), vec(Im(K32)), . . . , vec(Im(Kn2)), . . . , vec(Im(Kn(n−1)))].

By Lemma 3, we have

(AXB, CXD) = (E, F) ⟺ N [vecS(Re X); vecA(Im X)] = e.  (21)

Thus we can get the following conclusions by Lemma 4 and Theorem 3.

Corollary 1 The matrix equation (1) has a solution X ∈ HCn×n if and only if

NN+e = e.  (22)

In this case, denote by HE the solution set of (1). Then

HE = {X | X = (KS, √−1 KA) ∘ [N+e + (I − N+N)y]},  (23)

where y ∈ Rn² is an arbitrary vector. Furthermore, if (22) holds, then the matrix equation (1) has a unique solution X ∈ HE if and only if

rank(N) = n².  (24)

In this case,

HE = {X | X = (KS, √−1 KA) ∘ (N+e)}.  (25)

The least norm problem

‖XH‖ = min_{X∈HE} ‖X‖

has a unique solution XH ∈ HE, and XH can be expressed as in (25).

5 Numerical experiments

In this section, based on the results in the above sections, we first give a numerical algorithm to find the solution of Problem 1. Then numerical experiments are presented to demonstrate the efficiency of the algorithm. The following algorithm provides the main steps to find the solutions of Problem 1.

Algorithm 1
(1) Input A ∈ Cm×n, B ∈ Cn×s, C ∈ Cm×n, D ∈ Cn×t, E ∈ Cm×s, and F ∈ Cm×t.
(2) Compute Fij, Gij, Hij, and Kij (i, j = 1, 2, . . . , n, i ≥ j) by Lemma 3.


(3) Compute P, U, V, and e according to (17).
(4) If (22) and (24) hold, then calculate XH (XH ∈ HE) according to (25).
(5) If (22) holds, then calculate XH (XH ∈ HE) according to (23); otherwise go to the next step.
(6) Calculate XH (XH ∈ HL) according to (19).

For convenience, in the following examples, the random matrix, the Hilbert matrix, the Toeplitz matrix, the matrix whose elements are all one, and the magic matrix are all denoted by the corresponding Matlab functions.

Example 1 Let Ar = rand(·, ·), Br = rand(·, ·), Cr = rand(·, ·), Dr = rand(·, ·), Ai = rand(·, ·), Bi = rand(·, ·), Ci = rand(·, ·), Di = rand(·, ·). Let X1 = rand(·, ·), Xr = X1 + X1T; X2 = hilb(·), Xi = X2 − X2T; A = Ar + √−1 Ai, B = Br + √−1 Bi, C = Cr + √−1 Ci, D = Dr + √−1 Di, X = Xr + √−1 Xi, E = AXB, F = CXD. By using Matlab and Algorithm 1, we obtain rank(W) = ·, ‖W‖ = ·e+·, ‖WW+e − e‖ = ·e−·, rank(N) = ·, ‖N‖ = ·e+·, ‖NN+e − e‖ = ·e−·. From Algorithm 1(4), it can be concluded that the complex matrix equation [AXB, CXD] = [E, F] is consistent and has a unique solution XH ∈ HE; further, ‖XH − X‖ = ·e−· can easily be tested.

Example 2 Let m = ·, n = ·, s = ·. Take Ar = [toeplitz(1 : ·), 0·×·], Br = ones(·, ·), Cr = [hilb(·), 0·×·], Dr = ones(·, ·), Ai = 0·×·, Bi = ones(·, ·), Ci = 0·×·, Di = 0·×·. Let X1 = rand(·, ·), Xr = X1 + X1T; X2 = hilb(n), Xi = X2 − X2T; A = Ar + √−1 Ai, B = Br + √−1 Bi, C = Cr + √−1 Ci, D = Dr + √−1 Di, X = Xr + √−1 Xi, E = AXB, F = CXD. From Algorithm 1, we obtain rank(W) = ·, ‖W‖ = ·e+·, ‖WW+e − e‖ = ·e−·, rank(N) = ·, ‖N‖ = ·, ‖NN+e − e‖ = ·e−·. From Algorithm 1, it can be concluded that the complex matrix equation [AXB, CXD] = [E, F] is consistent and has a unique solution XH ∈ HE; further, ‖XH − X‖ = · can easily be tested.

Example 3 Let m = ·, n = ·, s = ·. Take Ar = 0·×·, Br = rand(·, ·), Cr = 0·×·, Dr = rand(·, ·), Ai = rand(·, ·), Bi = 0·×·, Ci = rand(·, ·), Di = 0·×·. Let X1 = rand(·, ·), Xr = X1 + X1T; X2 = rand(·, ·), Xi = X2 − X2T; A = Ar + √−1 Ai, B = Br + √−1 Bi, C = Cr + √−1 Ci, D = Dr + √−1 Di, X = Xr + √−1 Xi, E = AXB + ones(m, s), F = CXD + ones(m, s). By using Matlab and Algorithm 1, we obtain rank(W) = ·, ‖W‖ = ·e+·, ‖WW+e − e‖ = ·e−·, rank(N) = ·, ‖N‖ = ·, ‖NN+e − e‖ = ·. According to Algorithm 1(6), we can see that the complex matrix equation [AXB, CXD] = [E, F] is inconsistent, and the least-squares problem has a unique minimum-norm Hermitian solution XH ∈ HL; we get ‖XH − X‖ = ·.

6 Conclusions

In this paper, we derive the explicit expression of the least-squares Hermitian solution with the minimum norm for the complex matrix equation (AXB, CXD) = (E, F). A numerical algorithm and examples show the effectiveness of our method.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors read and approved the final manuscript.


Acknowledgements
This work is supported by the National Natural Science Foundation (No. 11271040, 11361027), the Key Project of Department of Education of Guangdong Province (No. 2014KZDXM055), the Training Plan for Outstanding Young Teachers in Higher Education of Guangdong (No. SYq2014002), the Guangdong Natural Science Fund of China (No. 2014A030313625, 2015A030313646, 2016A030313005), the Opening Project of the Guangdong Province Key Laboratory of Computational Science at Sun Yat-sen University (No. 2016007), the Science Foundation for Youth Teachers of Wuyi University (No. 2015zk09), and the Science and Technology Project of Jiangmen City, China.

Received: 20 May 2016 Accepted: 8 November 2016

References
1. Datta, BN: Numerical Methods for Linear Control Systems: Design and Analysis. Academic Press, San Diego (2004)
2. Dehghan, M, Hajarian, M: An efficient algorithm for solving general coupled matrix equations and its application. Math. Comput. Model. 51, 1118-1134 (2010)
3. Dai, H, Lancaster, P: Linear matrix equations from an inverse problem of vibration theory. Linear Algebra Appl. 246, 31-47 (1996)
4. Sanches, JM, Nascimento, JC, Marques, JS: Medical image noise reduction using the Sylvester-Lyapunov equation. IEEE Trans. Image Process. 17(9), 1522-1539 (2008)
5. Calvetti, D, Reichel, L: Application of ADI iterative methods to the restoration of noisy images. SIAM J. Matrix Anal. Appl. 17, 165-186 (1996)
6. Dehghan, M, Hajarian, M: The generalized Sylvester matrix equations over the generalized bisymmetric and skew-symmetric matrices. Int. J. Syst. Sci. 43, 1580-1590 (2012)
7. Liu, Y-H: On the best approximation problem of quaternion matrices. J. Math. Study 37(2), 129-134 (2004)
8. Krishnaswamy, D: The skew-symmetric ortho-symmetric solutions of the matrix equations A∗XA = D. Int. J. Algebra 5, 1489-1504 (2011)
9. Wang, Q-W, He, Z-H: Solvability conditions and general solution for mixed Sylvester equations. Automatica 49, 2713-2719 (2013)
10. Xiao, QF: The Hermitian R-symmetric solutions of the matrix equation AXA∗ = B. Int. J. Algebra 6, 903-911 (2012)
11. Yuan, Y-X: The optimal solution of linear matrix equation by matrix decompositions. Math. Numer. Sin. 24, 165-176 (2002)
12. Wang, QW, Wu, ZC: On the Hermitian solutions to a system of adjointable operator equations. Linear Algebra Appl. 437, 1854-1891 (2012)
13. Kyrchei, I: Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra Appl. 438, 136-152 (2013)
14. Kyrchei, I: Analogs of Cramer's rule for the minimum norm least squares solutions of some matrix equations. Appl. Math. Comput. 218, 6375-6384 (2012)
15. Dehghani-Madiseh, M, Dehghan, M: Generalized solution sets of the interval generalized Sylvester matrix equation and some approaches for inner and outer estimations. Comput. Math. Appl. 68(12), 1758-1774 (2014)
16. Liao, A-P, Lei, Y: Least squares solution with the minimum-norm for the matrix equation (AXB, GXH) = (C, D). Comput. Math. Appl. 50, 539-549 (2005)
17. Yuan, SF, Liao, AP, Lei, Y: Least squares Hermitian solution of the matrix equation (AXB, CXD) = (E, F) with the least norm over the skew field of quaternions. Math. Comput. Model. 48, 91-100 (2008)
18. Yuan, SF, Liao, AP: Least squares Hermitian solution of the complex matrix equation AXB + CXD = E with the least norm. J. Franklin Inst. 351, 4978-4997 (2014)
19. Ben-Israel, A, Greville, TNE: Generalized Inverses: Theory and Applications. Wiley, New York (1974)