Free probability and combinatorics
Preliminary version

Michael Anshelevich © 2012

July 17, 2018



Preface

These notes were used in a topics course Free probability and combinatorics taught at Texas A&M University in the fall of 2012. The course was based on Nica and Speicher's textbook [NS06]; however, there were sufficient differences to warrant separate notes. I tried to minimize the list of references, but many more are included in the body of the text. Thanks to the students in the course, and to March Boedihardjo, for numerous comments and corrections; further corrections or comments are welcome. Eventually I will probably replace hand-drawn figures by typeset ones.


Contents

1 Noncommutative probability spaces and distributions.
1.1 Non-commutative probability spaces.
1.2 Distributions. Weak law of large numbers.

2 Functional analysis background.
2.1 C∗- and von Neumann algebras. Spectral theorem.
2.2 Gelfand-Naimark-Segal construction.
2.3 Other topics.

3 Free independence.
3.1 Independence. Free independence.
3.2 Free products.

4 Set partition lattices.
4.1 All, non-crossing, interval partitions. Enumeration.
4.2 Incidence algebra. Multiplicative functions.

5 Free cumulant machinery.
5.1 Free cumulants.
5.2 Free independence and free cumulants.
5.3 Free convolution, distributions, and limit theorems.

6 Lattice paths and Fock space models.
6.1 Dyck paths and the full Fock space.
6.2 Łukasiewicz paths and Voiculescu's operator model.
6.3 Motzkin paths and Schürmann's operator model. Freely infinitely divisible distributions.
6.4 Motzkin paths and orthogonal polynomials.
6.5 q-deformed free probability.

7 Free Lévy processes.

8 Free multiplicative convolution and the R-transform.

9 Belinschi-Nica evolution.

10 ∗-distribution of a non-self-adjoint element.

11 Combinatorics of random matrices.
11.1 Gaussian random matrices.
11.2 Map enumeration.
11.3 Partitions and permutations.
11.4 Asymptotic free independence for Gaussian random matrices.
11.5 Asymptotic free independence for unitary random matrices.

12 Operator-valued free probability.

13 A very brief survey of analytic results.
13.1 Complex analytic methods.
13.2 Free entropy.
13.3 Operator algebras.

Chapter 1

Noncommutative probability spaces and distributions.

See Lectures 1, 4, 8 of [NS06].

1.1 Non-commutative probability spaces.

Definition 1.1. An (algebraic) (non-commutative) probability space is a pair (A, ϕ), where A is a unital ∗-algebra and ϕ is a state on A. That is:

• A is an algebra over C, with operations za + wb, ab for z, w ∈ C, a, b ∈ A. Unital: 1 ∈ A.

• ∗ is an anti-linear involution: (za)∗ = z̄a∗, (ab)∗ = b∗a∗.

• ϕ : A → C is a linear functional. Self-adjoint: ϕ[a∗] is the complex conjugate of ϕ[a]. Unital: ϕ[1] = 1. Positive: ϕ[a∗a] ≥ 0.

Definition 1.2. a ∈ A is symmetric if a = a∗. A symmetric a ∈ (A, ϕ) is a (n.c.) random variable.

Examples of commutative probability spaces.

Example 1.3. Let (X, Σ, P) be a measure space, P a probability measure. Take

A = L∞(X, P), f∗ = f̄, E(f) = ∫_X f dP.

Then E (usually called the expectation) is a state, in particular positive and unital. Thus (A, E) is a (commutative) probability space. Note: f = f∗ means f is real-valued.


A related construction is

A = L∞−(X, P) = ⋂_{p≥1} Lp(X, P),

the space of complex-valued random variables all of whose moments are finite.

Example 1.4. Let X be a compact topological space (e.g. X = [0, 1]), and µ a Borel probability measure on X. Then for

A = C(X), ϕ[f] = ∫_X f dµ,

(A, ϕ) is again a commutative probability space.

Example 1.5. Let A = C[x] (polynomials in x with complex coefficients), x∗ = x. Then any state ϕ on C[x] gives a commutative probability space. Does such a state always come from a measure?

Examples of non-commutative probability spaces.

Example 1.6. Let x1, x2, . . . , xd be non-commuting indeterminates. Let

A = C〈x1, x2, . . . , xd〉 = C〈x〉

be polynomials in d non-commuting variables, with the involution

x∗i = xi, (xu(1)xu(2) . . . xu(n))∗ = xu(n) . . . xu(2)xu(1).

Then any state ϕ on C〈x〉 gives a non-commutative probability space. These do not come from measures. One example: for z = (z1, z2, . . . , zd) ∈ Rd,

δz(f(x1, x2, . . . , xd)) = f(z1, z2, . . . , zd)

is a state (check!). Other examples?

Example 1.7. A = Mn(C), the n × n matrices over C, with the involution (A∗)ij = Āji and

ϕ[A] = (1/n) ∑_{i=1}^{n} Aii = (1/n) Tr(A) = tr(A),

the normalized trace of A. Note that indeed, tr(A∗) is the complex conjugate of tr(A), and tr(A∗A) ≥ 0.
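The defining properties of a state are easy to check numerically for the normalized trace. A minimal sketch (the random matrices and the helper `tr` are ad hoc illustrations, not from the text):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def tr(M):
    """Normalized trace: tr(M) = (1/n) Tr(M)."""
    return np.trace(M) / M.shape[0]

assert np.isclose(tr(np.eye(n)), 1.0)              # unital
assert np.isclose(tr(A.conj().T), np.conj(tr(A)))  # self-adjoint
assert tr(A.conj().T @ A).real >= 0                # positive: tr(A*A) = (1/n) sum |A_ij|^2
assert np.isclose(tr(A @ B), tr(B @ A))            # tracial, as in Definition 1.8
```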

Definition 1.8. Let ϕ be a state on A.

a. ϕ is tracial, or a trace, if for any a, b ∈ A,

ϕ [ab] = ϕ [ba] .

Note that A is, in general, not commutative.


b. ϕ is faithful if ϕ [a∗a] = 0 only if a = 0.

Example 1.9. For a probability space (X, Σ, P), let

A = Mn(C) ⊗ L∞−(X, P) ≅ Mn(L∞−(X, P)).

These are random matrices = matrix-valued random variables = matrices with random entries. Take

ϕ[A] = (tr ⊗ E)(A) = ∫_X tr(A) dP.

Example 1.10. Let H be a Hilbert space, and A a ∗-subalgebra of B(H), the algebra of bounded linear operators on H. a∗ is the adjoint operator to a. If ξ ∈ H is a unit vector, then

ϕ[a] = 〈aξ, ξ〉

is a state. Why unital:

ϕ[1] = 〈ξ, ξ〉 = ‖ξ‖² = 1.

Why self-adjoint:

ϕ[a∗] = 〈a∗ξ, ξ〉 = 〈ξ, aξ〉, the complex conjugate of 〈aξ, ξ〉.

Why positive:

ϕ[a∗a] = 〈a∗aξ, ξ〉 = 〈aξ, aξ〉 = ‖aξ‖² ≥ 0.

Typically not tracial or faithful.

Example 1.11 (Group algebra). Let Γ be a discrete group (finite, Zn, Fn, etc.).

C[Γ] = functions Γ → C of finite support = finite linear combinations of elements of Γ with C coefficients:

(f : x ↦ f(x)) ↔ ∑_{x∈Γ} f(x) x.

This is a vector space. It is an algebra with multiplication

(∑_{x∈Γ} f(x) x)(∑_{y∈Γ} g(y) y) = ∑_{z∈Γ, z=xy} (f(x) g(y)) z,

in other words

(fg)(z) = ∑_{z=xy} f(x) g(y) = ∑_{x∈Γ} f(x) g(x⁻¹z),

the convolution multiplication. The involution is

f∗(x) = f̄(x⁻¹),

where f̄ is the pointwise complex conjugate.


Check that indeed, (fg)∗ = g∗f ∗.

So C[Γ] is a unital ∗-algebra, with the unit δe, where for x ∈ Γ

δx(y) = 1 if y = x, and 0 if y ≠ x,

and e is the unit of Γ.

Moreover, define

τ[f] = f(e).

Exercise 1.12. Prove that τ is a faithful, tracial state, called the von Neumann trace.
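The convolution algebra and the von Neumann trace can be made concrete for a small non-abelian group. A sketch for Γ = S3, with group elements stored as permutation tuples (all helper names are ad hoc):

```python
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))      # the six elements of S3
e = tuple(range(3))                             # unit of the group

def mul(x, y):                                  # composition (x*y)(i) = x[y[i]]
    return tuple(x[i] for i in y)

def inv(x):
    z = [0, 0, 0]
    for i, xi in enumerate(x):
        z[xi] = i
    return tuple(z)

def conv(f, g):                                 # (fg)(z) = sum_x f(x) g(x^{-1} z)
    return {z: sum(f[x] * g[mul(inv(x), z)] for x in G) for z in G}

def star(f):                                    # f*(x) = conjugate of f(x^{-1})
    return {x: np.conj(f[inv(x)]) for x in G}

def tau(f):                                     # von Neumann trace
    return f[e]

rng = np.random.default_rng(1)
f = {x: complex(*rng.standard_normal(2)) for x in G}
g = {x: complex(*rng.standard_normal(2)) for x in G}

# (fg)* = g* f*; tau is tracial although S3 is non-abelian; tau[f*f] = sum |f(x)|^2 > 0
assert all(np.isclose(star(conv(f, g))[x], conv(star(g), star(f))[x]) for x in G)
assert np.isclose(tau(conv(f, g)), tau(conv(g, f)))
assert np.isclose(tau(conv(star(f), f)).real, sum(abs(v) ** 2 for v in f.values()))
```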

Later: other versions of group algebras.

1.2 Distributions. Weak law of large numbers.

Definition 1.13. Let (A, ϕ) be an n.c. probability space.

a. Let a ∈ (A, ϕ) be symmetric. The distribution of a is the linear functional

ϕa : C[x]→ C, ϕa [p(x)] = ϕ [p(a)] .

Note that ϕa is a state on C[x].

The sequence of numbers

m_n[ϕ_a] = ϕ_a[xⁿ] = ϕ[aⁿ], n = 0, 1, 2, . . .

are the moments of a. In particular m_0[ϕ_a] = 1 and m_1[ϕ_a] = ϕ[a] is the mean of a.

b. More generally, let a1, a2, . . . , ad ∈ (A, ϕ) be symmetric. Their joint distribution is the state

ϕa1,...,ad : C〈x1, . . . , xd〉 → C, ϕa1,...,ad [p(x1, . . . , xd)] = ϕ [p(a1, . . . , ad)] .

The numbers

{ϕ[a_{u(1)} a_{u(2)} · · · a_{u(n)}] : n ≥ 0, 1 ≤ u(i) ≤ d}

are the joint moments of a1, . . . , ad.

Denote by D(d) the space of all joint distributions of d-tuples of symmetric random variables, which is the space of all states on C〈x1, . . . , xd〉.

6

Page 9: Free probability and combinatorics Preliminary versionmanshel/m689/Free... · These notes were used in a topics course Free probability and combinatorics taught at Texas A&M Uni-versity

c. We say that

(a_1^{(N)}, . . . , a_d^{(N)}) → (a1, . . . , ad)

in moments (or, for d > 1, in distribution) if for each p,

ϕ_{(a_1^{(N)}, . . . , a_d^{(N)})}[p] → ϕ_{(a1, . . . , ad)}[p]

as N → ∞.

Remark 1.14. Each µ ∈ D(d) can be realized as a distribution of a d-tuple of random variables, namely (x1, x2, . . . , xd) ⊂ (C〈x〉, µ).

Definition 1.15. a1, a2, . . . , ad ∈ (A, ϕ) are singleton independent if

ϕ[a_{u(1)} a_{u(2)} · · · a_{u(n)}] = 0

whenever all a_i are centered (that is, ϕ[a_i] = 0) and some index in ~u appears only once.

Proposition 1.16 (Weak law of large numbers). Suppose {a_n : n ∈ N} ⊂ (A, ϕ) are singleton independent, identically distributed (that is, all ϕ_{a_i} are the same), and uniformly bounded, in the sense that for a fixed C and all ~u,

|ϕ[a_{u(1)} a_{u(2)} · · · a_{u(n)}]| ≤ Cⁿ.

Denote

s_n = (1/n)(a1 + a2 + . . . + an).

Then s_n → ϕ[a1] in moments (s_n converges to the mean, a scalar).

Proof. Note first that

(1/n)((a1 − ϕ[a1]) + . . . + (an − ϕ[an])) = s_n − ϕ[a1].

So without loss of generality, we may assume ϕ[a1] = 0, and we need to show that s_n → 0 in moments.

ϕ[s_n^k] = (1/n^k) ϕ[(a1 + . . . + an)^k] = (1/n^k) ∑_{u(1)=1}^{n} · · · ∑_{u(k)=1}^{n} ϕ[a_{u(1)} a_{u(2)} · · · a_{u(k)}].

How many non-zero terms?

Denote by B(k) the number of partitions of k points into disjoint non-empty subsets. We don't need the exact value, but see Chapter 4. For each partition

π = (B1, B2, . . . , Br), B1 ∪ B2 ∪ . . . ∪ Br = {1, . . . , k}, Bi ∩ Bj = ∅ for i ≠ j,

how many k-tuples (u(1), u(2), . . . , u(k)) are there such that

u(i) = u(j) ⇔ i ∼π j (i.e. i, j in the same block of π)?


There are n(n − 1)(n − 2) · · · (n − r + 1) ≤ n^r such tuples. By the singleton condition, a term can be non-zero only if π has no singleton blocks, so that r ≤ k/2 and n^r ≤ n^{k/2}. Each term in the sum is bounded by

|ϕ[a_{u(1)} a_{u(2)} · · · a_{u(k)}]| ≤ C^k.

Thus

|ϕ[s_n^k]| ≤ (1/n^k) B(k) n^{k/2} C^k → 0

as n → ∞ for any k > 0.
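The combinatorial step, that a partition of {1, . . . , k} with no singleton blocks has at most k/2 blocks, can be checked by brute force. A small sketch (the `partitions` helper is ad hoc, not from the text):

```python
def partitions(s):
    """Yield all set partitions of the list s."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in partitions(rest):
        for i in range(len(smaller)):           # put `first` into an existing block
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller               # or into a block of its own

k = 6
all_parts = list(partitions(list(range(k))))
assert len(all_parts) == 203                    # the Bell number B(6)

no_singletons = [p for p in all_parts if all(len(block) >= 2 for block in p)]
# no-singleton partitions have r <= k/2 blocks, giving the n^r <= n^{k/2} bound:
assert all(len(p) <= k // 2 for p in no_singletons)
```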

Remark 1.17. For centered independent random variables, we showed

s_n = (a1 + . . . + an)/n → 0.

We expect a non-trivial limit for

(a1 + . . . + an)/√n.

Which limit arises depends on the notion of independence used. See Lecture 8 of [NS06] for more.


Chapter 2

Functional analysis background.

See Lectures 3, 7 of [NS06].

2.1 C∗- and von Neumann algebras. Spectral theorem.

Definition 2.1. An (abstract) C∗-algebra is a Banach ∗-algebra with the extra axiom

‖a∗a‖ = ‖a‖².

A C∗-probability space is a pair (A, ϕ), where A is a C∗-algebra and ϕ is a state on it, continuous in the norm topology. (It follows from Corollary 2.13(c) that continuity is actually automatic.)

Example 2.2. If X is a compact Hausdorff space, the algebra C(X) of continuous functions on it is a C∗-algebra (with the uniform norm). States on C(X) come from integration with respect to Borel probability measures.

Theorem 2.3 (Gelfand-Naimark theorem). Any unital, commutative C∗-algebra is of the form in thepreceding example.

Example 2.4. If A is a ∗-subalgebra of B(H) closed in the norm topology, then A is a C∗-algebra. If ξ ∈ H is a unit vector, then ϕ = 〈·ξ, ξ〉 is a state on A.

Theorem 2.5. Any abstract C∗-algebra is a concrete C∗-algebra, that is, it has a representation as in thepreceding example.

Definition 2.6. A ∗-subalgebra A ⊂ B(H) is a von Neumann algebra (or a W∗-algebra) if A is closed in the weak operator topology. A W∗-probability space is a pair (A, ϕ), where A is a W∗-algebra and ϕ is a normal state (continuous in the ultraweak operator topology).


Example 2.7. Let X be a compact Hausdorff space, and µ be a Borel probability measure on X. Then (C(X), µ) is a C∗-probability space. But C(X) ⊂ B(L2(X, µ)) is not WOT-closed. However, L∞(X, µ) is so closed, and therefore is a von Neumann algebra. In fact, the WOT on L∞(X, µ) is the weak-∗ topology on L∞ = (L1)∗. Normal states on L∞(X, µ) come from integration with respect to f dµ, f ∈ L1(X, µ).

Definition 2.8. For a ∈ B(H), its spectrum is the set

σ(a) = {z ∈ C : (z − a) is not invertible}.

The spectrum is always a compact, non-empty subset of C. The spectral radius of a is

r(a) = sup{|z| : z ∈ σ(a)}.

We always have r(a) ≤ ‖a‖.

An operator a ∈ B(H) is positive (written a ≥ 0) if a = a∗ and σ(a) ⊂ [0,∞).

Proposition 2.9. Let A be a C∗-algebra and a ∈ A.

a. ‖a‖² = r(a∗a). Thus the norm (topology) is determined by the spectrum (algebra).

b. a ≥ 0 if and only if for some b ∈ A, a = b∗b. (In fact, may take b ≥ 0 and a = b2).

Definition 2.10. In an (algebraic) n.c. probability space A, say a ≥ 0 if ∃ b1, . . . , bk such that

a = ∑_{i=1}^{k} b_i∗ b_i.

Remark 2.11. Note that in the algebra C[x1, x2, . . . , xd] of multivariate polynomials in commuting variables, we can have p(x) ≥ 0 for all x without being able to write p as a sum of squares.

Theorem 2.12 (Spectral theorem and continuous functional calculus, bounded symmetric case). Let a = a∗ ∈ B(H). Then for the C∗-algebra generated by a,

C∗(a) ≅ C(σ(a))

(an isometric C∗-isomorphism). Moreover, this defines a map f ↦ f(a) ∈ B(H), so that f(a) is a well-defined operator for any continuous f.

Corollary 2.13. Let (A, ϕ) be a C∗-probability space.

a. For any a = a∗ ∈ A, the operator ‖a‖ − a is positive.

b. For any a = a∗, b ∈ A, ϕ[b∗ab] ≤ ‖a‖ ϕ[b∗b] (why is this quantity real?).


c. For any a ∈ A, |ϕ [a]| ≤ ‖a‖.

d. Assume ϕ is faithful. Then for any a ∈ A, ‖a‖ = lim_{n→∞} (ϕ[(a∗a)ⁿ])^{1/2n}. Thus the norm can be computed by using moments.

Proof. For (a), using the identification in the spectral theorem, in C(σ(a)), a corresponds to the identityfunction (f(x) = x), ‖a‖ = ‖f‖u, and ‖a‖ − a corresponds to ‖f‖u − f , which is positive.

For (b), using part (a) we can write ‖a‖ − a = c∗c, from which

ϕ [b∗(‖a‖ − a)b] = ϕ [b∗c∗cb] ≥ 0.

If a is symmetric, (c) follows from (b) (with b = 1). In general, the Cauchy-Schwarz inequality below implies

|ϕ[a]| ≤ √(ϕ[a∗a] ϕ[1∗1]) ≤ √‖a∗a‖ = ‖a‖.

Finally, applying the spectral theorem to the symmetric element a∗a, the last statement follows from the fact that for a finite measure µ, lim_{n→∞} ‖f‖_{2n,µ} = ‖f‖_{∞,µ} and, if µ has full support, ‖f‖_{∞,µ} = ‖f‖_u.
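Part (d) can be illustrated numerically in (Mn(C), tr), where tr is faithful: the moment sequence tr((a∗a)ⁿ)^{1/2n} increases to the operator norm. A sketch (the matrix is an arbitrary example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 5))
a = X + X.T                                        # symmetric, so a*a = a^2
tr = lambda M: np.trace(M) / M.shape[0]            # the faithful state from Example 1.7

op_norm = np.linalg.norm(a, 2)                     # largest |eigenvalue| of a
approx = [tr(np.linalg.matrix_power(a @ a, n)).real ** (1 / (2 * n))
          for n in (1, 5, 25, 125)]

assert approx == sorted(approx)                    # the 2n-th root moments increase
assert abs(approx[-1] - op_norm) < 0.05 * op_norm  # ... toward the norm ||a||
```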

2.2 Gelfand-Naimark-Segal construction.

Remark 2.14 (GNS construction I.). Let (A, ϕ) be an n.c. probability space. Let V = A as a vector space. For a, b ∈ V, define

〈a, b〉_ϕ = ϕ[b∗a].

This is a possibly degenerate inner product, i.e. we may have ‖a‖_ϕ = 0 for a ≠ 0 in V. In fact, the inner product is non-degenerate if and only if ϕ is faithful.

For a ∈ A, define λ(a) ∈ L(V ) (linear, not necessarily bounded operators) by

λ(a)b = ab.

Note that〈λ(a∗)b, c〉ϕ = ϕ [c∗a∗b] = ϕ [(ac)∗b] = 〈b, λ(a)c〉ϕ ,

so with respect to this inner product, the adjoint of the operator λ(a) is λ(a∗). We conclude that {λ(a) : a ∈ A} is a ∗-representation of A on V. Moreover,

ϕ[a] = 〈λ(a)1, 1〉_ϕ.

Even if 〈·, ·〉_ϕ is degenerate, we still have the Cauchy-Schwarz inequality

|〈a, b〉_ϕ|² ≤ 〈a, a〉_ϕ 〈b, b〉_ϕ,

in other words |ϕ[b∗a]|² ≤ ϕ[a∗a] ϕ[b∗b]. Let

N = {a ∈ A : ϕ[a∗a] = ‖a‖²_ϕ = 0}.

If a, b ∈ N, so are their linear combinations. So N is a subspace of V; on V/N the inner product is non-degenerate, and induces a norm ‖·‖_ϕ. Let

H = the completion of V/N in ‖·‖_ϕ,

which is a Hilbert space. Denote H (or perhaps V) by L2(A, ϕ). We have a natural map a ↦ â = a + N of A → Â ⊂ L2(A, ϕ) with dense range.

Moreover, for any a ∈ A and b ∈ N,

‖ab‖²_ϕ = ϕ[b∗a∗ab] ≤ ‖a∗ab‖_ϕ ‖b‖_ϕ = 0.

So N is a left ideal in A. For a ∈ A,

λ(a) b̂ = ab + N = (ab)̂.

Thus we have a (not necessarily faithful) ∗-representation of A on the Hilbert space H, by a priori unbounded operators with a common dense domain Â = V/N.
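For a finite-dimensional example the GNS construction can be carried out explicitly. A sketch with A = M2(C) and the state ϕ(a) = Tr(ρa) for a chosen density matrix ρ (my choice of example; here ϕ is faithful, so N = 0 and V itself is the Hilbert space):

```python
import numpy as np

rho = np.diag([0.7, 0.3])                     # a density matrix defining the state
phi = lambda a: np.trace(rho @ a)

ip = lambda a, b: phi(b.conj().T @ a)         # <a, b>_phi = phi(b* a) on V = A
lam = lambda a: (lambda b: a @ b)             # left multiplication lambda(a)

rng = np.random.default_rng(3)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
c = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# the adjoint of lambda(a) is lambda(a*): <lambda(a*)b, c> = <b, lambda(a)c>
assert np.isclose(ip(lam(a.conj().T)(b), c), ip(b, lam(a)(c)))

# the state is recovered as a vector state at 1: phi(a) = <lambda(a)1, 1>_phi
one = np.eye(2)
assert np.isclose(phi(a), ip(lam(a)(one), one))
```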

Corollary 2.15. Each µ ∈ D(d) can be realized as the joint distribution of (possibly unbounded) opera-tors on a Hilbert space with respect to a vector state.

Proof. µ is the joint distribution of the operators λ(x1), . . . , λ(xd) in the GNS representation of

(C〈x1, . . . , xd〉, µ)

with respect to the GNS state.

We would like to say

(A, ϕ) ≅ (A ⊂ B(H), ϕ = 〈·ξ, ξ〉).

This implies A is a C∗-algebra! So we have to assume this.

Remark 2.16 (GNS construction II.). Let (A, ϕ) be a C∗-probability space. Use the notation from the preceding remark.

To show: each ‖λ(a)‖ < ∞, in fact ‖λ(a)‖ ≤ ‖a‖. Indeed,

‖λ(a) b̂‖²_ϕ = ‖ab + N‖²_ϕ ≤ ‖ab‖²_ϕ = ϕ[b∗a∗ab] ≤ ‖a∗a‖ ϕ[b∗b] = ‖a‖² ‖b̂‖²_ϕ.

Finally,

ϕ[a] = 〈λ(a) 1̂, 1̂〉_ϕ.


Corollary 2.17. If (A, ϕ) is a C∗-probability space and ϕ is faithful, we may assume A ⊂ B(H), ϕ = 〈·ξ, ξ〉. In addition, ξ is cyclic, that is, the norm closure of Aξ is all of H. Moreover, in this case the WOT closure of A, with the state 〈·ξ, ξ〉, is a W∗-probability space.

How is the preceding statement related to Theorem 2.5? Does it give a complete proof for it?

The following result is inspired by Proposition 1.2 in [PV10].

Proposition 2.18. Let µ be a state on C〈x1, . . . , xd〉 such that for a fixed C and all ~u,

|µ[x_{u(1)} x_{u(2)} · · · x_{u(n)}]| ≤ Cⁿ.

Then µ can be realized as a joint distribution of a d-tuple of bounded operators on a Hilbert space.

Proof. The construction of the Hilbert space H is as in the GNS construction I. We need to show that the representation is by bounded operators. It suffices to show that each λ(x_i) is bounded. Define the “non-commutative ball of radius C” to be the space of formal power series

B_C〈x〉 = {∑_~u α_~u x_~u : α_~u ∈ C, ∑_~u |α_~u| C^{|~u|} < ∞}.

It is easily seen to be a ∗-algebra, to which µ extends via

µ[∑_~u α_~u x_~u] = ∑_~u α_~u µ[x_~u].

Define g_i to be the power series expansion of (1 − x_i²/(4C²))^{1/2}. This series has radius of convergence 2C, and so g_i lies in B_C〈x〉. Since

g_i² = 1 − x_i²/(4C²),

for f ∈ A,

0 ≤ µ[f∗ g_i∗ g_i f] = µ[f∗ (1 − x_i²/(4C²)) f] = µ[f∗f] − (1/(4C²)) µ[(x_i f)∗ (x_i f)].

It follows that

‖x_i f‖_µ ≤ 2C ‖f‖_µ,

so each operator λ(x_i) is bounded.

Remark 2.19. Any µ ∈ D(d) as in the previous proposition produces, via the GNS construction, a C∗-algebra and a von Neumann algebra. Thus, at least in principle, one can study von Neumann algebras by studying such joint distributions. See also Theorem 4.11 of [NS06].


2.3 Other topics.

Remark 2.20 (Group algebras).

L2(C[Γ], τ) = L2(Γ) = {f : Γ → C : ∑_{x∈Γ} |f(x)|² < ∞}.

Γ acts on L2(Γ) on the left by(λ(y)g)(x) = (δyg)(x) = g(y−1x).

C[Γ] acts on L2(Γ) on the left by

(λ(f)g)(x) = (fg)(x) = ∑_{y∈Γ} f(y) g(y⁻¹x).

The reduced group C∗-algebra is

C∗_r(Γ) = the norm closure of C[Γ] in B(L2(Γ))

(there is also a full C∗-algebra C∗(Γ)). The group von Neumann algebra is

L(Γ) = W∗(Γ) = the weak closure of C[Γ] in B(L2(Γ)).

The vector state τ = 〈·δe, δe〉 is the extension of the von Neumann trace, which is still faithful and tracial on C∗_r(Γ) and L(Γ).

Remark 2.21 (The isomorphism problem). It is easy to show that

F2 ≇ F3,

and not too hard that

C[F2] ≇ C[F3].

Using K-theory, Pimsner and Voiculescu showed that

C∗_r(F2) ≇ C∗_r(F3).

The question of whether

L(F2) ≅ L(F3)

is open.

Remark 2.22 (Distributions and moments). Let a be a symmetric, bounded operator. C∗(a) ≅ C(σ(a)), with σ(a) compact. So (continuous) states on C∗(a) correspond to Borel probability measures on σ(a). In particular, if a is a symmetric, bounded operator in (A, ϕ), there exists a Borel probability measure µ_a on R such that for any f ∈ C(σ(a)),

ϕ[f(a)] = ∫ f dµ_a.

µ_a is the distribution of a (with respect to ϕ). Note that µ_a is supported on σ(a), so in particular (for a bounded operator) compactly supported. Note also that for a polynomial p,

ϕ_a[p(x)] = ∫ p(x) dµ_a(x).

By the Weierstrass theorem and continuity of ϕ, ϕ_a determines µ_a. On the other hand, if ϕ is faithful on C∗(a), for a = a∗,

‖a‖ = sup{|z| : z ∈ supp(µ_a)}.

Exercise 2.23. Let A ∈ (Mn(C), tr) be a Hermitian matrix, with real eigenvalues λ1, . . . , λn. Compute the algebraic distribution ϕ_A and the analytic distribution µ_A of A with respect to tr.

Remark 2.24 (Generating functions).

m_n = ϕ[aⁿ] = ∫ xⁿ dµ_a(x)

are the moments of a. Always take m_0 = 1. For a formal indeterminate z,

M(z) = ∑_{n=0}^{∞} m_n zⁿ

is the (formal) moment generating function of a (or of ϕ_a, or of µ_a). If a is bounded, then in fact

M(z) = ∫_R 1/(1 − xz) dµ_a(x)

for z ∈ C, |z| ≤ ‖a‖⁻¹. The (formal) Cauchy transform is

G_µ(z) = ∑_{n=0}^{∞} m_n / z^{n+1} = (1/z) M(1/z).

But in fact, the Cauchy transform is defined by

G_µ(z) = ∫_R 1/(z − x) dµ(x)

for any finite measure µ on R, as an analytic map G_µ : C⁺ → C⁻.

The measure can be recovered from its Cauchy transform via the Stieltjes inversion formula

dµ(x) = −(1/π) lim_{y→0⁺} Im G_µ(x + iy) dx.
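The inversion formula can be tested numerically. A sketch for the two-point measure µ = ½(δ₋₁ + δ₁) (my choice of example, not from the text), whose Cauchy transform is G(z) = ½(1/(z+1) + 1/(z−1)):

```python
import numpy as np

G = lambda z: 0.5 * (1 / (z + 1) + 1 / (z - 1))   # Cauchy transform of (d_{-1} + d_1)/2

y = 1e-4                                          # approach the real axis from above
xs = np.linspace(-3, 3, 600001)
density = -np.imag(G(xs + 1j * y)) / np.pi        # approximates mu weakly as y -> 0+
dx = xs[1] - xs[0]

total = density.sum() * dx                        # should recover mu(R) = 1
near_one = density[xs > 0.5].sum() * dx           # mass near the atom at x = 1
assert abs(total - 1) < 0.01
assert abs(near_one - 0.5) < 0.01
```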


Example 2.25. Let Γ = F1 = Z, with a single generator x. In (C[Z], τ), we have the generating element u = δ_x. Note that

u∗ = δ_{x⁻¹} = u⁻¹,

so u is a unitary. Moreover,

τ[uⁿ] = δ_{n,0}, n ∈ Z,

which by definition says that u is a Haar unitary.

What is the distribution of the symmetric operator u + u∗? Moments:

τ[(u + u⁻¹)ⁿ] = ?

This is the number of walks with n steps, starting and ending at zero, with steps of length 1 to the right or to the left. So

τ[(u + u⁻¹)^{2n+1}] = 0, τ[(u + u⁻¹)^{2n}] = (2n)!/(n! n!).
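The walk count can be verified by enumerating all ±1 step sequences. A short brute-force sketch:

```python
from itertools import product
from math import comb

def walks_to_zero(n):
    """Number of n-step walks with steps +1/-1 that return to 0,
    i.e. words in u, u^{-1} of length n multiplying to the identity."""
    return sum(1 for steps in product([1, -1], repeat=n) if sum(steps) == 0)

assert [walks_to_zero(n) for n in (1, 3, 5, 7)] == [0, 0, 0, 0]   # odd moments vanish
assert [walks_to_zero(2 * n) for n in (1, 2, 3, 4)] == \
       [comb(2 * n, n) for n in (1, 2, 3, 4)]                     # 2, 6, 20, 70
```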

The Cauchy transform of the distribution is

G(z) = ∑_{n=0}^{∞} (2n)!/(n! n!) z^{−2n−1}
= ∑_{n=0}^{∞} (2ⁿ · 1·3· · ·(2n−1)/n!) z^{−2n−1}
= ∑_{n=0}^{∞} (4ⁿ/n!) (1/2)(3/2) · · · ((2n−1)/2) z^{−2n−1}
= ∑_{n=0}^{∞} ((−4)ⁿ/n!) (−1/2)(−3/2) · · · (−(2n−1)/2) z^{−2n−1}
= z⁻¹ (1 − 4z⁻²)^{−1/2} = 1/√(z² − 4).

Therefore the distribution is

dµ(x) = −(1/π) lim_{y→0⁺} Im (1/√((x + iy)² − 4)) dx = (1/π) · 1/√(4 − x²) dx on (−2, 2).

This is the arcsine distribution. We will see this example again.
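One can also check numerically that the moments of the arcsine law are the central binomial coefficients. A sketch, substituting x = 2 sin t to remove the endpoint singularity (so dµ becomes (1/π) dt on (−π/2, π/2)):

```python
import numpy as np
from math import comb

def arcsine_moment(k):
    """k-th moment of dmu = dx / (pi sqrt(4 - x^2)) on (-2, 2)."""
    t = np.linspace(-np.pi / 2, np.pi / 2, 200001)
    f = (2 * np.sin(t)) ** k / np.pi
    # trapezoid rule for (1/pi) * integral of (2 sin t)^k dt
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (t[1] - t[0])

for n in range(5):
    assert abs(arcsine_moment(2 * n + 1)) < 1e-6             # odd moments vanish
    assert abs(arcsine_moment(2 * n) - comb(2 * n, n)) < 1e-4
# even moments: 1, 2, 6, 20, 70 = (2n choose n)
```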

Exercise 2.26. Verify Corollary 2.13(d) for the operator u + u∗ in the preceding example. You might want to use Remark 2.22.

Unbounded operators.

Remark 2.27 (Commutative setting). Let (X, Σ, P) be a probability space, and

A = L∞(X, P).

Let Ã be all the measurable functions; they form an algebra. Moreover,

f ∈ Ã ⇔ ∀g ∈ Cb(C, C), g ∘ f ∈ A.

16

Page 19: Free probability and combinatorics Preliminary versionmanshel/m689/Free... · These notes were used in a topics course Free probability and combinatorics taught at Texas A&M Uni-versity

Remark 2.28. On a Hilbert space H, an unbounded operator a is defined only on a (dense) subspace D(a). Since we may have D(a) ∩ D(b) = {0}, we cannot in general define a + b.

For a von Neumann algebra A, an unbounded, self-adjoint operator a is affiliated to A if f(a) ∈ A for all bounded continuous f. A general operator T is affiliated to A, written T ∈ Ã, if in its polar decomposition T = ua, the partial isometry u ∈ A and the positive operator a is affiliated to A.

If (A, ϕ) is a W∗-probability space with ϕ a faithful, tracial state (so that A is a finite von Neumann algebra), then the affiliated operators Ã form an algebra.

Moment problem.

If a is an unbounded, self-adjoint operator, there exists a probability measure µ_a on R, not necessarily compactly supported, such that

ϕ[f(a)] = ∫_R f dµ_a

for f ∈ Cb(R). The moments ϕ[aⁿ] = ∫_R xⁿ dµ_a(x) need not be finite, so ϕ_a may be undefined. Also, one can have non-uniqueness in the moment problem: µ ≠ ν with

∫_R p(x) dµ = ∫_R p(x) dν ∀p ∈ C[x].

Definition 2.29. a_k → a in distribution if µ_{a_k} → µ_a weakly, that is,

∫ f dµ_{a_k} → ∫ f dµ_a ∀f ∈ Cb(R).

Proposition 2.30. If all µ_{a_k}, µ_a are compactly supported (in particular, if a_k, a are bounded), then

a_k → a in moments ⇔ a_k → a in distribution.

More generally, if a_k → a in moments and µ_a is determined by its moments, then a_k → a in distribution.

Tensor products.

Remark 2.31 (Vector spaces). Let V1, . . . , Vn be vector spaces. Their algebraic tensor product is

V1 ⊗ V2 ⊗ . . . ⊗ Vn = {∑_{i=1}^{k} a_1^{(i)} ⊗ . . . ⊗ a_n^{(i)} : k ≥ 0, a_j^{(i)} ∈ V_j} / linearity relations

a1 ⊗ . . . ⊗ (za + wb) ⊗ . . . ⊗ an = z a1 ⊗ . . . ⊗ a ⊗ . . . ⊗ an + w a1 ⊗ . . . ⊗ b ⊗ . . . ⊗ an.

Thus for example a ⊗ b + a ⊗ c = a ⊗ (b + c), but a ⊗ b + c ⊗ d cannot in general be simplified. Note also that

a ⊗ b ≠ b ⊗ a.

17

Page 20: Free probability and combinatorics Preliminary versionmanshel/m689/Free... · These notes were used in a topics course Free probability and combinatorics taught at Texas A&M Uni-versity

Remark 2.32 (Algebras). Let A1, . . . , An be algebras. Their algebraic tensor product A1 ⊗ . . . ⊗ An has the vector space structure from the preceding remark and the algebra structure

(a1 ⊗ . . . ⊗ an)(b1 ⊗ . . . ⊗ bn) = a1b1 ⊗ a2b2 ⊗ . . . ⊗ anbn.

If the algebras are unital, we have natural embeddings

A_i ↪ A1 ⊗ . . . ⊗ An

via

a_i ↦ 1 ⊗ . . . ⊗ a_i ⊗ . . . ⊗ 1.

Note that the images of A_i, A_j commute:

(a ⊗ 1)(1 ⊗ b) = a ⊗ b = (1 ⊗ b)(a ⊗ 1).

This is a universal object for this property.

One also has tensor products of Hilbert spaces, C∗-algebras, and von Neumann algebras. These require taking closures, which in some cases are not unique.

Remark 2.33 (Probability spaces). Let (A1, ϕ1), . . . , (An, ϕn) be n.c. probability spaces. The tensor product state

ϕ = ϕ1 ⊗ . . . ⊗ ϕn

on A1 ⊗ . . . ⊗ An is defined via the linear extension of

ϕ[a1 ⊗ . . . ⊗ an] = ϕ1[a1] ϕ2[a2] . . . ϕn[an].
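Realized with Kronecker products of matrix algebras, the tensor product state visibly factorizes. A sketch with A1 = A2 = (M2(C), tr) (an illustration, not from the text):

```python
import numpy as np

tr = lambda M: np.trace(M) / M.shape[0]     # normalized trace, a state on M_2(C)

rng = np.random.default_rng(4)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# phi[a (x) b] = phi1[a] phi2[b], with (x) realized as np.kron
assert np.isclose(tr(np.kron(a, b)), tr(a) * tr(b))

# the embedded copies a (x) 1 and 1 (x) b commute, as in Remark 2.32
A1, A2 = np.kron(a, np.eye(2)), np.kron(np.eye(2), b)
assert np.allclose(A1 @ A2, np.kron(a, b))
assert np.allclose(A1 @ A2, A2 @ A1)
```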


Chapter 3

Free independence.

See Lectures 5, 6 of [NS06].

3.1 Independence. Free independence.

Definition 3.1. Subalgebras A1, A2, . . . , An ⊂ (A, ϕ) are independent (with respect to ϕ) if they commute and for any a_i ∈ A_i, 1 ≤ i ≤ n,

ϕ[a1 a2 . . . an] = ϕ[a1] ϕ[a2] . . . ϕ[an].

Elements a1, a2, . . . , an are independent if the ∗-subalgebras they generate are independent.

Note that we do not assume that each A_i is itself commutative.

Example 3.2. If

(A, ϕ) = (A1, ϕ1) ⊗ . . . ⊗ (An, ϕn),

then A1, . . . , An considered as subalgebras of A are independent. For this reason, independence is sometimes called tensor independence.

Remark 3.3. If a1, . . . , an are independent (and in particular commute), their individual distributions ϕ_{a1}, . . . , ϕ_{an} completely determine their joint distribution ϕ_{a1,...,an}. Indeed, commutativity implies that all the joint moments can be brought into the form

ϕ[a_1^{u(1)} a_2^{u(2)} . . . a_n^{u(n)}] = ϕ[a_1^{u(1)}] ϕ[a_2^{u(2)}] . . . ϕ[a_n^{u(n)}].

This is not true for singleton independence: if ϕ[a1] = ϕ[a2] = 0, then

ϕ[a1 a2 a1] = 0,

but

ϕ[a1 a2 a1 a2]

is not determined.

Want an independence-type rule for joint distributions of non-commuting variables.

Definition 3.4 (Voiculescu). Let (A, ϕ) be an n.c. probability space.

a. Subalgebras A1, A2, . . . , Ak ⊂ (A, ϕ) are freely independent (or free) with respect to ϕ if whenever

u(1) ≠ u(2), u(2) ≠ u(3), u(3) ≠ u(4), . . . ,

a_i ∈ A_{u(i)}, and ϕ[a_i] = 0, then

ϕ[a1 a2 . . . an] = 0.

b. Elements a1, a2, . . . , ak are freely independent if the ∗-subalgebras they generate are freely independent.

Proposition 3.5. Let Fn be the free group with generators x1, x2, . . . , xn, and consider the n.c. probability space (C[Fn], τ). Then with respect to τ, the Haar unitaries λ(x1), λ(x2), . . . , λ(xn) are freely independent.

Proof. We want to show that

τ[∏_{j=1}^{n} P_j(λ(x_{u(j)}), λ(x_{u(j)}⁻¹))] = 0

whenever

τ[P_j(λ(x), λ(x⁻¹))] = 0 (3.1)

and

u(1) ≠ u(2) ≠ u(3) ≠ . . . ≠ u(n). (3.2)

Note first that

P_j(λ(x), λ(x⁻¹)) = ∑_{k∈Z} α_k^{(j)} λ(x^k),

and so equation (3.1) implies that each α_0^{(j)} = 0. Moreover,

∏_{j=1}^{n} P_j(λ(x_{u(j)}), λ(x_{u(j)}⁻¹)) = ∑_{~k} α_{k(1)}^{(1)} · · · α_{k(n)}^{(n)} λ(x_{u(1)}^{k(1)} · · · x_{u(n)}^{k(n)}),

where all k(i) ≠ 0. But then by (3.2), the word

x_{u(1)}^{k(1)} · · · x_{u(n)}^{k(n)}

is reduced, has no cancellations, and in particular never equals e. It follows that τ applied to any of the terms in the sum is zero.


Remark 3.6. Pairwise free does not imply free. For example, in (C[F2], τ), let

a = λ(x1), b = λ(x2), c = λ(x1x2) = ab.

Then the elements in each pair {a, b}, {a, c}, {b, c} are free, but the triple {a, b, c} is not free.

Remark 3.7. If A1, . . . , An ⊂ (A, ϕ) ⊂ B(H) are ∗-subalgebras which are free, then C∗(A1), . . . , C∗(An) are free, and W∗(A1), . . . , W∗(An) are free.

Claim 3.8. Suppose we know ϕ_{a_1}, …, ϕ_{a_n}, and that a_1, …, a_n are free. Then ϕ_{a_1,…,a_n} is determined.

Example 3.9. Let a, b ∈ (A, ϕ) be free. How to compute ϕ[abab]?

Write å = a − ϕ[a]. Note

ϕ[å²] = ϕ[a² − 2aϕ[a] + ϕ[a]²] = ϕ[a²] − ϕ[a]².

Then

ϕ[abab] = ϕ[(å + ϕ[a])(b̊ + ϕ[b])(å + ϕ[a])(b̊ + ϕ[b])].

Using freeness and linearity, this reduces to

ϕ[abab] = ϕ[a]ϕ[b]ϕ[a]ϕ[b] + ϕ[a]² ϕ[b̊²] + ϕ[b]² ϕ[å²]
= ϕ[a]ϕ[b]ϕ[a]ϕ[b] + ϕ[a]² (ϕ[b²] − ϕ[b]²) + ϕ[b]² (ϕ[a²] − ϕ[a]²)
= ϕ[a²]ϕ[b]² + ϕ[a]² ϕ[b²] − ϕ[a]² ϕ[b]².

Moral: not a good way to compute.

Proof of Claim. Inside A, the ∗-algebra generated by a_1, …, a_n is

Alg*(a_1, a_2, …, a_n) = C ⊕ Span(⋃_{k=1}^∞ ⋃_{u(1)≠u(2)≠…≠u(k)} {b_1 b_2 … b_k : b_i ∈ Alg*(a_{u(i)}), ϕ[b_i] = 0}).

ϕ is zero on the second component, and is determined by ϕ[1] = 1 on the first.

Exercise 3.10. Exercise 5.25 in [NS06]. In this exercise we prove that free independence behaves well under successive decompositions and thus is associative. Let {A_i : i ∈ I} be unital ∗-subalgebras of (A, ϕ), and {B_i^{(j)} : j ∈ J(i)} be unital ∗-subalgebras of A_i. Then we have the following.

a. If {A_i : i ∈ I} are freely independent in (A, ϕ) and, for each i ∈ I, {B_i^{(j)} : j ∈ J(i)} are freely independent in (A_i, ϕ|_{A_i}), then all {B_i^{(j)} : i ∈ I, j ∈ J(i)} are freely independent in (A, ϕ).


b. If all {B_i^{(j)} : i ∈ I, j ∈ J(i)} are freely independent in (A, ϕ) and if, for each i ∈ I, A_i is the ∗-algebra generated by {B_i^{(j)} : j ∈ J(i)}, then {A_i : i ∈ I} are freely independent in (A, ϕ).

Exercise 3.11. Let (A, τ) be a tracial n.c. probability space, a_1, a_2, …, a_n free, and u a unitary. Then u*a_i u, 1 ≤ i ≤ n, are free. Is the conclusion still true if τ is not assumed to be a trace?

Exercise 3.12. Exercise 5.24 in [NS06]. Let (A, ϕ) be a n.c. probability space. Consider a unital ∗-subalgebra B ⊂ A and a Haar unitary u ∈ A freely independent from B. Show that then also B and u*Bu are free.

3.2 Free products.

Groups.

Let Γ_1, Γ_2, …, Γ_n be groups with units e_i ∈ Γ_i.

Any word x_1 x_2 … x_k with all x_i ∈ ⋃_{i=1}^n Γ_i can be reduced by identifying

x_1 x_2 … x_{i−1} e_j x_i … x_k = x_1 x_2 … x_{i−1} x_i … x_k

and

x_1 x_2 … x_{i−1} y z x_i … x_k = x_1 x_2 … x_{i−1} (yz) x_i … x_k

if y, z ∈ Γ_i for the same i. In this way any word can be reduced to a unique (!) reduced word: the unit e or

x_1 x_2 … x_k,  x_i ∈ Γ_{u(i)} \ {e_{u(i)}},  u(1) ≠ u(2) ≠ … ≠ u(k).

Define the free product of groups

Γ = Γ_1 ∗ Γ_2 ∗ … ∗ Γ_n = ∗_{i=1}^n Γ_i = {reduced words in elements of ⋃_{i=1}^n Γ_i}

with unit e, inverse

(x_1 x_2 … x_k)^{−1} = x_k^{−1} … x_2^{−1} x_1^{−1},

and product

(x_1 x_2 … x_k)(y_1 y_2 … y_l) = reduced version of (x_1 … x_k y_1 … y_l).
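The reduction procedure above is easy to mechanize. A minimal sketch (helper names are mine; elements of Γ_i are encoded as pairs (i, g), each group given by a multiplication table):

```python
def reduce_word(word, mult, unit):
    """Reduce a word in the free product of groups.

    word: list of (i, g) pairs, g an element of the i-th group
    mult: mult[i] is a dict (g, h) -> g*h for the i-th group
    unit: unit[i] is the unit element of the i-th group
    """
    out = []  # stack of letters of the reduced word so far
    for i, g in word:
        if g == unit[i]:              # drop unit letters
            continue
        if out and out[-1][0] == i:   # merge neighbors from the same group
            h = mult[i][(out[-1][1], g)]
            out.pop()
            if h != unit[i]:
                out.append((i, h))
        else:
            out.append((i, g))
    return out

# Example: Z_2 * Z_3, elements 0..m-1 under addition mod m.
mods = {0: 2, 1: 3}
mult = {i: {(a, b): (a + b) % m for a in range(m) for b in range(m)}
        for i, m in mods.items()}
unit = {0: 0, 1: 0}

w = [(0, 1), (1, 1), (1, 2), (0, 1), (0, 1), (1, 2)]
print(reduce_word(w, mult, unit))  # → [(0, 1), (1, 2)]
```

The stack-based pass handles cascading cancellations: merging (1,1)(1,2) produces the unit of Z_3, which then lets the two neighboring Z_2 letters cancel as well.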


Reduced free product of n.c. probability spaces.

Let (A1, ϕ1), . . . , (An, ϕn) be n.c. probability spaces. Denote

Ai = a ∈ Ai : ϕi [a] = 0 ,

W∅ = C = C1, and for u(1) 6= u(2) . . . 6= u(k), define the vector space

W~u = Au(1) ⊗Au(2) ⊗ . . .⊗Au(k).

Let

A =∞⊕k=0

⊕|~u|=k

u(1)6=u(2)... 6=u(k)

W~u = C1⊕∞⊕k=1

⊕u(1)6=u(2)... 6=u(k)

W~u

By abuse of notation we will write a1 ⊗ a2 ⊗ . . .⊗ ak as a1a2 . . . ak. Define

(a1a2 . . . ak)∗ = a∗k . . . a

∗2a∗1.

On A, we define multiplication as follows. First,

z(a1a2 . . . ak) = (za1)a2 . . . ak.

Next, let ai ∈ Au(i), bi ∈ Av(i), where ~u and ~v are alternating. If u(1) 6= v(1),

(ak . . . a2a1)(b1b2 . . . bl) = ak . . . a2a1b1b2 . . . bl.

In general,

(ak . . . a2a1)(b1b2 . . . bl) = ak . . . a2(a1b1)b2 . . . bl + ϕu(1) [a1b1] ak . . . a3(a2b2)b3 . . . bl + . . .

+ ϕu(1) [a1b1]ϕu(2) [a2b2] . . . ϕu(j−1) [aj−1bj−1] ak . . . ajbj . . . bl,

where j ≤ min(k, l) is the first index (if it exists) such that u(j) 6= v(j).

On A, define the free product stateϕ = ∗ni=1ϕi

by: ϕ [1] = 1 and on an alternating word in centered elements, ϕ [a1a2 . . . ak] = 0. We will prove later ϕis positive; other properties are clear.

Remark 3.13. This is a reduced free product, since we identify the units of all A_i with the unit 1 ∈ A. Thus this is a product with amalgamation over C. There is also a full free product construction.

Remark 3.14. If we identify

A_i ↔ C ⊕ Å_i ⊂ A,  a ↔ ϕ[a] ⊕ (a − ϕ[a]),

then ϕ_i = ϕ|_{A_i} and A_1, …, A_n are freely independent with respect to ϕ.


Remark 3.15. ∗_{i=1}^n C[Γ_i] = C[∗_{i=1}^n Γ_i].

Proposition 3.16. If each ϕi is tracial, so is their free product ϕ.

Corollary 3.17. A free product of states on commutative algebras is tracial. In particular, a free product of one-dimensional distributions is tracial.

Proof of the Proposition. Let a_i ∈ Å_{u(i)}, b_i ∈ Å_{v(i)}, u(1) ≠ u(2) ≠ … ≠ u(k), v(1) ≠ v(2) ≠ … ≠ v(l). Suppose

u(1) = v(1), u(2) = v(2), …, u(j) = v(j), u(j + 1) ≠ v(j + 1).

Then

ϕ[a_k … a_1 b_1 … b_l] = ϕ[a_k … a_2 (a_1 b_1)˚ b_2 … b_l] + ϕ_{u(1)}[a_1 b_1] ϕ[a_k … a_3 (a_2 b_2)˚ b_3 … b_l] + …
+ ϕ_{u(1)}[a_1 b_1] ϕ_{u(2)}[a_2 b_2] … ϕ_{u(j−1)}[a_{j−1} b_{j−1}] ϕ[a_k … a_j b_j … b_l].

This is zero unless j = k = l, in which case, since each ϕ_i is tracial,

ϕ[a_k … a_1 b_1 … b_k] = ϕ_{u(1)}[a_1 b_1] ϕ_{u(2)}[a_2 b_2] … ϕ_{u(j)}[a_j b_j]
= ϕ_{u(1)}[b_1 a_1] ϕ_{u(2)}[b_2 a_2] … ϕ_{u(j)}[b_j a_j] = ϕ[b_1 … b_k a_k … a_1].

This implies that ϕ has the trace property for general a_i, b_i (why?).

Remark 3.18 (Reduced free product of Hilbert spaces with distinguished vectors). Given Hilbert spaces with distinguished vectors (H_i, ξ_i), i = 1, …, n, define their reduced free product

(H, ξ) = ∗_{i=1}^n (H_i, ξ_i)

as follows. Denote H̊_i = H_i ⊖ Cξ_i. Then

H_alg = Cξ ⊕ ⊕_{k=1}^∞ ⊕_{|u⃗|=k, u(1)≠u(2)≠…≠u(k)} H_u⃗,  H_u⃗ = H̊_{u(1)} ⊗ H̊_{u(2)} ⊗ … ⊗ H̊_{u(k)}.

H is the completion of H_alg with respect to the inner product for which H_u⃗ ⊥ H_v⃗ for u⃗ ≠ v⃗, and on each H_u⃗ we use the usual tensor inner product

⟨f_1 ⊗ f_2 ⊗ … ⊗ f_k, g_1 ⊗ g_2 ⊗ … ⊗ g_k⟩ = ∏_{i=1}^k ⟨f_i, g_i⟩_{u(i)}

for f_i, g_i ∈ H̊_{u(i)}.

If (H_i, ξ_i) = L²(A_i, ϕ_i), then, at least in the faithful case,

(H, ξ) = L²(∗_{i=1}^n (A_i, ϕ_i)).

More generally, if each A_i is represented on H_i with ϕ_i = ⟨·ξ_i, ξ_i⟩, then we can represent ∗_{i=1}^n (A_i, ϕ_i) on H so that ϕ = ⟨·ξ, ξ⟩. This implies in particular that ϕ is positive.

By using this representation and taking appropriate closures, we can define reduced free products of C∗- and W∗-probability spaces.


Positivity.

How to prove positivity of a joint distribution? Represent it as a joint distribution of symmetric operators on a Hilbert space.

How to prove positivity of the free product state? One way is to use the preceding remark. We will use a different, more algebraic proof.

Remark 3.19. A matrix A ∈ M_n(C) is positive if and only if one of the following equivalent conditions holds:

a. A = A* and σ(A) ⊂ [0, ∞) (non-negative eigenvalues).

b. A = B*B for some B (may take B = B*).

c. For all z = (z_1, z_2, …, z_n)^t ∈ C^n,

⟨Az, z⟩ = ∑_{i,j=1}^n z̄_i A_{ij} z_j ≥ 0.

Proposition 3.20. A linear functional ϕ on A is positive if and only if for all n and all a_1, a_2, …, a_n ∈ A, the (numerical) matrix [ϕ(a_i* a_j)]_{i,j=1}^n is positive.

Proof. (⇐) is clear by taking n = 1. For the converse, let a_1, …, a_n ∈ A and z_1, …, z_n ∈ C. Then since ϕ is positive,

0 ≤ ϕ[(∑_{i=1}^n z_i a_i)* (∑_{j=1}^n z_j a_j)] = ∑_{i,j=1}^n z̄_i ϕ[a_i* a_j] z_j.

By the preceding remark, the matrix [ϕ(a_i* a_j)]_{i,j=1}^n is positive.

Definition 3.21. Let A be a ∗-algebra. Then M_n(A) = M_n(C) ⊗ A is also a ∗-algebra, and so has a notion of positivity. If T : A → B, define

T_n : M_n(A) → M_n(B),  T_n([a_{ij}]_{i,j=1}^n) = [T(a_{ij})]_{i,j=1}^n.

We say that T is completely positive if each T_n is positive.

Remark 3.22. Usually defined only for C∗-algebras. Even in that case, positive does not imply completely positive.

Compare with Section 3.5 of [Spe98].

Proposition 3.23. If (A, ϕ) is an n.c. probability space, so that ϕ is positive, then each ϕ_n : M_n(A) → M_n(C) is positive, so that ϕ is completely positive.


Proof. Let A ∈ M_n(A) be positive. By definition, that means A = ∑_{i=1}^N B_i* B_i. It suffices to show that each ϕ_n[B_i* B_i] is positive. So without loss of generality, assume that A = B*B. That is,

A_{ij} = ∑_{k=1}^n b_{ki}* b_{kj},  b_{uv} ∈ A.

Then

ϕ_n(A) = [ϕ(A_{ij})]_{i,j=1}^n = ∑_{k=1}^n [ϕ(b_{ki}* b_{kj})]_{i,j=1}^n,

and for each k, this matrix is positive.

Definition 3.24. For A, B, C ∈ M_n(C), C is the Schur product of A and B if

C_{ij} = A_{ij} B_{ij}.

Proposition 3.25. If A,B are positive, so is their Schur product.

Proof. Let A = D*D, so A_{ij} = ∑_{k=1}^n D̄_{ki} D_{kj}. Then, with w_i^{(k)} = D_{ki} z_i,

∑_{i,j=1}^n z̄_i C_{ij} z_j = ∑_{i,j,k=1}^n z̄_i D̄_{ki} D_{kj} B_{ij} z_j = ∑_{k=1}^n (∑_{i,j=1}^n w̄_i^{(k)} B_{ij} w_j^{(k)}) ≥ 0,

since B is positive. Therefore C is positive.
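Proposition 3.25 is easy to test numerically. A minimal sketch (pure Python, real symmetric case for simplicity; all names are mine): build two positive semidefinite matrices as DᵀD and check the quadratic form of their Schur product on random vectors.

```python
import random

random.seed(0)
n = 4

def gram(d):
    """Return D^T D, which is positive semidefinite for any real D."""
    return [[sum(d[k][i] * d[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
E = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
A, B = gram(D), gram(E)

# Schur (entrywise) product
C = [[A[i][j] * B[i][j] for j in range(n)] for i in range(n)]

def qf(M, z):
    """Quadratic form z^T M z; should be >= 0 for every z if M is PSD."""
    return sum(z[i] * M[i][j] * z[j] for i in range(n) for j in range(n))

vals = [qf(C, [random.uniform(-1, 1) for _ in range(n)]) for _ in range(200)]
print(min(vals) >= -1e-12)  # → True
```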

Theorem 3.26. Let (A_i, ϕ_i), i = 1, …, n, be n.c. probability spaces and (A, ϕ) = ∗_{i=1}^n (A_i, ϕ_i). Then ϕ is positive. If each ϕ_i is faithful, so is ϕ.

Proof. Recall the representation

A = ⊕_{k=0}^∞ ⊕_{|u⃗|=k, u(1)≠u(2)≠…≠u(k)} W_u⃗,

and ϕ[ξ*η] = 0 unless ξ, η ∈ W_u⃗ for the same u⃗. Thus for ξ = ⊕_u⃗ ξ_u⃗,

ϕ[ξ*ξ] = ∑_u⃗ ϕ[ξ_u⃗* ξ_u⃗].

So it suffices to show that ϕ[ξ*ξ] ≥ 0 for ξ ∈ W_u⃗. In its turn,

ξ = ∑_{j=1}^r a_1^{(j)} a_2^{(j)} … a_k^{(j)},  a_i^{(j)} ∈ Å_{u(i)}.


Then

ϕ[ξ*ξ] = ϕ[∑_{i=1}^r (a_k^{(i)})* … (a_1^{(i)})* ∑_{j=1}^r a_1^{(j)} … a_k^{(j)}]
= ∑_{i,j=1}^r ϕ_{u(1)}[(a_1^{(i)})* a_1^{(j)}] … ϕ_{u(k)}[(a_k^{(i)})* a_k^{(j)}].

Each matrix

[ϕ_{u(s)}((a_s^{(i)})* a_s^{(j)})]_{i,j=1}^r

is positive, therefore so is their Schur product.

The proof of faithfulness is similar, see Proposition 6.14 in [NS06].

Remark 3.27 (Free convolutions). Let µ, ν be two distributions (probability measures on R). Free products allow us to construct two symmetric, freely independent variables a, b in some tracial C∗-probability space (A, ϕ) such that µ = µ_a, ν = µ_b (why?). We know that the distribution of a + b is a probability measure on R, and depends only on µ, ν and not on a, b. Thus we define the additive free convolution µ ⊞ ν by µ ⊞ ν = µ_{a+b}. How to compute it?

Similarly, define the multiplicative free convolution µ ⊠ ν = ϕ_{ab}. Note that ab is not symmetric, so in general µ ⊠ ν is not positive and does not correspond to a measure on R. Also note that since ϕ is tracial, ⊠ is commutative.

Suppose that µ, ν are supported in [0, ∞). Then we may choose a, b to be positive. In that case, a^{1/2} b a^{1/2} is also positive, and since ϕ is a trace,

ϕ_{ab} = ϕ_{a^{1/2} b a^{1/2}}.

In this case µ ⊠ ν may be identified with a probability measure on [0, ∞).

Now suppose instead that µ, ν are supported on the unit circle T = {z ∈ C : |z| = 1}. Then we may choose a, b to be unitary. Then ab is also unitary, so in this case µ ⊠ ν may be identified with a probability measure on T.

Proposition 3.28 (Compare with Exercise 3.11). Let (A, τ) be a n.c. probability space, a_1, a_2, …, a_n free, and u a unitary free from them. Then u*a_i u, 1 ≤ i ≤ n, are free.

Proof. The key point is that even if τ is not tracial, if a and u are freely independent then, writing x̊ = x − τ[x],

τ[u*au] = τ[u* τ[a] u] + τ[u* å u]
= τ[a] τ[u*u] + (τ[(u*)˚ å ů] + τ[u*] τ[å ů] + τ[(u*)˚ å] τ[u] + τ[u*] τ[å] τ[u])
= τ[a],

since each of the last four terms vanishes by freeness. In particular, τ[a] = 0 if and only if τ[u*au] = 0. The rest of the argument proceeds as in the solution to the exercise.

By a similar argument, we can also get (how?) the following useful weakening of the hypothesis in the definition of free independence:


Proposition 3.29. Let (A, ϕ) be an n.c. probability space. Subalgebras A_1, A_2, …, A_k ⊂ (A, ϕ) are freely independent with respect to ϕ if and only if whenever

u(1) ≠ u(2) ≠ u(3) ≠ … ≠ u(n),

a_i ∈ A_{u(i)} for 1 ≤ i ≤ n, and ϕ[a_i] = 0 for 2 ≤ i ≤ n − 1, then

ϕ[a_1 a_2 … a_n] = 0.


Chapter 4

Set partition lattices.

See Lectures 9, 10 of [NS06]. For general combinatorial background, see [Sta97]. The general theory of incidence algebras, Möbius inversion etc. goes back to the series On the foundations of combinatorial theory I–X by Gian-Carlo Rota and various co-authors.

4.1 All, non-crossing, interval partitions. Enumeration.

Let S be a finite ordered set. π = {B_1, B_2, …, B_k} is a (set) partition of S, π ∈ P(S), if

B_1 ∪ B_2 ∪ … ∪ B_k = S,  B_i ∩ B_j = ∅ for i ≠ j,  B_i ≠ ∅.

The B_i are the blocks or classes of π. We also write

i ∼_π j

if i and j are in the same block of π. Denote by |π| the number of blocks of π.

Usually we take S = [n] = {1, 2, …, n}. Write P([n]) = P(n).

Partitions are partially ordered by reverse refinement: if

π = {B_1, …, B_k},  σ = {C_1, …, C_r},

then

π ≤ σ ⇔ ∀i ∃j : B_i ⊂ C_j.

For example, if π = {(1, 3, 5)(2)(4, 6)} and σ = {(1, 3, 5)(2, 4, 6)}, then π ≤ σ. The largest partition is

1 = 1_n = {(1, 2, …, n)}

and the smallest one is

0 = 0_n = {(1)(2) … (n)}.


Example 4.1. P(3) has 5 elements: one largest, one smallest, and three which are not comparable to each other.

Partitions form a lattice: for any π, σ ∈ P(S), there exists a largest partition τ such that τ ≤ π, τ ≤ σ, called the meet and denoted π ∧ σ; and a smallest partition τ such that π ≤ τ, σ ≤ τ, called the join and denoted π ∨ σ. Clearly

i ∼_{π∧σ} j ⇔ i ∼_π j and i ∼_σ j;

π ∨ σ can be thought of as the equivalence relation on S generated by π and σ. For example, if π = {(1, 3, 5)(2, 4, 6)} and σ = {(1, 2, 3)(4, 5, 6)}, then π ∧ σ = {(1, 3)(2)(4, 6)(5)}. If π = {(1, 2)(3, 4)(5, 6)(7)} and σ = {(1)(2, 3)(4, 5)(6, 7)}, then π ∨ σ = 1_7.

Definition 4.2. A partition π ∈ P(S) of an ordered set S is a non-crossing partition if

i ∼_π j, k ∼_π l, i < k < j < l ⇒ i ∼_π k ∼_π j ∼_π l.

See Figure 4. Denote the non-crossing partitions by NC(S), NC(n).

Note that 0, 1 ∈ NC(S). With the same operations ∧ and ∨ as above, NC(S) is a lattice. If π, σ ∈ NC(S), they have the same π ∧ σ in P(S) and in NC(S). However, for π = {(1, 3)(2)(4)} and σ = {(1)(2, 4)(3)}, π ∨ σ = {(1, 3)(2, 4)} in P(4) but π ∨ σ = 1_4 in NC(4).
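One can enumerate NC(n) by brute force: generate all set partitions of [n] and keep the non-crossing ones. A small sketch (helper names are mine):

```python
from itertools import combinations

def set_partitions(n):
    """Yield all partitions of {1,...,n} as lists of blocks."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        # element n joins an existing block, or starts a new one
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n]] + p[i + 1:]
        yield p + [[n]]

def is_noncrossing(p):
    """No i < k < j < l with i ~ j in one block and k ~ l in another."""
    for b1, b2 in combinations(p, 2):
        for i, j in combinations(sorted(b1), 2):
            for k, l in combinations(sorted(b2), 2):
                if i < k < j < l or k < i < l < j:
                    return False
    return True

nc_counts = [sum(is_noncrossing(p) for p in set_partitions(n))
             for n in range(1, 7)]
print(nc_counts)  # → [1, 2, 5, 14, 42, 132], the Catalan numbers
```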

Definition 4.3. π ∈ NC(S) ⊂ P(S) is an interval partition if its classes are intervals. See Figure 4. They also form a lattice. Denote them by Int(S), Int(n).

Exercise 4.4. Denote

S(n) = {subsets of [n]}.

Then S(n) is a lattice (by reverse inclusion), and

Int(n) ≃ S(n − 1).

For this reason, interval partitions are also called Boolean partitions.

We will study NC(n), but also P(n) and Int(n), in a lot more detail.

Enumeration I.

Definition 4.5.

B(n) = |P(n)| = the Bell numbers.

S(n, k) = |{π ∈ P(n) : |π| = k}| = the Stirling numbers of the second kind.

No formula.


Lemma 4.6. The Bell numbers satisfy the recursion

B(n + 1) = ∑_{i=0}^n C(n, i) B(n − i),  n ≥ 1,

where by convention B(0) = 1. Consequently, their exponential generating function is

F(z) = ∑_{n=0}^∞ (1/n!) B(n) z^n = e^{e^z − 1}.

Remark 4.7. e^{e^z − 1} grows faster than exponentially, so B(n) grows faster than any a^n.

Proof of the Lemma. If π ∈ P(n + 1) and the block containing 1 contains i + 1 elements, there are C(n, i) ways to choose the remaining i elements of the block, and the partition induced by π on the complement of this block is arbitrary. This gives the recursion. Differentiating the generating function term-by-term,

F′(z) = ∑_{n=0}^∞ (1/n!) B(n + 1) z^n = ∑_{n=0}^∞ (1/n!) (∑_{i=0}^n (n!/(i!(n − i)!)) B(n − i)) z^n
= ∑_{i=0}^∞ (1/i!) z^i ∑_{j=0}^∞ (1/j!) B(j) z^j = e^z F(z).

Solving this differential equation with the initial condition F(0) = 1, we get the result.
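As a quick sanity check on the recursion (a minimal sketch; the function name is mine):

```python
from math import comb

def bell(n_max):
    """Bell numbers B(0..n_max) via B(n+1) = sum_i C(n,i) B(n-i)."""
    B = [1]  # B(0) = 1
    for n in range(n_max):
        B.append(sum(comb(n, i) * B[n - i] for i in range(n + 1)))
    return B

print(bell(7))  # → [1, 1, 2, 5, 15, 52, 203, 877]
```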

Exercise 4.8. Derive a recursion for the Stirling numbers, and use it to show that their generating function is

F(z, w) = 1 + ∑_{n=1}^∞ ∑_{k=1}^n (1/n!) S(n, k) z^n w^k = e^{w(e^z − 1)}.
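For reference, one standard recursion is S(n, k) = S(n − 1, k − 1) + k·S(n − 1, k) (an assumption of this sketch, not derived in the text); summing each row recovers the Bell numbers:

```python
def stirling2(n_max):
    """Table S[n][k] of Stirling numbers of the second kind."""
    S = [[1]]  # S(0,0) = 1
    for n in range(1, n_max + 1):
        row = [0] * (n + 1)
        for k in range(1, n + 1):
            # boundary: S(n-1, n) does not exist, so skip the k*S term at k = n
            row[k] = S[n - 1][k - 1] + (k * S[n - 1][k] if k < n else 0)
        S.append(row)
    return S

S = stirling2(5)
print(S[4])                      # → [0, 1, 7, 6, 1]
print([sum(row) for row in S])   # row sums are the Bell numbers → [1, 1, 2, 5, 15, 52]
```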

Definition 4.9. c_n = |NC(n)| = the Catalan numbers.

Lemma 4.10. The Catalan numbers satisfy the recursions

c_n = ∑_{i=1}^n ∑_{u(1),…,u(i)≥0, u(1)+…+u(i)=n−i} c_{u(1)} c_{u(2)} … c_{u(i)},  n ≥ 1,

where by convention c_0 = 1, and

c_n = ∑_{k=0}^{n−1} c_k c_{n−k−1}.


Their ordinary generating function is

F(z) = ∑_{n=0}^∞ c_n z^n = (1 − √(1 − 4z)) / (2z).

Consequently,

c_n = (1/(n + 1)) C(2n, n) = (1/(2n + 1)) C(2n + 1, n).

Proof. If the first block of π ∈ NC(n) is

(1, 2 + u(1), 3 + u(1) + u(2), …, i + u(1) + … + u(i − 1)),

we can complete it to a non-crossing partition by choosing arbitrary non-crossing partitions on each of the i intervals in the complement of this block. This proves the first recursion. Next,

F(z) = 1 + ∑_{n=1}^∞ c_n z^n = 1 + ∑_{n=1}^∞ ∑_{i=1}^n ∑_{u(1),…,u(i)≥0, u(1)+…+u(i)=n−i} c_{u(1)} … c_{u(i)} z^{u(1)} … z^{u(i)} z^i
= 1 + ∑_{i=1}^∞ z^i (∑_{u=0}^∞ c_u z^u)^i = 1 + ∑_{i=1}^∞ (zF(z))^i = 1 / (1 − zF(z)).

Thus

zF(z)² − F(z) + 1 = 0,

and

F(z) = (1 − √(1 − 4z)) / (2z).

It is easy to see that the second recursion leads to the same quadratic equation, and so holds as well. Finally, using the binomial series,

F(z) = −(1/(2z)) ∑_{k=1}^∞ C(1/2, k) (−4z)^k = ∑_{k=1}^∞ ((2k − 2)!/((k − 1)! k!)) z^{k−1} = ∑_{n=0}^∞ ((2n)!/(n!(n + 1)!)) z^n,

which leads to the formula for c_n.
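Both the convolution recursion and the closed form can be checked against each other (a minimal sketch):

```python
from math import comb

def catalan_rec(n_max):
    """c_n via the convolution recursion c_n = sum_k c_k c_{n-k-1}."""
    c = [1]
    for n in range(1, n_max + 1):
        c.append(sum(c[k] * c[n - k - 1] for k in range(n)))
    return c

c = catalan_rec(8)
print(c)  # → [1, 1, 2, 5, 14, 42, 132, 429, 1430]
assert all(c[n] == comb(2 * n, n) // (n + 1) for n in range(9))
```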

4.2 Incidence algebra. Multiplicative functions.

Remark 4.11 (Incidence algebra). For now, L is a general (locally finite) lattice. Let

I = {functions on L}


and

I_2 = {functions on {(σ, π) : σ, π ∈ L, σ ≤ π}}.

Then I_2 is the incidence algebra under the convolution

(f ∗ g)(σ, π) = ∑_{σ≤τ≤π} f(σ, τ) g(τ, π).

Moreover, I_2 acts on I by convolution: if f ∈ I and g ∈ I_2, then f ∗ g ∈ I via

(f ∗ g)(π) = ∑_{τ≤π} f(τ) g(τ, π).

Thus we may identify f(σ) = f(0, σ).

It is easy to see that I_2 has the identity element

δ(σ, π) = 1 if σ = π, and 0 if σ ≠ π.

Another important element is

ζ(σ, π) = 1

for all σ ≤ π. In particular, for f ∈ I,

(f ∗ ζ)(π) = ∑_{σ≤π} f(σ).

Möbius inversion.

Definition 4.12. The Möbius function µ ∈ I_2 is defined by

ζ ∗ µ = µ ∗ ζ = δ.

In our context, µ exists by general results. We will compute it later.

Proposition 4.13 (Partial Möbius inversion). Let f, g ∈ I. Suppose for all π ∈ L,

f(π) = ∑_{σ≤π} g(σ).

a. Möbius inversion: for all π ∈ L,

g(π) = ∑_{σ≤π} f(σ) µ(σ, π).


b. Partial Möbius inversion: for any ω, π ∈ L,

∑_{ω∨τ=π} g(τ) = ∑_{ω≤σ≤π} f(σ) µ(σ, π).

Note two particular cases.

Proof.

f(π) = ∑_{0≤σ≤π} g(σ) = ∑_{0≤σ≤π} g(σ) ζ(σ, π) = (g ∗ ζ)(π).

Therefore

g(π) = (g ∗ ζ ∗ µ)(π) = (f ∗ µ)(π) = ∑_{σ≤π} f(σ) µ(σ, π).

More generally,

∑_{ω≤σ≤π} f(σ) µ(σ, π) = ∑_{ω≤σ≤π} ∑_{τ≤σ} g(τ) µ(σ, π) = ∑_{τ∈L} g(τ) ∑_{(ω∨τ)≤σ≤π} ζ(ω ∨ τ, σ) µ(σ, π)
= ∑_{τ∈L} g(τ) δ(ω ∨ τ, π) = ∑_{ω∨τ=π} g(τ).
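The defining relation ζ ∗ µ = δ can be solved numerically on any small lattice. A sketch on the Boolean lattice of subsets of [3], where the classical answer is µ(S, T) = (−1)^{|T∖S|} (helper names are mine):

```python
from itertools import combinations

def subsets(universe):
    """All subsets, listed by increasing size (so sub-intervals come first)."""
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

L = subsets([1, 2, 3])  # Boolean lattice, ordered by inclusion

def mobius(L):
    """Solve zeta * mu = delta: mu(s,s) = 1, mu(s,t) = -sum_{s <= r < t} mu(s,r)."""
    mu = {}
    for s in L:
        for t in L:
            if s <= t:
                mu[(s, t)] = 1 if s == t else -sum(
                    mu[(s, r)] for r in L if s <= r < t)
    return mu

mu = mobius(L)
print(all(mu[(s, t)] == (-1) ** len(t - s) for (s, t) in mu))  # → True
```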

Multiplicative functions.

f(σ, π), or even f(π), is too complicated in general. We want multiplicative functions.

Remark 4.14. In a lattice L, denote

[σ, π] = {τ ∈ L : σ ≤ τ ≤ π}.

In P(S),

[0, π] ≃ ∏_{V_i∈π} P(V_i) ≃ ∏_{V_i∈π} P(|V_i|)

and [π, 1] ≃ P(|π|). More generally, if σ ≤ π,

[σ, π] ≃ ∏_{V_i∈π} P(|σ|_{V_i}|).

In NC(S),

[0, π] ≃ ∏_{V_i∈π} NC(V_i) ≃ ∏_{V_i∈π} NC(|V_i|).

But what is [π, 1]? For example,

[{(1, 6)(2, 5)(3, 4)}, 1_6] ≄ NC(3).


Definition 4.15. Let π ∈ NC(n). Its (right) Kreweras complement K[π] ∈ NC(n) is the largest partition in NC(1̄, 2̄, …, n̄) such that

π ∪ K[π] ∈ NC(1, 1̄, 2, 2̄, …, n, n̄).

Similarly, we have the left Kreweras complement. See Figure 4.

Lemma 4.16. Let K be the Kreweras complement on NC(n).

a. K^{−1} is the left Kreweras complement.

b. K is an anti-isomorphism of lattices.

c. |K[π]| + |π| = n + 1.

Proof.

a. Picture.

b. σ ≤ π if and only if K[σ] ≥ K[π].

c. Any element of NC(n) can be included in a maximal chain, all of which look like

0_n < σ_{n−1} < … < σ_3 < σ_2 < 1_n,

where |σ_k| = k. K maps it to a maximal chain

1_n > K[σ_{n−1}] > … > K[σ_3] > K[σ_2] > 0_n

with |K[σ_k]| = n − k + 1.

Corollary 4.17. In NC(n),

[π, 1] ≃ [0, K[π]] ≃ ∏_{V_i∈K[π]} NC(V_i),

and more generally

[σ, π] ≃ ∏_{V_i∈π} [σ|_{V_i}, 1_{V_i}] ≃ ∏_{V_i∈π} ∏_{U_j∈K[σ|_{V_i}]} NC(U_j).

Exercise 4.18. Identify [σ, π] in Int(n).

Definition 4.19. Given a family of lattices {L_i : i ∈ N}, a function f on

{(σ, π) : σ ≤ π, σ, π ∈ L_i, i ∈ N}


is multiplicative if there exist fixed {f_i : i ∈ N} such that whenever

[σ, π] ≃ ∏_{i=1}^n L_{k_i},

then

f(σ, π) = ∏_{i=1}^n f_{k_i}.

Conversely, given any {f_i : i ∈ N}, we can use this formula to define a multiplicative function f.

Example 4.20. ζ(σ, π) = 1. So ζ is multiplicative, with ζ_n = 1 for all n ∈ N.

δ(σ, π) = δ_{σ=π}. Note that [π, π] ≃ L_1. So δ is multiplicative, with δ_1 = 1, δ_n = 0 for n ≥ 2.

Remark 4.21. By general theory, the Möbius function is also always multiplicative. In fact, in general

µ_{L×L′} = µ_L µ_{L′}.


Chapter 5

Free cumulant machinery.

See Lectures 8, 11, 12, 16 of [NS06].

5.1 Free cumulants.

Definition 5.1. Let (A, ϕ) be an n.c. probability space. For each n, define the moment functional

M : A × A × … × A (n times) → C

by

M[a_1, a_2, …, a_n] = ϕ[a_1 a_2 … a_n].

For each n, M is a multilinear functional. Combine these into a multiplicative function

M_π[a_1, a_2, …, a_n] = ∏_{V∈π} M[a_i : i ∈ V]

for π ∈ NC(n).

Example 5.2.

M_{(1,4)(2,3)}[a_1, a_2, a_3, a_4] = M[a_1, a_4] M[a_2, a_3] = ϕ[a_1 a_4] ϕ[a_2 a_3].

Definition 5.3. In the context of the preceding definition, define the partitioned free cumulants

R_π[a_1, a_2, …, a_n]

of a_1, …, a_n via

M = R ∗ ζ,  R = M ∗ µ.


That is,

M_π[a_1, a_2, …, a_n] = ∑_{σ≤π} R_σ[a_1, a_2, …, a_n]

and

R_π[a_1, a_2, …, a_n] = ∑_{σ≤π} M_σ[a_1, a_2, …, a_n] µ(σ, π).

In particular, for π = 1, we have the moment-cumulant formulas

R[a_1, a_2, …, a_n] = ∑_{σ∈NC(n)} M_σ[a_1, a_2, …, a_n] µ(σ, 1)

and

M[a_1, a_2, …, a_n] = ∑_{σ∈NC(n)} R_σ[a_1, a_2, …, a_n],

where

R_σ[a_1, a_2, …, a_n] = ∏_{V∈σ} R[a_i : i ∈ V].

Example 5.4.

R[a] = M[a] = ϕ[a]

is the mean of a.

M[a, b] = R[a, b] + R[a] R[b],

so

R[a, b] = ϕ[ab] − ϕ[a] ϕ[b]

is the covariance of a and b.

M[a, b, c] = R[a, b, c] + R[a] R[b, c] + R[a, c] R[b] + R[a, b] R[c] + R[a] R[b] R[c].

Notation 5.5.

R[a_1, …, a_n] = R^ϕ[a_1, …, a_n]

is a multilinear functional on (A, ϕ). If µ ∈ D(d) is the joint distribution of (a_1, …, a_d), then x_1, …, x_d ∈ C⟨x_1, …, x_d⟩ also have joint distribution µ with respect to µ. So we can consider

R^µ[x_{u(1)}, x_{u(2)}, …, x_{u(n)}].

These satisfy

ϕ[a_{u(1)} a_{u(2)} … a_{u(n)}] = µ[x_{u(1)} x_{u(2)} … x_{u(n)}] = ∑_{π∈NC(n)} R^µ_π[x_{u(1)}, x_{u(2)}, …, x_{u(n)}].

For µ ∈ D(d), we can define R^µ ∈ C⟨x_1, …, x_d⟩* (not necessarily positive) via

R^µ[x_{u(1)} x_{u(2)} … x_{u(n)}] = R^µ[x_{u(1)}, x_{u(2)}, …, x_{u(n)}].


In one variable, we have the moments

m_n[µ] = µ[x^n] = ϕ[a^n]

and the moment-cumulant formula

m_n[µ] = ∑_{π∈NC(n)} r_π[µ] = ∑_{π∈NC(n)} ∏_{V∈π} r_{|V|}[µ].

Here

r_n[µ] = R^µ[x^n] = R^µ[x, x, …, x].

Finally, we have the free cumulant generating function (combinatorial R-transform)

R^µ(z) = ∑_{n=1}^∞ r_n[µ] z^n,

and more generally for µ ∈ D(d),

R^µ(z_1, z_2, …, z_d) = ∑_{n=1}^∞ ∑_{u(1),…,u(n)=1}^d R^µ[x_{u(1)} x_{u(2)} … x_{u(n)}] z_{u(1)} z_{u(2)} … z_{u(n)}

for non-commuting indeterminates z_1, z_2, …, z_d.

Proposition 5.6. The moment and free cumulant generating functions are related by

M(z) = 1 + R(zM(z))

(recall that M has a constant term while R does not), and more generally

M(z_1, …, z_d) = 1 + R(z_1 M(z), …, z_d M(z)) = 1 + R(M(z) z_1, …, M(z) z_d),

where

M(z_1, …, z_d) = 1 + ∑_{n=1}^∞ ∑_{|u⃗|=n} M[x_u⃗] z_u⃗.


Proof. Very similar to the enumeration argument. Decompose π ∈ NC(n) according to the block containing 1, say S = {1 = v(1) < v(2) < … < v(k)} ⊂ [n], with the convention v(k + 1) = n + 1, and let V_j = [v(j) + 1, v(j + 1) − 1]. Then

M[x_{u(1)}, x_{u(2)}, …, x_{u(n)}] z_u⃗ = ∑_{π∈NC(n)} R_π[x_u⃗] z_u⃗
= ∑_{k=1}^n ∑_S R[x_{u(v(j))} : 1 ≤ j ≤ k] ∏_{j=1}^k (z_{u(v(j))} ∑_{π_j∈NC(V_j)} R_{π_j}[x_{u(i)} : i ∈ V_j] ∏_{i∈V_j} z_{u(i)})
= ∑_{k=1}^n ∑_S R[x_{u(v(j))} : 1 ≤ j ≤ k] ∏_{j=1}^k (z_{u(v(j))} M[x_{u(v(j)+1)}, …, x_{u(v(j+1)−1)}] ∏_{i∈V_j} z_{u(i)}).

Therefore

M(z) = 1 + ∑_{k=1}^∞ ∑_{|w⃗|=k} R[x_w⃗] z_{w(1)} M(z) z_{w(2)} M(z) … z_{w(k)} M(z)
= 1 + R(z_1 M(z), z_2 M(z), …, z_d M(z)).

The second identity is obtained by starting with the block containing n.
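In one variable, the relation M(z) = 1 + R(zM(z)) gives a simple recursion for moments in terms of free cumulants: m_n = ∑_{s=1}^n r_s [z^{n−s}] M(z)^s. A sketch on truncated coefficient lists (helper names are mine):

```python
def series_mul(a, b, N):
    """Product of truncated power series, degree <= N."""
    return [sum(a[i] * b[d - i] for i in range(d + 1)
                if i < len(a) and d - i < len(b))
            for d in range(N + 1)]

def moments_from_free_cumulants(r, N):
    """Solve M(z) = 1 + R(z M(z)) degree by degree:
    m_n = sum_{s=1}^n r[s] * [z^{n-s}] M(z)^s,
    which only involves the already-computed m_0, ..., m_{n-1}."""
    m = [1] + [0] * N
    for n in range(1, N + 1):
        pw = [1] + [0] * N   # M(z)^0
        total = 0
        for s in range(1, n + 1):
            pw = series_mul(pw, m, N)   # now M(z)^s, correct up to degree n - s
            total += r[s] * pw[n - s]
        m[n] = total
    return m

# semicircle: r_2 = 1, all other free cumulants 0
r = [0, 0, 1, 0, 0, 0, 0, 0, 0]
m = moments_from_free_cumulants(r, 8)
print(m)  # → [1, 0, 1, 0, 2, 0, 5, 0, 14] — even moments are the Catalan numbers
```

Taking all cumulants equal to 1 instead produces the Catalan numbers as the full moment sequence, consistent with the free Poisson example below.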

Remark 5.7. Classical (tensor) cumulants are similarly defined via

m_n[µ] = ∑_{π∈P(n)} k_π[µ],

and Boolean cumulants by

M[a_1, …, a_n] = ∑_{π∈Int(n)} B_π[a_1, …, a_n].

For the exponential moment generating function

M^µ_exp(z) = ∑_{n=0}^∞ (1/n!) m_n[µ] z^n,

one can show as before that the cumulant generating function is

ℓ(z) = ∑_{n=1}^∞ (1/n!) k_n[µ] z^n = log M_exp(z).


Exercise 5.8. For the Boolean cumulant generating function

B(z) = ∑_{n=1}^∞ ∑_{|u⃗|=n} B[x_{u(1)}, …, x_{u(n)}] z_u⃗,

show that

1 − B(z) = M(z)^{−1}

(inverse with respect to multiplication).

Enumeration II.

Proposition 5.9. On NC(n), the Möbius function is given by

µ_n = (−1)^{n−1} c_{n−1},

where c_n is the Catalan number.

Proof. Recall µ ∗ ζ = δ. Thus

∑_{π∈NC(n)} ∏_{V∈π} µ_{|V|} = ∑_{π∈NC(n)} µ(0, π) ζ(π, 1) = δ(0_n, 1_n) = 1 if n = 1, and 0 if n ≥ 2.

It follows that {µ_n : n ∈ N} are the free cumulants of a “distribution” (not positive) with m_1 = 1, m_n = 0 for n ≥ 2. Thus for F(z) = ∑_{n=1}^∞ µ_n z^n, we have

1 + z = 1 + F(z(1 + z)).

If w = z² + z, then z = (−1 + √(1 + 4w))/2 and

F(w) = ∑_{n=1}^∞ µ_n w^n = (−1 + √(1 + 4w))/2 = ∑_{n=1}^∞ (−1)^{n−1} c_{n−1} w^n,

as before.
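This can be double-checked by brute force over NC(n) (generation helpers as in the earlier sketch; names are mine):

```python
from itertools import combinations
from math import comb

def set_partitions(n):
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n]] + p[i + 1:]
        yield p + [[n]]

def is_noncrossing(p):
    for b1, b2 in combinations(p, 2):
        for i, j in combinations(sorted(b1), 2):
            for k, l in combinations(sorted(b2), 2):
                if i < k < j < l or k < i < l < j:
                    return False
    return True

catalan = lambda n: comb(2 * n, n) // (n + 1)
mu = lambda n: (-1) ** (n - 1) * catalan(n - 1)   # Mobius function on NC(n)

def check(n):
    """sum over NC(n) of prod_{V in pi} mu_{|V|}: should be 1 for n=1, 0 for n>=2."""
    total = 0
    for p in set_partitions(n):
        if is_noncrossing(p):
            prod = 1
            for block in p:
                prod *= mu(len(block))
            total += prod
    return total

print([check(n) for n in range(1, 7)])  # → [1, 0, 0, 0, 0, 0]
```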

Exercise 5.10. Use Remark 5.7 and Exercise 5.8 to compute the Möbius functions on P(n) and Int(n).

Definition 5.11. A multi-chain in a lattice is a linearly ordered subset

0 ≤ σ_1 ≤ σ_2 ≤ … ≤ σ_p ≤ 1,

possibly with repetitions.


Proposition 5.12. The number M(n, p) of multi-chains of length p in NC(n) is the Fuss–Catalan number

(1/n) C((p + 1)n, n − 1).

Proof. Note that

|NC(n)| = ∑_{0_n≤σ_1≤1_n} 1 = (ζ ∗ ζ)(0_n, 1_n).

Similarly,

M(n, p) = ∑_{0≤σ_1≤σ_2≤…≤σ_p≤1} 1 = (ζ ∗ ζ ∗ … ∗ ζ)(0_n, 1_n)  (p + 1 factors)  = (ζ^{∗(p+1)})_n.

If F_p(z) = 1 + ∑_{n=1}^∞ (ζ^{∗(p+1)})_n z^n, then F_0(z) = 1/(1 − z) and F_p(z) = F_{p−1}(zF_p(z)). It follows by induction that F_p satisfies z(F_p(z))^{p+1} = F_p(z) − 1, since

F_p(z) − 1 = F_{p−1}(zF_p(z)) − 1 = (zF_p(z))(F_{p−1}(zF_p(z)))^p = z(F_p(z))^{p+1}.

It is known that the coefficients in the expansion of the solution of this equation are the Fuss–Catalan numbers.

Proposition 5.13. The number of non-crossing partitions in NC(n) with k_i blocks of size i is the multinomial coefficient

N(n; k_i, i ∈ N) = n! / (∏_{i∈N} k_i! · (n + 1 − ∑_{i∈N} k_i)!).

For the proof, see Lecture 9 of [NS06], or an argument in [Spe98] using the Lagrange inversion formula.

Distributions.

Recall the Stieltjes inversion formula

dµ(x) = −(1/π) lim_{y→0+} ℑ G_µ(x + iy) dx,

where

G(z) = (1/z) M(1/z) = ∑_{n=0}^∞ m_n / z^{n+1}

is the Cauchy transform. Define the analytic R-transform

R(z) = (1/z) R^µ(z) = ∑_{n=1}^∞ r_n z^{n−1} = r_1 + r_2 z + r_3 z² + ….


Exercise 5.14. Use Proposition 5.6 to show that (as formal power series)

G(1/z + R(z)) = z,  i.e.  R(z) = G^{−1}(z) − 1/z,

the inverse taken with respect to composition.
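A quick numerical check for the standard semicircle law, where r_2 = 1 so R(z) = z, and G(u) = (u − √(u² − 4))/2 with the branch behaving like 1/u at infinity (a sketch; the branch-selection trick is mine):

```python
import cmath

def G(u):
    """Cauchy transform of the standard semicircle law, branch with G(u) ~ 1/u."""
    w = cmath.sqrt(u * u - 4)
    if (u.imag > 0) != (w.imag > 0):   # pick the square root on the same side as u
        w = -w
    return (u - w) / 2

R = lambda z: z   # analytic R-transform of the semicircle: r_2 = 1, all others 0

z = 0.1 - 0.1j    # small point in the lower half-plane, where G^{-1} is defined
print(abs(G(1 / z + R(z)) - z) < 1e-9)  # → True
```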

Example 5.15. Let r_1[µ_t] = t, r_n[µ_t] = 0 for n ≠ 1. Then m_n[µ_t] = t^n and µ_t = δ_t.

Lemma 5.16. Denote by NC₂(n) the non-crossing pair partitions. Then

|NC₂(2n)| = c_n = |NC(n)|.

Proof. By considering the n choices of elements in [2n] to which 1 can be paired by a non-crossing pairing, it is easy to see that |NC₂(2n)| satisfies the second recursion for the Catalan numbers.

Remark 5.17. A direct bijection between NC(n) and NC₂(2n) is provided by the map

π ↦ K[π ∪ K[π]].

See Exercise 9.42 in [NS06].

Example 5.18. Let r_2[µ_t] = t, r_n[µ_t] = 0 for n ≠ 2. Then

m_{2n}[µ_t] = t^n |NC₂(2n)| = t^n c_n

and m_{2n+1}[µ_t] = 0. Therefore

M^{µ_t}(z) = (1 − √(1 − 4tz²)) / (2tz²),

G_{µ_t}(z) = (z − √(z² − 4t)) / (2t),

and so

dµ_t(x) = (1/(2πt)) √(4t − x²) 1_{[−2√t, 2√t]} dx.

These are the semicircular distributions, which we will denote by S(0, t). The law for t = 1 is the Wigner semicircle law. Figure omitted. We will see that it is a free analog of the Gaussian (normal) distribution. A random variable with a semicircular distribution is a semicircular element.
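One can confirm numerically that the even moments of the semicircle density are the Catalan numbers (midpoint-rule sketch; names are mine):

```python
from math import sqrt, pi, comb

def semicircle_moment(k, steps=50_000):
    """k-th moment of (1/2pi) sqrt(4 - x^2) on [-2, 2], midpoint rule."""
    h = 4 / steps
    total = 0.0
    for i in range(steps):
        x = -2 + (i + 0.5) * h
        total += x ** k * sqrt(4 - x * x) / (2 * pi) * h
    return total

for n in range(5):
    approx = semicircle_moment(2 * n)
    exact = comb(2 * n, n) // (n + 1)   # Catalan number c_n
    print(2 * n, round(approx, 4), exact)
```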

Example 5.19. Let r_n[µ_t] = t for all n. Thus its R-transform is

R(z) = ∑_{n=1}^∞ t z^{n−1} = t / (1 − z).

Then

1/z + R(z) = (1 + (t − 1)z) / (z − z²),


and so

G_{µ_t}(z) = (z + 1 − t − √((z − (t + 1))² − 4t)) / (2z).

From this one can deduce that

dµ_t(x) = (1/(2πx)) √(4t − (x − (t + 1))²) dx + max(1 − t, 0) δ_0,

the absolutely continuous part being supported on the interval [(√t − 1)², (√t + 1)²], with an atom at 0 for 0 < t < 1. These are the free Poisson distributions, also called the Marchenko–Pastur distributions, which we will denote by π_t. Of particular interest is the free Poisson distribution with parameter t = 1,

dµ(x) = (1/(2π)) √((4 − x)/x) dx

on [0, 4]. Note that

m_n[π_1] = |NC(n)| = c_n = m_{2n}[S(0, 1)].

Thus if s has a semicircular distribution, s² has a free Poisson distribution. Thus “free χ² = free Poisson”.

Remark 5.20. Note that in the previous example,
\[ m_n[\mu_t] = \sum_{\pi \in NC(n)} t^{|\pi|} = \sum_{k=1}^{n} t^k\, |\{\pi \in NC(n) : |\pi| = k\}|. \]
Since we know the generating function of these moments, from this we may deduce that
\[ |\{\pi \in NC(n) : |\pi| = k\}| = N(n, k) = \frac{1}{n} \binom{n}{k} \binom{n}{k-1} \]
are the Narayana numbers.
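The Narayana count can be verified by brute force for small n. The helpers below are illustrative only: they enumerate all set partitions ("element k joins an existing block or opens a new one"), filter the non-crossing ones, and tally them by number of blocks.

```python
import math

def set_partitions(n):
    # every partition of {1,...,n}
    def rec(k, blocks):
        if k > n:
            yield [list(b) for b in blocks]
            return
        for b in blocks:
            b.append(k)
            yield from rec(k + 1, blocks)
            b.pop()
        blocks.append([k])
        yield from rec(k + 1, blocks)
        blocks.pop()
    yield from rec(1, [])

def blocks_cross(b1, b2):
    # two blocks cross iff, reading their elements in order, the block labels
    # alternate at least three times (a ...1212... pattern)
    labeled = sorted([(x, 0) for x in b1] + [(x, 1) for x in b2])
    changes = sum(1 for i in range(len(labeled) - 1)
                  if labeled[i][1] != labeled[i + 1][1])
    return changes >= 3

def is_noncrossing(p):
    return all(not blocks_cross(p[i], p[j])
               for i in range(len(p)) for j in range(i + 1, len(p)))

def narayana(n, k):
    return math.comb(n, k) * math.comb(n, k - 1) // n

counts = {}
for p in set_partitions(5):
    if is_noncrossing(p):
        counts[len(p)] = counts.get(len(p), 0) + 1
```

For n = 5 the tallies are the Narayana numbers 1, 10, 20, 10, 1, summing to c_5 = 42.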

5.2 Free independence and free cumulants.

Free cumulants of products.

Let a_1, a_2, \ldots, a_n ∈ (A, φ). Group these into consecutive products: choose
\[ 1 \le v(1) < v(2) < \ldots < v(k) = n, \]
and let
\[ A_1 = a_1 \ldots a_{v(1)}, \qquad A_2 = a_{v(1)+1} \ldots a_{v(2)}, \]
up to
\[ A_k = a_{v(k-1)+1} \ldots a_n. \]
In other words, for V_j = [v(j-1)+1, v(j)] (where as usual v(0) = 0), A_j = \prod_{i \in V_j} a_i. Clearly
\[ a_1 \ldots a_n = A_1 \ldots A_k. \]
So
\[ M[A_1, A_2, \ldots, A_k] = \varphi[A_1 A_2 \ldots A_k] = M[a_1, a_2, \ldots, a_n]. \]

What about R[A1, A2, . . . , Ak] in terms of Rπ[a1, a2, . . . , an]?

Let σ = \{V_1, V_2, \ldots, V_k\} ∈ Int(n).

Theorem 5.21.
\[ R[A_1, A_2, \ldots, A_k] = \sum_{\substack{\pi \in NC(n) \\ \pi \vee \sigma = 1_n}} R_\pi[a_1, a_2, \ldots, a_n]. \]
Here π ∨ σ = 1_n means that any two A's are eventually connected via π. More generally,
\[ R_\tau[A_1, A_2, \ldots, A_k] = \sum_{\substack{\pi \in NC(n) \\ \pi \vee \sigma = \hat{\tau}}} R_\pi[a_1, a_2, \ldots, a_n], \]
where τ ↦ \hat{\tau} is the map defined in the following remark.

Example 5.22.
\[ R[a_1 a_2, a_3] = R_{(1,2,3)} + R_{(1,3)(2)} + R_{(1)(2,3)} = R[a_1, a_2, a_3] + R[a_1, a_3] R[a_2] + R[a_1] R[a_2, a_3]. \]

Remark 5.23. Idea of proof:
\[ R[A_1, A_2, \ldots, A_k] = \sum_{\pi \in NC(k)} M_\pi[A_1, \ldots, A_k]\, \mu(\pi, 1_k), \]
and each of these π induces a partition of [n].

Given a fixed σ = \{V_1, V_2, \ldots, V_k\} ∈ Int(n), define the map
\[ NC(k) \to NC(n), \qquad \tau \mapsto \hat{\tau} \]
by
\[ \tau = \{U_1, U_2, \ldots, U_j\} \mapsto \Big\{ \bigcup_{i \in U_1} V_i,\ \bigcup_{i \in U_2} V_i,\ \ldots,\ \bigcup_{i \in U_j} V_i \Big\}. \]
For example, if σ = (1)(2, 3, 4)(5, 6) ∈ Int(6) and τ = (1, 2)(3) ∈ NC(3), then
\[ \hat{\tau} = (1, 2, 3, 4)(5, 6) \in NC(6). \]
If instead τ = (1, 3)(2), then \hat{\tau} = (1, 5, 6)(2, 3, 4). Then

a. \hat{\tau} ∈ NC(n).

b. The map τ ↦ \hat{\tau} is injective, with 0_k ↦ σ and 1_k ↦ 1_n.

c. τ ↦ \hat{\tau} preserves the partial order, and its image is
\[ \{\pi \in NC(n) : \pi \ge \sigma\} = [\sigma, 1_n]. \]

Thus [0_k, 1_k] ≃ [σ, 1_n]. In particular, μ(τ, π) = μ(\hat{\tau}, \hat{\pi}) for τ ≤ π in NC(k), where these are Möbius functions on two different lattices.

Proof of the Theorem.
\[ R_\tau[A_1, A_2, \ldots, A_k] = \sum_{\substack{\pi \in NC(k) \\ \pi \le \tau}} M_\pi[A_1, \ldots, A_k]\, \mu(\pi, \tau) = \sum_{\substack{\pi \in NC(k) \\ \pi \le \tau}} M_{\hat{\pi}}[a_1, a_2, \ldots, a_n]\, \mu(\hat{\pi}, \hat{\tau}). \]
Denoting ω = \hat{\pi} and using partial Möbius inversion (Proposition 4.13), the expression continues as
\[ = \sum_{\substack{\omega \in NC(n) \\ \sigma \le \omega \le \hat{\tau}}} M_\omega[a_1, a_2, \ldots, a_n]\, \mu(\omega, \hat{\tau}) = \sum_{\substack{\pi \in NC(n) \\ \pi \vee \sigma = \hat{\tau}}} R_\pi[a_1, a_2, \ldots, a_n]. \]

Free independence as vanishing mixed free cumulants.

Theorem 5.24. Let A_1, A_2, \ldots, A_k ⊂ (A, φ). They are freely independent with respect to φ if and only if "mixed free cumulants vanish", that is, for all u(1), u(2), \ldots, u(n) and a_i ∈ A_{u(i)},
\[ R[a_1, a_2, \ldots, a_n] \neq 0 \quad \Rightarrow \quad u(1) = u(2) = \ldots = u(n). \]

Remark 5.25. Note that no centering or alternating condition is required here.

Remark 5.26. Any block of maximal depth in a non-crossing partition is an interval block.

Proof of the easy direction. Suppose all the mixed free cumulants are zero. Let u(1) ≠ u(2) ≠ … ≠ u(n), a_i ∈ A_{u(i)}, φ[a_i] = 0. Write
\[ \varphi[a_1 a_2 \ldots a_n] = \sum_{\pi \in NC(n)} R_\pi[a_1, a_2, \ldots, a_n]. \]
By the preceding remark, each R_π has a factor
\[ R[a_i, a_{i+1}, \ldots, a_j] \]
coming from an interval block. If i < j, this is a mixed free cumulant (consecutive indices differ), and so is zero. If i = j, then R[a_i] = φ[a_i], which is also zero. It follows that each term in the sum is zero.


Lemma 5.27. If n ≥ 2 and any of a_1, a_2, \ldots, a_n is a scalar, then
\[ R[a_1, a_2, \ldots, a_n] = 0. \]

Proof. Without loss of generality, a_n = 1. The proof is by induction on n. For n = 2,
\[ R[a, 1] = \varphi[a \cdot 1] - \varphi[a]\varphi[1] = 0. \]
Assuming the result for k < n,
\[ \varphi[a_1 a_2 \ldots a_{n-1} 1] = R[a_1, a_2, \ldots, a_{n-1}, 1] + \sum_{\pi \neq 1_n} R_\pi[a_1, a_2, \ldots, a_{n-1}, 1] = R[a_1, a_2, \ldots, a_{n-1}, 1] + \sum_{\pi \in NC(n-1)} R_\pi[a_1, a_2, \ldots, a_{n-1}]\, R[1], \]
where we have used the induction hypothesis in the second step. On the other hand, this expression also equals
\[ \varphi[a_1 a_2 \ldots a_{n-1}] = \sum_{\pi \in NC(n-1)} R_\pi[a_1, a_2, \ldots, a_{n-1}]. \]
Since R[1] = 1, it follows that R[a_1, a_2, \ldots, a_{n-1}, 1] = 0.

Proof of the hard direction of the theorem. Suppose A_1, \ldots, A_k are freely independent and a_i ∈ A_{u(i)}. First write each a_i = (a_i - φ[a_i]) + φ[a_i]. Since R is multilinear, and since by the lemma any term with a scalar entry vanishes, we may assume that φ[a_i] = 0. The rest of the proof is by induction on n. For n = 2, the mixed cumulant condition says u(1) ≠ u(2), so by freeness
\[ R[a_1, a_2] = \varphi[a_1 a_2] - \varphi[a_1]\varphi[a_2] = 0. \]
For general n, if u(1) ≠ u(2) ≠ … ≠ u(n), then
\[ R[a_1, \ldots, a_n] = \varphi[a_1 \ldots a_n] - \sum_{\pi \neq 1_n} R_\pi[a_1, \ldots, a_n] = 0, \]
since the first term is zero by freeness, and the second by the induction hypothesis. Finally, if the u(i)'s are not all equal but some consecutive ones coincide, we may group them so that
\[ u(1) = \ldots = u(v(1)), \qquad u(v(1)+1) = \ldots = u(v(2)), \qquad \ldots, \qquad u(v(j-1)+1) = \ldots = u(n), \]
with u(v(1)) ≠ u(v(2)) ≠ u(v(3)) ≠ … ≠ u(n). Write
\[ A_1 = a_1 \ldots a_{v(1)}, \qquad \ldots, \qquad A_j = a_{v(j-1)+1} \ldots a_n, \]
with j < n. By the induction hypothesis, R[A_1, A_2, \ldots, A_j] = 0. But by Theorem 5.21, this expression also equals
\[ \sum_{\substack{\pi \in NC(n) \\ \pi \vee \sigma = 1_n}} R_\pi[a_1, a_2, \ldots, a_n] \]
for a certain σ ∈ Int(n), σ ≠ 1_n. Take π ≠ 1_n. By the induction hypothesis, in any non-zero term in the sum, all the a_i's in the same block of π have to belong to the same subalgebra. Also, by construction, all the a_i's in the same block of σ belong to the same subalgebra. But then the condition π ∨ σ = 1_n implies that all the a_i's belong to the same subalgebra, contradicting the assumption. It follows that R_π[a_1, a_2, \ldots, a_n] = 0 for all π ≠ 1_n, and hence also for π = 1_n.

5.3 Free convolution, distributions, and limit theorems.

Corollary 5.28. The free cumulant functional is a linearizing transform for the additive free convolution, in the sense that
\[ R_{\mu \boxplus \nu} = R_\mu + R_\nu. \]

Proof.
\[ (\mu \boxplus \nu)[x_{u(1)} x_{u(2)} \ldots x_{u(n)}] = \varphi[(X_{u(1)} + Y_{u(1)})(X_{u(2)} + Y_{u(2)}) \ldots (X_{u(n)} + Y_{u(n)})], \]
where \{X_i\}_{i=1}^d, \{Y_i\}_{i=1}^d ⊂ (A, φ) are free, μ = φ_{X_1, \ldots, X_d}, and ν = φ_{Y_1, \ldots, Y_d}. Thus
\[ R_{\mu \boxplus \nu}[x_{u(1)}, \ldots, x_{u(n)}] = R_\varphi[X_{u(1)} + Y_{u(1)}, \ldots, X_{u(n)} + Y_{u(n)}] = \sum R_\varphi[X_{u(1)} \text{ or } Y_{u(1)}, \ldots, X_{u(n)} \text{ or } Y_{u(n)}] \]
\[ = R_\varphi[X_{u(1)}, \ldots, X_{u(n)}] + R_\varphi[Y_{u(1)}, \ldots, Y_{u(n)}] = R_\mu[x_{u(1)}, \ldots, x_{u(n)}] + R_\nu[x_{u(1)}, \ldots, x_{u(n)}], \]
where we used freeness (the vanishing of mixed free cumulants) in the middle step.

Note that the same property holds for the free cumulant generating function and for the R-transform.

Remark 5.29. Random variables are classically independent if and only if their mixed classical cumulants are zero. The classical cumulant generating function satisfies
\[ \ell_{\mu * \nu} = \ell_\mu + \ell_\nu, \]
where μ ∗ ν is the classical convolution, which plays the same role for independence that the free convolution plays for free independence.


Exercise 5.30. Let A_1, A_2, \ldots, A_k ⊂ (A, φ) be subalgebras not containing the unit of A. They are Boolean independent if for any u(1) ≠ u(2) ≠ … ≠ u(n) and a_i ∈ A_{u(i)},
\[ \varphi[a_1 a_2 \ldots a_n] = \varphi[a_1]\, \varphi[a_2] \ldots \varphi[a_n]. \]

a. Show that this definition gives a degenerate structure if the subalgebras were allowed to contain the unit.

b. Show that Boolean independence is equivalent to the vanishing of mixed Boolean cumulants.

Part (b) implies in particular (you don't need to prove it; the argument is exactly the same as above) that B_{\mu \uplus \nu} = B_\mu + B_\nu. Here ⊎ is the Boolean convolution on D(d): if a, b have distributions μ, ν, respectively, and are Boolean independent, then the distribution of a + b is μ ⊎ ν.

Example 5.31 (Symmetric random walk on a free group). Recall the setting from Example 2.25: let F_n be the free group with generators x_1, \ldots, x_n. In (\mathbb{C}[F_n], τ) we have the generating elements u_i = δ_{x_i}, all of which are Haar unitaries. Consider the element
\[ X = \frac{1}{2n}\Big((u_1 + u_1^{-1}) + (u_2 + u_2^{-1}) + \ldots + (u_n + u_n^{-1})\Big). \]
Note that
\[ m_k[X] = \text{the probability of return to } e \text{ after } k \text{ steps of the simple random walk on } F_n \text{ started at } e. \]
So we want to find the distribution of X. We know that each u_i + u_i^* has the arcsine distribution, and that these elements are freely independent. Thus
\[ G_{u_i + u_i^*}(z) = \frac{1}{\sqrt{z^2 - 4}} \]
and
\[ R_{u_i + u_i^*}(z) = \sqrt{4 + \frac{1}{z^2}} - \frac{1}{z}. \]
So
\[ R_X(z) = \frac{1}{2n}\left(n\sqrt{4 + \frac{1}{(z/2n)^2}} - \frac{n}{z/2n}\right) = \sqrt{1 + \frac{n^2}{z^2}} - \frac{n}{z} \]
and
\[ \frac{1}{z} + R_X(z) = \sqrt{1 + \frac{n^2}{z^2}} - \frac{n-1}{z}. \]
Then
\[ G_X(z) = \frac{-(n-1)z + \sqrt{n^2 z^2 - (2n-1)}}{z^2 - 1} \]
and
\[ d\mu_X(x) = \frac{1}{\pi}\, \frac{\sqrt{(2n-1) - n^2 x^2}}{1 - x^2}\, dx, \]
supported on [-\sqrt{2n-1}/n, \sqrt{2n-1}/n]. These are the Kesten-McKay distributions.
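The density can be cross-checked against the walk itself. The sketch below (plain Python; the grid size and tolerance are ad hoc choices) computes return probabilities via the birth-death chain on the word length, and compares them with numerical moments of the density above.

```python
import math

def return_prob(n, k):
    # return probability after k steps of the simple random walk on F_n,
    # tracked through word length: from 0 go to 1 with prob 1; from l >= 1,
    # one of the 2n letters cancels (prob 1/(2n)), the rest extend the word
    p = {0: 1.0}
    for _ in range(k):
        q = {}
        for l, pr in p.items():
            if l == 0:
                q[1] = q.get(1, 0.0) + pr
            else:
                q[l - 1] = q.get(l - 1, 0.0) + pr / (2 * n)
                q[l + 1] = q.get(l + 1, 0.0) + pr * (2 * n - 1) / (2 * n)
        p = q
    return p.get(0, 0.0)

def km_moment(n, k, steps=100_000):
    # k-th moment of (1/pi) sqrt((2n-1) - n^2 x^2) / (1 - x^2), midpoint rule
    a = math.sqrt(2 * n - 1) / n
    h = 2.0 * a / steps
    total = 0.0
    for i in range(steps):
        x = -a + (i + 0.5) * h
        total += x**k * math.sqrt(max((2 * n - 1) - n * n * x * x, 0.0)) / (1.0 - x * x)
    return total * h / math.pi
```

For instance, for n = 2 both sides give m_2 = 1/4 and m_4 = 7/64.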


Exercise 5.32. Let
\[ \mu = \frac{1}{2}\delta_{-a} + \frac{1}{2}\delta_a \]
be a symmetric Bernoulli distribution. Compute μ ⊞ μ. Conclude that a free convolution of two discrete distributions may be continuous.
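One way to approach this exercise numerically (a sketch, not the requested proof): compute the free cumulants of μ for a = 1 from its moments, double them, and convert back to moments. If the answer is the arcsine law on [-2, 2], the expected conclusion, the even moments should be the central binomial coefficients. The conversion below uses the "first block" recursion m_n = Σ_s r_s [z^{n-s}] M(z)^s, where M(z) = Σ_j m_j z^j; the helper names are illustrative.

```python
import math

def poly_mul(p, q, trunc):
    # product of coefficient lists, truncated below degree `trunc`
    out = [0.0] * trunc
    for i, a in enumerate(p[:trunc]):
        if a:
            for j, b in enumerate(q[: trunc - i]):
                out[i + j] += a * b
    return out

def moments_to_free_cumulants(m):
    # invert m_n = sum_{s=1}^n r_s * [z^{n-s}] M(z)^s
    N = len(m) - 1
    r = [0.0] * (N + 1)
    for n in range(1, N + 1):
        acc, p = 0.0, [1.0]
        for s in range(1, n):
            p = poly_mul(p, m, n)  # p = M(z)^s truncated below degree n
            acc += r[s] * p[n - s]
        r[n] = m[n] - acc
    return r

def free_cumulants_to_moments(r):
    N = len(r) - 1
    m = [0.0] * (N + 1)
    m[0] = 1.0
    for n in range(1, N + 1):
        acc, p = 0.0, [1.0]
        for s in range(1, n + 1):
            p = poly_mul(p, m, n)
            acc += r[s] * p[n - s]
        m[n] = acc
    return m

bernoulli = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # moments of (d_{-1} + d_1)/2
doubled = [2.0 * x for x in moments_to_free_cumulants(bernoulli)]
conv = free_cumulants_to_moments(doubled)  # moments of mu boxplus mu
```

The computed moments 1, 0, 2, 0, 6, 0, 20 match the arcsine moments binom(2k, k).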

Limit theorems.

Theorem 5.33 (Free central limit theorem). Let \{X_n : n ∈ \mathbb{N}\} ⊂ (A, φ) be freely independent, identically distributed, with mean zero and variance t:
\[ \varphi[X_i] = 0, \qquad \varphi[X_i^2] = t. \]
Then
\[ S_n = \frac{X_1 + \ldots + X_n}{\sqrt{n}} \]
converges in distribution to S(0, t), the semicircular distribution with mean 0 and variance t.

Equivalently, denoting by D_α the dilation operator defined by
\[ \int_{\mathbb{R}} f(x)\, d(D_\alpha \mu)(x) = \int_{\mathbb{R}} f(\alpha x)\, d\mu(x), \]
and μ = μ_{X_i},
\[ D_{1/\sqrt{n}}(\underbrace{\mu \boxplus \mu \boxplus \ldots \boxplus \mu}_{n}) \to S(0, t). \]

Proof. Note first that by definition m_n[D_α μ] = α^n m_n[μ], from which it follows that r_n[D_α μ] = α^n r_n[μ].

Since S(0, t) is compactly supported, to prove convergence in distribution it suffices to prove convergence in moments,
\[ \varphi[S_n^k] \to \int_{\mathbb{R}} x^k\, dS(0, t)(x). \]
By the moment-cumulant formula, it is in fact enough to prove the convergence of free cumulants, that is,
\[ r_k[S_n] \to r_k[S(0, t)] = t\, \delta_{k=2}. \]
Indeed,
\[ r_1[S_n] = \frac{n\, r_1[X]}{\sqrt{n}} = 0, \qquad r_2[S_n] = \frac{n\, r_2[X]}{n} = t, \qquad r_k[S_n] = \frac{n\, r_k[X]}{n^{k/2}} \to 0 \quad (k > 2). \]


Remark 5.34. One can replace the "identically distributed" condition with a "uniformly bounded moments" condition. See [NS06] for an extensive discussion.

Theorem 5.35 (Poisson limit theorem).
\[ \Big(\Big(1 - \frac{t}{n}\Big)\delta_0 + \frac{t}{n}\,\delta_1\Big)^{\boxplus n} \to \pi_t, \]
the free Poisson distribution with parameter t.

Again, the same conclusion can be obtained under weaker assumptions.

Remark 5.36. If μ_X = (1 - α)δ_0 + αδ_1, then m_n[X] = α for all n ≥ 1 (and m_0[X] = 1). Thus X^2 and X have the same distribution. We may take
\[ X^2 = X = X^*, \]
so that X is an (orthogonal) projection with φ[X] = α.

Proof. Again, it suffices to show that
\[ n\, r_k\Big[\Big(1 - \frac{t}{n}\Big)\delta_0 + \frac{t}{n}\,\delta_1\Big] = r_k\Big[\Big(\Big(1 - \frac{t}{n}\Big)\delta_0 + \frac{t}{n}\,\delta_1\Big)^{\boxplus n}\Big] \to r_k[\pi_t] = t. \]
Denote μ_n = (1 - t/n)δ_0 + (t/n)δ_1. Note that m_k[μ_n] = t/n for all k ≥ 1. So by the moment-cumulant formula,
\[ r_k[\mu_n] = \frac{t}{n} + o\Big(\frac{1}{n}\Big), \]
and
\[ n\, r_k[\mu_n] = t + o(1) \to t \]
as n → ∞.
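The estimate r_k[μ_n] = t/n + o(1/n) can be made concrete with exact arithmetic. The sketch below inverts the moment-cumulant formula m_n = Σ_s r_s Σ_{i_1+…+i_s = n-s} m_{i_1}⋯m_{i_s} by brute force over compositions (fine for small orders; the function name is illustrative).

```python
from fractions import Fraction
from itertools import product

def free_cumulants(m):
    # m[0..N] -> free cumulants r[0..N] via the first-block recursion
    N = len(m) - 1
    r = [Fraction(0)] * (N + 1)
    for n in range(1, N + 1):
        acc = Fraction(0)
        for s in range(1, n):
            for comp in product(range(n - s + 1), repeat=s):
                if sum(comp) == n - s:
                    term = r[s]
                    for i in comp:
                        term *= m[i]
                    acc += term
        r[n] = m[n] - acc
    return r

t = Fraction(3, 2)
results = {}
for n in (10, 100, 1000):
    alpha = t / n  # m_k[mu_n] = t/n for all k >= 1
    results[n] = free_cumulants([Fraction(1)] + [alpha] * 4)
```

For instance n·r_2[μ_n] = t - t²/n exactly, and all n·r_k[μ_n] approach t as n grows.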

Remark 5.37 (Generalized Poisson limit theorem). Let ν ∈ D(d) be a state on C⟨x_1, \ldots, x_d⟩. Denote
\[ \mu_n = \Big(1 - \frac{t}{n}\Big)\delta_0 + \frac{t}{n}\,\nu \in D(d). \]
Then for any |\vec{u}| = k ≥ 1,
\[ M_{\mu_n}[x_{\vec{u}}] = \frac{t}{n}\,\nu[x_{\vec{u}}]. \]
Therefore the same argument as above gives
\[ R_{\mu_n^{\boxplus n}}[x_{\vec{u}}] = n\, R_{\mu_n}[x_{\vec{u}}] = n\Big(\frac{t}{n}\,\nu[x_{\vec{u}}] + o\Big(\frac{1}{n}\Big)\Big) \to t\,\nu[x_{\vec{u}}]. \]

We have thus proved the existence of the limit
\[ \lim_{n \to \infty} \Big(\Big(1 - \frac{t}{n}\Big)\delta_0 + \frac{t}{n}\,\nu\Big)^{\boxplus n} = \pi_{\nu, t}, \]
which is called the free compound Poisson distribution. In particular, π_t = π_{δ_1, t}. Note that since a limit of states is a state, we have proved that for any ν ∈ D(d) there exists π_{ν,t} ∈ D(d) such that
\[ R_{\pi_{\nu,t}} = t\nu. \]
We will later extend this result.


Chapter 6

Lattice paths and Fock space models.

See Lectures 2, 7, 8, 9, 13, 21 of [NS06]. For the background on lattice paths, see [Fla80], [Vie84], [Vie85].

6.1 Dyck paths and the full Fock space.

Lattice paths.

Piece-wise linear paths with vertices in Z × Z.

Each step moves one unit to the right and changes the height by an element of \{-1, 0\} ∪ N:

(1, n): rising or NE step.

(1, 0): flat or E step.

(1, -1): falling or SE step.

Paths start at (0, 0), at height 0, and never go below the x-axis. In most cases, they end on the x-axis, at height 0.

In combinatorics one moves to the right; for Fock space calculations one flips the picture and moves to the left.

For paths used in enumeration, attach to each step a weight or label w(step) (a symbol, number, operator, etc.), and set
\[ w(\text{path}) = \prod_{\text{steps}} w(\text{step}). \]
One often looks at
\[ \sum_{\text{paths}} w(\text{path}) = \sum_{\text{paths}} \prod_{\text{steps}} w(\text{step}). \]

Definition 6.1. Dyck paths start and end at height 0, and consist of the steps (1, 1) (rise) and (1, -1) (fall). Incomplete Dyck paths: same conditions, except that the path may end at any height ≥ 0.


Lemma 6.2. Dyck paths with 2n steps are in a bijective correspondence with NC_2(2n).

Proof. See Figure 6. A rising step corresponds to an opener, a falling step to a closer in the pair partition. The key point is that a Dyck path corresponds to a unique non-crossing pair partition: each closer must be matched with the most recent not-yet-matched opener.

Corollary 6.3. |Dyck paths(2n)| = c_n, the Catalan number. Moreover, if we put on the steps the weights w(rise) = 1, w(fall) = t, then
\[ \sum_{\text{Dyck paths}(2n)} w(\text{path}) = t^n c_n = m_{2n}[S(0, t)]. \]
In fact, much more is true.
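Corollary 6.3 is easy to check by exhaustive enumeration for small n. This is an illustration only; paths are encoded as strings of 'U' (rise) and 'D' (fall).

```python
import math

def dyck_paths(steps):
    # all paths with `steps` unit steps U=(1,1), D=(1,-1), staying >= 0, ending at 0
    def rec(path, h, left):
        if h > left:        # cannot get back down to height 0 in time
            return
        if left == 0:       # here h == 0 automatically, by the prune above
            yield path
            return
        yield from rec(path + "U", h + 1, left - 1)
        if h > 0:
            yield from rec(path + "D", h - 1, left - 1)
    yield from rec("", 0, steps)

def weight(path, t):
    # w(rise) = 1, w(fall) = t
    w = 1.0
    for s in path:
        if s == "D":
            w *= t
    return w

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)
```

Every Dyck path with 2n steps has exactly n falls, so the weighted sum is t^n c_n.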

Full Fock space.

Let H_R be a real Hilbert space, H = H_R ⊗_R C its complexification. The algebraic full Fock space is the vector space
\[ \mathcal{F}_{alg}(H) = \bigoplus_{n=0}^{\infty} H^{\otimes n} = \mathbb{C}\Omega \oplus H \oplus (H \otimes H) \oplus (H \otimes H \otimes H) \oplus \ldots. \]
Here Ω is called the vacuum vector. The full Fock space is the completion of F_{alg}(H) with respect to the norm coming from the inner product
\[ \langle f_1 \otimes f_2 \otimes \ldots \otimes f_n, g_1 \otimes g_2 \otimes \ldots \otimes g_k \rangle = \delta_{n=k} \prod_{i=1}^{n} \langle f_i, g_i \rangle_H. \]
Let f ∈ H_R. Define the (free) creation operator ℓ(f) and (free) annihilation operator ℓ^*(f) on F_{alg}(H) by
\[ \ell(f)(g_1 \otimes g_2 \otimes \ldots \otimes g_n) = f \otimes g_1 \otimes g_2 \otimes \ldots \otimes g_n, \qquad \ell(f)\Omega = f, \]
\[ \ell^*(f)(g_1 \otimes g_2 \otimes \ldots \otimes g_n) = \langle g_1, f \rangle\, g_2 \otimes \ldots \otimes g_n, \qquad \ell^*(f)(g) = \langle g, f \rangle \Omega, \qquad \ell^*(f)\Omega = 0. \]

Remark 6.4. We will frequently denote the creation operator by a^* or a^+, and the annihilation operator by a or a^-.

Lemma 6.5. `∗(f) is the adjoint of `(f).


Proof.
\[ \langle \ell(f)(g_1 \otimes \ldots \otimes g_n), h_0 \otimes h_1 \otimes \ldots \otimes h_n \rangle = \langle f \otimes g_1 \otimes \ldots \otimes g_n, h_0 \otimes h_1 \otimes \ldots \otimes h_n \rangle = \langle f, h_0 \rangle \prod_{i=1}^{n} \langle g_i, h_i \rangle \]
\[ = \big\langle g_1 \otimes \ldots \otimes g_n,\ \langle h_0, f \rangle\, h_1 \otimes \ldots \otimes h_n \big\rangle = \langle g_1 \otimes \ldots \otimes g_n, \ell^*(f)(h_0 \otimes h_1 \otimes \ldots \otimes h_n) \rangle. \]

Therefore, for each f ∈ H_R,
\[ X(f) = \ell(f) + \ell^*(f) \]
is a symmetric operator. Consider the n.c. probability space
\[ \big(\mathcal{A} = \mathrm{Alg}^*(X(f) : f \in H_R),\ \varphi = \langle \cdot\,\Omega, \Omega \rangle\big). \]
φ is the vacuum expectation.

One can check (using the commutant) that φ is faithful. It follows from Corollary 6.8 below that, because f ∈ H_R, φ is tracial.

Proposition 6.6. Using right-to-left Dyck paths,
\[ X(f_1) X(f_2) \ldots X(f_n)\,\Omega = \sum_{\text{incomplete Dyck paths}(n)} w(\text{path})\, f_{u(1)} \otimes f_{u(2)} \otimes \ldots \otimes f_{u(k)}, \]
where the weights are as follows: w(rise) = 1; if the i'th step is a fall matched with a rise at step j, then its weight is ⟨f_i, f_j⟩; and the unmatched rises are in positions u(1) < u(2) < … < u(k).

Example 6.7.
\[ X(f_1)\Omega = f_1, \qquad X(f_1) X(f_2)\Omega = X(f_1) f_2 = f_1 \otimes f_2 + \langle f_2, f_1 \rangle \Omega. \]
See Figure 6.

Note: since all f_i ∈ H_R, ⟨f_i, f_j⟩ = ⟨f_j, f_i⟩.

Proof. By induction,
\[ X(f_0) \sum_{\text{incomplete Dyck paths}(n)} w(\text{path})\, f_{u(1)} \otimes f_{u(2)} \otimes \ldots \otimes f_{u(k)} \]
\[ = \sum_{\text{incomplete Dyck paths}(n)} w(\text{path})\, f_0 \otimes f_{u(1)} \otimes f_{u(2)} \otimes \ldots \otimes f_{u(k)} + \sum_{\text{incomplete Dyck paths}(n)} w(\text{path})\, \langle f_{u(1)}, f_0 \rangle\, f_{u(2)} \otimes \ldots \otimes f_{u(k)} \]
\[ = \sum_{\text{incomplete Dyck paths}(n+1)} w(\text{path})\, f_{u(1)} \otimes f_{u(2)} \otimes \ldots \otimes f_{u(k)}. \]

A different argument was given in class.

Corollary 6.8 (Free Wick formula).

a.
\[ \varphi[X(f_1) X(f_2) \ldots X(f_n)] = \sum_{\text{Dyck paths}(n)} w(\text{path}) = \sum_{\substack{\pi \in NC_2(n) \\ \pi = \{(u(1), v(1)), \ldots, (u(k), v(k))\} \\ u(i) < v(i)}} \prod_{i=1}^{k} \langle f_{u(i)}, f_{v(i)} \rangle \]
(again, the order in the inner products is irrelevant).

b.
\[ R_\varphi[X(f_1), X(f_2), \ldots, X(f_n)] = \delta_{n=2}\, \langle f_1, f_2 \rangle. \]

c. Each X(f) has distribution S(0, ‖f‖^2).

d. If f_1, f_2, \ldots, f_n are mutually orthogonal, then X(f_1), X(f_2), \ldots, X(f_n) are freely independent. We call \{X(f) : f ∈ H_R\} a semicircular system, and any
\[ \{X(f_i) : i \in S,\ f_i \perp f_j \text{ for } i \neq j\} \]
a free semicircular system.
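Part (c) can also be seen very concretely: on the subspace spanned by Ω, f, f⊗f, …, with ‖f‖² = t, the operator X(f) acts as the "tridiagonal" matrix X e_0 = e_1, X e_k = e_{k+1} + t e_{k-1}. The sketch below (the truncation size is chosen large enough that no truncation error occurs for the moments computed) reproduces the semicircle moments.

```python
import math

def fock_moments(t, nmax):
    # <X^n Omega, Omega> for X = l(f) + l*(f), ||f||^2 = t, on the basis
    # e_k ~ f^{tensor k}; dimension nmax + 1 suffices, since X^n Omega has
    # components only up to level n
    dim = nmax + 1
    v = [0.0] * dim
    v[0] = 1.0  # the vacuum Omega
    out = [1.0]
    for _ in range(nmax):
        w = [0.0] * dim
        for k, c in enumerate(v):
            if c:
                if k + 1 < dim:
                    w[k + 1] += c         # creation part
                if k >= 1:
                    w[k - 1] += t * c     # annihilation part, <f, f> = t
        v = w
        out.append(v[0])
    return out  # out[n] = n-th moment

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)
```

The even moments come out as t^n c_n and the odd ones vanish, i.e. the distribution of X(f) is S(0, t).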

Definition 6.9. Let a = (a_1, a_2, \ldots, a_d) ∈ (A, φ) be random variables.

The mean of a is the vector (φ[a_1], \ldots, φ[a_d])^t.

The covariance of a is the matrix
\[ \big[\varphi[a_i a_j] - \varphi[a_i]\varphi[a_j]\big]_{i,j=1}^{d} = \big[R_\varphi[a_i, a_j]\big]_{i,j=1}^{d}. \]

Random variables are uncorrelated if the off-diagonal entries of their covariance matrix are zero.

Corollary 6.10. If elements in a semicircular system are uncorrelated, they are freely independent.

Warning: semicircular variables may be uncorrelated without being freely independent.

Theorem 6.11 (Multivariate free CLT). Let \{a_{i,n} : i = 1, \ldots, d,\ n ∈ \mathbb{N}\} ⊂ (A, φ) be random variables such that the d-tuples (a_{1,n}, \ldots, a_{d,n}) are, for n ∈ N, freely independent, identically distributed, with mean 0 and covariance Q. Then
\[ \frac{1}{\sqrt{N}}\Big(\sum_{n=1}^{N} a_{1,n}, \ldots, \sum_{n=1}^{N} a_{d,n}\Big) \to (s_1, \ldots, s_d) \]
in distribution, where (s_1, \ldots, s_d) is a semicircular system with covariance Q.


What is the meaning of the incomplete Dyck paths? Let f_1, \ldots, f_n ∈ H_R. Define the Wick product W(f_1, f_2, \ldots, f_n) ∈ B(F(H)) by W(∅) = 1 and the recursion
\[ X(f)\, W(f_1, f_2, \ldots, f_n) = W(f, f_1, f_2, \ldots, f_n) + \langle f_1, f \rangle\, W(f_2, \ldots, f_n). \]
For example, W(f) = X(f) W(∅) = X(f), and
\[ W(f_1, f_2) = X(f_1) X(f_2) - \langle f_2, f_1 \rangle. \]

Clearly W (f1, . . . , fn) is a polynomial in X(f1), . . . , X(fn).

Lemma 6.12.
\[ W(f_1, \ldots, f_n)\Omega = f_1 \otimes \ldots \otimes f_n, \]
and W(f_1, \ldots, f_n) is the unique operator in Alg^*(X(f) : f ∈ H_R) with this property.

Proof. Use the recursion. The second statement follows from the fact (which we do not prove) that the vector Ω is separating for the algebra, i.e. the only operator A in it with AΩ = 0 is A = 0.

Fix f ∈ H_R with ‖f‖² = t, and denote X(f) = X. Then W(f, \ldots, f) (with n arguments) = U_n(X, t) is a polynomial in X. It satisfies the recursion
\[ X\, U_n(X, t) = U_{n+1}(X, t) + t\, U_{n-1}(X, t), \qquad U_0(X, t) = 1. \]
The U_n are Chebyshev polynomials of the second kind (with a slightly unusual normalization). They satisfy a 3-term recursion, so by general theory (Theorem 6.43) they are orthogonal with respect to some distribution.

Lemma 6.13. \{U_n : n ∈ \mathbb{N}\} are the monic orthogonal polynomials with respect to μ = S(0, t), that is,
\[ \int_{\mathbb{R}} U_n(x, t)\, U_k(x, t)\, d\mu(x) = 0, \qquad n \neq k. \]

Proof.
\[ \int_{\mathbb{R}} U_n(x, t)\, U_k(x, t)\, d\mu(x) = \langle U_n(X, t)\, U_k(X, t)\,\Omega, \Omega \rangle = \langle U_k(X, t)\Omega, U_n(X, t)\Omega \rangle = \langle f^{\otimes k}, f^{\otimes n} \rangle = \begin{cases} 0, & k \neq n, \\ t^n, & k = n. \end{cases} \]

Corollary 6.14.
\[ X^n = \sum_{\text{incomplete Dyck paths}(n)} t^{\#\text{falls}}\; U_{\#\text{unmatched steps}}(X, t) = \sum_{k=0}^{\lfloor n/2 \rfloor} \left[\binom{n}{k} - \binom{n}{k-1}\right] t^k\, U_{n-2k}(X, t), \]
since the number of incomplete Dyck paths with n steps and k falls (hence n - 2k unmatched rises) is the ballot number \binom{n}{k} - \binom{n}{k-1}. For example, X^4 = U_4(X, t) + 3t\, U_2(X, t) + 2t^2\, U_0(X, t).


Proof. Combine Proposition 6.6 with the definition of the Wick products.
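The expansion can be checked symbolically for small n: the coefficient of U_{n-2k}(X, t) in X^n should be (the number of incomplete Dyck paths with n steps and k falls) times t^k, i.e. the ballot number C(n, k) - C(n, k-1) times t^k. A quick check, representing polynomials as coefficient lists (an illustration, not part of the notes):

```python
import math

def ballot(n, k):
    # number of incomplete Dyck paths with n steps and k falls
    return math.comb(n, k) - (math.comb(n, k - 1) if k >= 1 else 0)

def chebyshev_U(nmax, t):
    # U_0 = 1, U_1 = x, x U_n = U_{n+1} + t U_{n-1}; coefficient lists in x
    U = [[1.0], [0.0, 1.0]]
    for n in range(1, nmax):
        shifted = [0.0] + U[n]                                   # x * U_n
        padded = U[n - 1] + [0.0] * (len(shifted) - len(U[n - 1]))
        U.append([shifted[i] - t * padded[i] for i in range(len(shifted))])
    return U

def expand_power(n, t):
    # write x^n = sum_k coeffs[k] * U_k(x, t), peeling off top degrees
    U = chebyshev_U(max(n, 1), t)
    poly = [0.0] * n + [1.0]
    coeffs = {}
    for deg in range(n, -1, -1):
        c = poly[deg]
        if abs(c) > 1e-12:
            coeffs[deg] = c
            for i, u in enumerate(U[deg]):
                poly[i] -= c * u
    return coeffs
```

For n = 4 this returns the coefficients 1, 3t, 2t² of U_4, U_2, U_0.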

What about general W (f1, f2, . . . , fn)?

Lemma 6.15. Let f_1, \ldots, f_d be mutually orthogonal, and let \vec{u} = (u(1), \ldots, u(n)), u(i) ∈ [d], be such that
\[ \vec{u} = (\underbrace{v(1), \ldots, v(1)}_{k(1)},\ \underbrace{v(2), \ldots, v(2)}_{k(2)},\ \ldots,\ \underbrace{v(r), \ldots, v(r)}_{k(r)}) \]
with v(1) ≠ v(2) ≠ … ≠ v(r) (neighbors different). Then
\[ W(f_{u(1)}, f_{u(2)}, \ldots, f_{u(n)}) = \prod_{i=1}^{r} U_{k(i)}\big(X(f_{v(i)}), \|f_{v(i)}\|^2\big). \]
It is natural to call these multivariate Chebyshev polynomials.

Proof. Using Lemma 6.12, it suffices to show that if f ⊥ g_1 then
\[ U_k\big(X(f), \|f\|^2\big)\, g_1 \otimes \ldots \otimes g_n = f^{\otimes k} \otimes g_1 \otimes \ldots \otimes g_n. \]
Indeed, by induction, for k ≥ 1
\[ U_{k+1}\big(X(f), \|f\|^2\big)\, g_1 \otimes \ldots \otimes g_n = \big(X(f)\, U_k(X(f), \|f\|^2) - \|f\|^2\, U_{k-1}(X(f), \|f\|^2)\big)\, g_1 \otimes \ldots \otimes g_n \]
\[ = X(f)\, f^{\otimes k} \otimes g_1 \otimes \ldots \otimes g_n - \|f\|^2\, f^{\otimes(k-1)} \otimes g_1 \otimes \ldots \otimes g_n = f^{\otimes(k+1)} \otimes g_1 \otimes \ldots \otimes g_n. \]

Exercise 6.16. Let H_R = R^n be a real Hilbert space, H = C^n its complexification, and
\[ \Gamma_0(H) = W^*(X(f) : f \in H_R). \]
Prove that
\[ \Gamma_0(H) \simeq L(F_{\dim H}). \]
You may use without proof the analytic facts L(Z) ≃ L^∞(T) (proved via Fourier transforms) and, for a = a^*, W^*(a) ≃ L^∞(σ(a)). To be completely precise, you would also need a von Neumann algebra version of Proposition 6.6, and Theorem 7.9 from [NS06], namely that the von Neumann algebra generated by freely independent subalgebras is (isomorphic to) their free product.

Exercise 6.17 (Exercise 8.26 in [NS06]). Fill in the details of the following argument.

For f ∈ H_R, show that the operator X(f) = ℓ(f) + ℓ^*(f) on F(H) has, with respect to the vacuum expectation on B(F(H)), the same distribution as the operator
\[ X\Big(\frac{f \oplus f \oplus \ldots \oplus f}{\sqrt{N}}\Big) = \ell\Big(\frac{f \oplus f \oplus \ldots \oplus f}{\sqrt{N}}\Big) + \ell^*\Big(\frac{f \oplus f \oplus \ldots \oplus f}{\sqrt{N}}\Big) \]
on F(H^N), with respect to the vacuum expectation on B(F(H^N)). Here H^N = H ⊕ … ⊕ H (N copies). Show that the second operator is a sum of N freely independent operators, and apply the free central limit theorem to conclude that X(f) is a semicircular element. Hint: what is \big\|\frac{f \oplus f \oplus \ldots \oplus f}{\sqrt{N}}\big\|?


6.2 Łukasiewicz paths and Voiculescu’s operator model.

Lattice paths and Fock spaces II.

Definition 6.18. Łukasiewicz paths are lattice paths starting and ending at height 0, with rising steps (1, k), k ∈ N, flat steps (1, 0), and falling steps (1, -1). We will always use weight 1 for the falling steps and weight r_{k+1} for a rising step (1, k), including weight r_1 for a flat step (1, 0).

Exercise 6.19. Use the idea in Figure 6 to show that Łukasiewicz paths with n steps are in a bijection with NC(n).

Proposition 6.20 (Voiculescu's operator model). Let μ ∈ D(1), with free cumulants \{r_k : k ∈ N\}. Let f be a unit vector, and ℓ = ℓ(f) a free creation operator on a full Fock space. Note that the only relation between ℓ and ℓ^* is ℓ^*ℓ = 1. Let
\[ X_\mu = \ell + \sum_{k=1}^{\infty} r_k (\ell^*)^{k-1} = \ell + r_1 + r_2 \ell^* + r_3 (\ell^*)^2 + \ldots, \]
a formal power series. Then
\[ \langle X_\mu^n \Omega, \Omega \rangle = m_n[\mu], \]
where this expression depends only on finitely many terms of the series, and so is well defined. In other words, μ is the distribution of X_μ.

Proof.
\[ \langle X_\mu^n \Omega, \Omega \rangle = \Big\langle \Big(\ell + \sum_{k=1}^{\infty} r_k (\ell^*)^{k-1}\Big)^n \Omega, \Omega \Big\rangle = \sum_{\text{Łukasiewicz paths}(n)} w(\text{path}) = \sum_{\pi \in NC(n)} \prod_{V \in \pi} r_{|V|} = m_n[\mu]. \]
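The last equality can be tested by brute force: enumerate Łukasiewicz paths directly and compare weighted sums with known special cases (all cumulants equal to 1 gives |NC(n)| = c_n; only r_2 nonzero gives the semicircle moments). A sketch, with illustrative helper names:

```python
import math

def lukasiewicz_paths(n):
    # steps: +k (k >= 1, rise), 0 (flat), -1 (fall); start and end at height 0,
    # never going below 0
    def rec(path, h, left):
        if h > left:                        # cannot return to height 0
            return
        if left == 0:
            yield tuple(path)
            return
        for step in range(-1, left - h):    # h + step <= left - 1 keeps return possible
            if h + step >= 0:
                path.append(step)
                yield from rec(path, h + step, left - 1)
                path.pop()
    yield from rec([], 0, n)

def weighted_sum(n, r):
    # r[k] = k-th free cumulant; falls have weight 1, a step (1, k) weight r_{k+1}
    total = 0
    for path in lukasiewicz_paths(n):
        w = 1
        for s in path:
            if s >= 0:
                w *= r[s + 1]
        total += w
    return total

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)
```

With r_k ≡ 1 the weighted sum counts NC(n); with only r_2 = 1 it reproduces the semicircular moment sequence.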

Proposition 6.21. If f_1, \ldots, f_k are mutually orthogonal, then the algebras Alg^*(ℓ(f_i)), 1 ≤ i ≤ k, are freely independent with respect to the vacuum expectation.

Proof. See Proposition 7.15 in [NS06], or imitate the proof of Proposition 3.5.

Proposition 6.22. Let f_1 ⊥ f_2 be unit vectors, ℓ_1 = ℓ(f_1) and ℓ_2 = ℓ(f_2), so that the relations between them are
\[ \ell_i^* \ell_j = \delta_{i=j}. \]
Let
\[ X_\mu = \ell_1 + \sum_{k=1}^{\infty} r_k[\mu]\, (\ell_1^*)^{k-1}, \qquad X_\nu = \ell_2 + \sum_{k=1}^{\infty} r_k[\nu]\, (\ell_2^*)^{k-1}. \]


Then X_μ and X_ν are freely independent, and X_μ + X_ν has the same distribution as
\[ Y = \ell_1 + \sum_{k=1}^{\infty} (r_k[\mu] + r_k[\nu])\, (\ell_1^*)^{k-1}. \]
This gives an alternative proof of the fact that
\[ r_k[\mu \boxplus \nu] = r_k[\mu] + r_k[\nu]. \]

Proof. Free independence follows from the preceding proposition. Let
\[ Z = X_\mu + X_\nu = (\ell_1 + \ell_2) + \sum_{k=1}^{\infty} r_k[\mu]\, (\ell_1^*)^{k-1} + \sum_{k=1}^{\infty} r_k[\nu]\, (\ell_2^*)^{k-1} \]
and note that
\[ Y = \ell_1 + \sum_{k=1}^{\infty} r_k[\mu]\, (\ell_1^*)^{k-1} + \sum_{k=1}^{\infty} r_k[\nu]\, (\ell_1^*)^{k-1}. \]
We verify that for all n,
\[ \langle Z^n \Omega, \Omega \rangle = \langle Y^n \Omega, \Omega \rangle, \]
since in a Łukasiewicz path, a step of one type (1 or 2) can only be matched with a step of the same type.

This can also be done in the multivariate case. See [Haa97] for an alternative argument.

6.3 Motzkin paths and Schurmann's operator model. Freely infinitely divisible distributions.

Lattice paths and Fock spaces III. See [GSS92].

Definition 6.23. A Motzkin path is a lattice path starting and ending at height 0, with rising steps (1, 1),flat steps (1, 0), and falling steps (1,−1).

Remark 6.24. In this section, rising and falling steps have weight 1, while flat steps at positive height have weight 2. Equivalently, flat steps come with two possible labels, "s" and "m", where flat steps at height 0 can only have the "s" label.

Definition 6.25. Each step in a Motzkin path has a height: a rising step from k to k + 1, a flat step from k to k, and a falling step from k to k - 1 all have height k.

For a partition π and each position k, the number of open blocks at k is
\[ |\{V \in \pi : \min(V) < k \le \max(V)\}|. \]
See the figure for the next lemma for an example.


Lemma 6.26. Motzkin paths with n steps, with labels as in the preceding remark, are in a bijection with NC(n). Under this bijection, the height of a step in the path equals the number of open blocks at that location in the partition.

Proof. See Figure 6. Openers of blocks correspond to rising steps, closers to falling steps, middle elements to flat steps labeled with m, and singletons to flat steps labeled with s.

Remark 6.27. Under this bijection, Dyck paths correspond to NC_2(n), Motzkin paths with no flat steps at height 0 to non-crossing partitions with no singletons, and unlabeled Motzkin paths to NC_{1,2}(n), the non-crossing partitions whose blocks have at most 2 elements.

Remark 6.28. Let H_R be a real Hilbert space, H its complexification, F(H) its full Fock space, and φ = ⟨·Ω, Ω⟩ the vacuum expectation. Let
\[ \lambda_1, \lambda_2, \ldots, \lambda_d \in \mathbb{R}, \qquad \xi_1, \xi_2, \ldots, \xi_d \in H_R, \]
and let
\[ T_1, T_2, \ldots, T_d \in \mathcal{L}(H_R) \]
be symmetric, possibly unbounded, linear operators defined on a common domain D ⊂ H_R containing \{ξ_i : 1 ≤ i ≤ d\} and invariant under all T_i. That is, for any \vec{u},
\[ T_{u(1)} T_{u(2)} \ldots T_{u(n-1)}\, \xi_{u(n)} \]
is defined. On the Fock space, we have the free creation and annihilation operators ℓ(ξ_i), ℓ^*(ξ_i). Define
\[ \Lambda(T_i)(f_1 \otimes f_2 \otimes \ldots \otimes f_n) = (T_i f_1) \otimes f_2 \otimes \ldots \otimes f_n, \qquad \Lambda(T_i)\Omega = 0, \]
and
\[ X_i = \ell(\xi_i) + \ell^*(\xi_i) + \Lambda(T_i) + \lambda_i. \]
It is easy to check that Λ(T) is symmetric if T is. So each X_i is symmetric. What is the joint distribution of (X_1, \ldots, X_d) with respect to φ?

Lemma 6.29.
\[ R_\varphi[X_i] = \lambda_i, \qquad R_\varphi[X_i, X_j] = \langle \xi_i, \xi_j \rangle, \qquad R_\varphi[X_{u(1)}, X_{u(2)}, \ldots, X_{u(n)}] = \langle \xi_{u(1)}, T_{u(2)} \ldots T_{u(n-1)}\, \xi_{u(n)} \rangle. \]

Proof.
\[ \varphi[X_{u(1)} X_{u(2)} \ldots X_{u(n)}] = \langle X_{u(1)} X_{u(2)} \ldots X_{u(n)}\,\Omega, \Omega \rangle = \sum_{\text{Motzkin paths}(n)} \langle w(\text{path})\,\Omega, \Omega \rangle, \]
where a rising step in position i has weight ℓ^*(ξ_{u(i)}), a falling step ℓ(ξ_{u(i)}), a flat "m" step Λ(T_{u(i)}), and a flat "s" step λ_{u(i)}. For example, the term corresponding to the path/partition in Figure 6 is (moving right-to-left)
\[ \langle \xi_{12}, T_{10}\, \xi_4 \rangle\; \lambda_{11}\; \langle \xi_9, \xi_5 \rangle\; \langle \xi_8, T_7\, \xi_6 \rangle\; \lambda_3\; \langle \xi_2, \xi_1 \rangle. \]

Which joint distributions arise in this way?

Definition 6.30. A distribution μ ∈ D(d) is freely (or ⊞-) infinitely divisible if for all n, μ^{⊞ 1/n} exists, that is, there exists ν_n ∈ D(d) such that ν_n^{⊞ n} = μ. Note: positivity is crucial.

Proposition 6.31. The following are equivalent.

a. μ is ⊞-infinitely divisible.

b. For all t ≥ 0, all k ∈ N, and all |\vec{u}| = k, there is μ_t ∈ D(d) with t\, R_\mu[x_{\vec{u}}] = R_{\mu_t}[x_{\vec{u}}].

c. There exists a free convolution semigroup
\[ \{\mu_t : t \ge 0\}, \qquad \mu_t \boxplus \mu_s = \mu_{t+s}, \qquad \mu_1 = \mu. \]

Proof. μ is ⊞-infinitely divisible if and only if there exists ν_n with \frac{1}{n} R_\mu = R_{\nu_n}. Denote μ_{1/n} = ν_n; then for p/q ∈ Q^+, μ_{p/q} = ν_q^{\boxplus p} ∈ D(d) satisfies
\[ R_{\mu_{p/q}} = \frac{p}{q}\, R_\mu. \]
By continuity, this can be extended to real t ≥ 0.

Example 6.32. Recall that r_n[S(0, t)] = t δ_{n=2} and r_n[π_t] = t. It follows that \{S(0, t) : t ≥ 0\} and \{π_t : t ≥ 0\} form free convolution semigroups, and in particular are ⊞-infinitely divisible. More generally, for any ν ∈ D and any t ≥ 0, the corresponding free compound Poisson distribution π_{ν,t} is ⊞-infinitely divisible.

See Lecture 13 of [NS06] for a discussion of general limit theorems.

Definition 6.33. Denote
\[ \mathbb{C}_0\langle x_1, \ldots, x_d \rangle = \{P \in \mathbb{C}\langle x_1, \ldots, x_d \rangle \text{ with no constant term}\}. \]
A self-adjoint linear functional ρ on C⟨x⟩ is conditionally positive if it is positive on C_0⟨x⟩.

Example 6.34. On C[x], let ρ[x^n] = δ_{n=2}. Then
\[ \rho[(x^2 - 1)^2] = \rho[x^4 - 2x^2 + 1] = -2, \]
so ρ is not positive. (It is, however, conditionally positive.)


Lemma 6.35. Every P ∈ C_0⟨x_1, \ldots, x_d⟩ is of the form
\[ P = \sum_{i=1}^{d} \lambda_i x_i + \sum_{i,j=1}^{d} x_i\, P_{ij}(x)\, x_j, \qquad P_{ij} \in \mathbb{C}\langle x_1, \ldots, x_d \rangle. \]
If ψ ∈ D(d), then ρ defined arbitrarily on 1 and on the x_i, and by
\[ \rho\Big[\sum_{i,j=1}^{d} x_i P_{ij} x_j\Big] = \sum_{i=1}^{d} \psi[P_{ii}], \]
is conditionally positive. If d = 1, the converse also holds. That is, any conditionally positive functional on C[x] satisfies ρ[x^n] = ψ[x^{n-2}] for some positive ψ and all n ≥ 2.

Proof. Write P ∈ C_0⟨x_1, \ldots, x_d⟩ as P = \sum_{i=1}^{d} P_i x_i. Then
\[ \rho[P^* P] = \rho\Big[\Big(\sum_{i=1}^{d} P_i x_i\Big)^* \sum_{j=1}^{d} P_j x_j\Big] = \rho\Big[\sum_{i,j=1}^{d} x_i P_i^* P_j x_j\Big] = \sum_{i=1}^{d} \psi[P_i^* P_i] \ge 0. \]
Conversely, if d = 1, ρ is conditionally positive, and ψ is defined by ψ[x^n] = ρ[x^{n+2}], then
\[ \psi[P^* P] = \rho[x P^* P x] \ge 0. \]

Remark 6.36 (Schurmann's construction). Let ρ on C_0⟨x_1, \ldots, x_d⟩ be conditionally positive. The GNS construction for (C_0⟨x_1, \ldots, x_d⟩, ρ) produces a Hilbert space
\[ H = \overline{\mathbb{C}_0\langle x_1, \ldots, x_d \rangle / \mathcal{N}} \]
with the inner product
\[ \langle \overline{P}, \overline{Q} \rangle = \rho[Q^* P]. \]
Note that C_0⟨x_1, \ldots, x_d⟩ is not unital, so there is no natural state vector for ρ. However, we can define, for i = 1, 2, \ldots, d,
\[ \lambda_i = \rho[x_i] \in \mathbb{R}, \qquad \xi_i = \overline{x_i} \in H, \qquad T_i = \lambda(x_i) \in \mathcal{L}(H), \]
where λ(x_i) denotes left multiplication by x_i. Then
\[ \rho[x_i] = \lambda_i, \qquad \rho[x_i x_j] = \langle \xi_i, \xi_j \rangle, \qquad \rho[x_{u(1)} x_{u(2)} \ldots x_{u(n)}] = \langle \xi_{u(1)}, T_{u(2)} \ldots T_{u(n-1)}\, \xi_{u(n)} \rangle. \]
Note also that the conditions from Remark 6.28 are satisfied. Therefore that construction gives a state μ ∈ D(d) with
\[ R_\mu[x_{u(1)}, x_{u(2)}, \ldots, x_{u(n)}] = R_\varphi[X_{u(1)}, X_{u(2)}, \ldots, X_{u(n)}] = \rho[x_{u(1)} x_{u(2)} \ldots x_{u(n)}]. \]


Proposition 6.37 (Schoenberg correspondence). For distributions with finite moments, μ is ⊞-infinitely divisible if and only if R_μ is a conditionally positive functional.

Proof. (⇒) μ is part of a free convolution semigroup \{μ_t\}, where
\[ \mu_t[x_{u(1)} x_{u(2)} \ldots x_{u(n)}] = \sum_{\pi \in NC(n)} t^{|\pi|}\, (R_\mu)_\pi[x_{u(1)}, x_{u(2)}, \ldots, x_{u(n)}]. \]
Then
\[ R_\mu[x_{u(1)} x_{u(2)} \ldots x_{u(n)}] = R_\mu[x_{u(1)}, x_{u(2)}, \ldots, x_{u(n)}] = \frac{d}{dt}\Big|_{t=0} \mu_t[x_{u(1)} x_{u(2)} \ldots x_{u(n)}]. \]
On the other hand, if P^*P has no constant term, then
\[ \frac{d}{dt}\Big|_{t=0} \mu_t[P^* P] = \lim_{t \to 0^+} \frac{\mu_t[P^* P] - \mu_0[P^* P]}{t} \ge 0, \]
since μ_0[P^* P] = δ_0[P^* P] = (P^* P)(0) = 0 while each μ_t[P^* P] ≥ 0.

For the converse direction, suppose that R_μ is conditionally positive. Then tR_μ is also conditionally positive, and so by the Fock space constructions above, tR_μ = R_{\mu_t} for some μ_t ∈ D(d).

Exercise 6.38. Let (A, ψ) be an n.c. probability space, and suppose for simplicity that ψ is faithful. On the full Fock space F(L^2(A, ψ)), for f ∈ A symmetric, define the creation and annihilation operators ℓ(f), ℓ^*(f) as before, and
\[ \Lambda(f)(g_1 \otimes g_2 \otimes \ldots \otimes g_n) = (f g_1) \otimes g_2 \otimes \ldots \otimes g_n, \qquad \Lambda(f)\Omega = 0. \]
Then each Λ(f), and also
\[ X(f) = \ell(f) + \ell^*(f) + \Lambda(f) + \psi[f], \]
is symmetric. Compute
\[ \langle X(f_1) X(f_2) \ldots X(f_n)\,\Omega, \Omega \rangle. \]
Deduce that each X(f) has a free compound Poisson distribution, and that if f_i f_j = 0 for i ≠ j, then the X(f_i) are freely independent with respect to the vacuum expectation.

Remark 6.39 (Classically infinitely divisible distributions). Let HR and the algebraic Fock space Falg(H) be as before. On this space, define the pre-inner product

⟨f1 ⊗ … ⊗ fn, g1 ⊗ … ⊗ gk⟩s = δ_{n=k} ∑_{α∈Sym(n)} ∏_{i=1}^n ⟨fi, gα(i)⟩.

For example,

⟨f1 ⊗ f2, g1 ⊗ g2⟩s = ⟨f1, g1⟩⟨f2, g2⟩ + ⟨f1, g2⟩⟨f2, g1⟩.


This is a degenerate inner product, since ‖f ⊗ g − g ⊗ f‖s = 0. Quotienting out by tensors of norm 0, we get the symmetric Fock space Fs(H), on which f ⊗s g = g ⊗s f. On this space, define the creation operators

a+(f)(g1 ⊗ … ⊗ gn) = f ⊗ g1 ⊗ … ⊗ gn,

annihilation operators

a−(f)(g1 ⊗ … ⊗ gn) = ∑_{j=1}^n ⟨gj, f⟩ g1 ⊗ … ⊗ ǧj ⊗ … ⊗ gn

(where ǧj means the factor gj is omitted), and preservation operators

Λ(T)(g1 ⊗ … ⊗ gn) = ∑_{j=1}^n g1 ⊗ … ⊗ (Tgj) ⊗ … ⊗ gn.

Again,

Xi = a+(ξi) + a−(ξi) + Λ(Ti) + λi

are symmetric (with respect to the new inner product). However, for ξi, Ti, λi as in Remark 6.28, these Xi's commute. Moreover, if µ ∈ D(d) is their joint distribution, its classical cumulants are precisely

kµ[xu(1), …, xu(n)] = ⟨ξu(1), Tu(2)…Tu(n−1)ξu(n)⟩.

Proposition 6.40. µ ∈ D(d) is ∗-infinitely divisible if and only if µ comes from (λi, ξi, Ti)_{i=1}^d on a symmetric Fock space, if and only if kµ is a conditionally positive functional.

Corollary 6.41. The Bercovici-Pata bijection is the bijection

BP : {∗-infinitely divisible distributions} ↔ {⊞-infinitely divisible distributions},

where

kµ = R_{BP[µ]}.

This is a bijection since both sets {kµ : µ ∗-ID} and {Rµ : µ ⊞-ID} consist precisely of all conditionally positive functionals. Moreover, this map is a homomorphism with respect to the convolution operations,

BP[µ ∗ ν] = BP[µ] ⊞ BP[ν],

and a homeomorphism with respect to convergence in moments.

This homomorphism property does not extend to D(d), even for d = 1.

Exercise 6.42 (Boolean infinite divisibility). Let H and (λi, ξi, Ti)_{i=1}^d be as before. The Boolean Fock space is simply CΩ ⊕ H, with Boolean creation operators

a+(f)Ω = f,  a+(f)g = 0,

and annihilation operators

a−(f)Ω = 0,  a−(f)g = ⟨g, f⟩Ω.

Extend T ∈ L(H) to the Boolean Fock space by TΩ = 0. Finally, denote by PΩ the projection onto Ω: PΩΩ = Ω, PΩg = 0. Define

Xi = a+(ξi) + a−(ξi) + Ti + λiPΩ.

a. Show that the joint distribution of (X1, …, Xd) with respect to the vacuum expectation is ⊎-infinitely divisible.

Hint: in the free case, the argument goes as follows. Via the Schoenberg correspondence, free infinite divisibility is equivalent to the conditional positivity of the free cumulant functional. This in turn is equivalent to a representation on a free Fock space coming from the data (λi, ξi, Ti)_{i=1}^d: one direction is Schurmann's construction, the other follows from the existence of the Fock space constructions for all re-scalings of this data by positive t.

b. Let µ ∈ D(d) be arbitrary. Using the GNS construction, we can represent the n.c. probability space (C⟨x1, …, xd⟩, µ) on a Hilbert space K with µ = ⟨·Ω, Ω⟩. Let H = K ⊖ CΩ,

λi = ⟨λ(xi)Ω, Ω⟩ ∈ C,
ξi = (λ(xi) − λiPΩ)Ω ∈ K,
Ti = λ(xi) − a+(ξi) − a−(ξi) − λiPΩ ∈ L(K).

Show that these satisfy all the conditions in Remark 6.28 and in part (a) (on H, not on K). In particular, show that ξi ∈ H and that Ti maps H to itself. Show that on the Boolean Fock space of H, CΩ ⊕ H ≃ K, the joint distribution of X1, …, Xd arising from this family (λi, ξi, Ti)_{i=1}^d with respect to the vacuum state is µ. Conclude that any distribution is infinitely divisible in the Boolean sense.

6.4 Motzkin paths and orthogonal polynomials.

Lattice paths IV.

Let {β0, β1, β2, …} ⊂ ℝ and {γ1, γ2, γ3, …} ⊂ [0, ∞). Consider Motzkin paths with the following weights: rising steps have weight 1, flat steps at height n have weight βn, falling steps at height n have weight γn. Also fix γ0 = 1. See Figure 6.

Let H = C. Then each C^{⊗n} ≃ C, so by denoting

1 ⊗ 1 ⊗ … ⊗ 1 (n factors) = en,

e0 = Ω, we identify

F(C) = Span(en : n ≥ 0).


Choose the inner product so that {en : n ≥ 0} form an orthogonal basis, with ‖en‖² = γnγn−1…γ1, ‖e0‖ = 1. Define

a+ en = en+1,
a− en = γn en−1,  a− e0 = 0,
a0 en = βn en.

Then (a+)∗ = a−, a0 is symmetric, and

X = a+ + a− + a0

is also symmetric, with some distribution µ,

⟨X^n e0, e0⟩ = ∫_ℝ x^n dµ(x).

Theorem 6.43 (Darboux, Chebyshev, Stieltjes, Viennot, Flajolet, etc.).

a. See Figure 6.

m1[µ] = β0,  m2[µ] = β0² + γ1,  m3[µ] = β0³ + 2β0γ1 + β1γ1,

and in general

mn[µ] = ∑_{Motzkin paths(n)} w(path).

b. The moment generating function of µ has a continued fraction expansion

M(z) = ∑_{n=0}^∞ mn[µ] z^n = 1/(1 − β0z − γ1z²/(1 − β1z − γ2z²/(1 − β2z − γ3z²/(1 − ⋯)))).

c. Any measure µ all of whose moments are finite arises in this way. The corresponding pair of sequences

J(µ) =
( β0 β1 β2 …
  γ1 γ2 γ3 … )

are the Jacobi parameters of µ.

d. Let {Pn(x) : n ≥ 0} be the monic orthogonal polynomials with respect to µ, that is, Pn(x) = x^n + … and

∫_ℝ Pn(x) Pk(x) dµ(x) = 0,  n ≠ k.


Then these polynomials satisfy the three-term recursion

x Pn(x) = Pn+1(x) + βn Pn(x) + γn Pn−1(x),

where by convention P0(x) = 1, P−1 = 0.

Proof.

a. Clear from the Fock space model.

b. Figure omitted. We will show by induction that the continued fraction up to γn expands to the sum over Motzkin paths of height at most n, plus higher order terms. Indeed, taking an extra level in the continued fraction corresponds to replacing each γn with γn times the sum over Motzkin paths of height at most 1, with labels βn, γn+1. But inserting such paths between each rise at height n and the corresponding fall is exactly how one goes from a path of height n to a path of height n + 1.

c. It suffices to show how to compute the Jacobi parameters from the moments. Indeed,

γ1γ2…γn = m2n − Poly(γ1, …, γn−1, β0, …, βn−1),
γ1γ2…γn βn = m2n+1 − Poly(γ1, …, γn, β0, …, βn−1).

d. Note that by assumption,

⟨Pn(X)e0, Pk(X)e0⟩ = 0,  n ≠ k,

and

Pn(X)e0 ∈ Span(ei : 0 ≤ i ≤ n).

It follows that for each n, {Pi(X)e0 : 0 ≤ i ≤ n} is a basis for Span(ei : 0 ≤ i ≤ n). Moreover, Pn(X)e0 − en ∈ Span(ei : 0 ≤ i ≤ n − 1), which implies that in fact Pn(X)e0 = en. But

X Pn(X)e0 = X en = en+1 + βn en + γn en−1 = Pn+1(X)e0 + βn Pn(X)e0 + γn Pn−1(X)e0.
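Part (a) of Theorem 6.43 can be checked numerically. Below is a minimal sketch (the helper name `moments_from_jacobi` is ours, not from the notes): it computes mn[µ] as a weighted Motzkin-path sum by dynamic programming and verifies the stated formulas for m1, m2, m3.

```python
from functools import lru_cache

def moments_from_jacobi(beta, gamma, n_max):
    """Moments m_n as weighted Motzkin-path sums: rising steps have
    weight 1, a flat step at height h has weight beta[h], and a
    falling step at height h has weight gamma[h]."""
    @lru_cache(maxsize=None)
    def walk(steps, h):
        # total weight of length-`steps` path segments from height h to 0
        if steps == 0:
            return 1.0 if h == 0 else 0.0
        total = walk(steps - 1, h + 1) + beta[h] * walk(steps - 1, h)
        if h > 0:
            total += gamma[h] * walk(steps - 1, h - 1)
        return total
    return [walk(n, 0) for n in range(n_max + 1)]

b0, b1, g1 = 0.3, -0.5, 2.0
beta = (b0, b1, 0.7, 0.1)
gamma = (1.0, g1, 1.5, 0.8)   # gamma[0] = 1 by the convention above
m = moments_from_jacobi(beta, gamma, 3)
assert abs(m[1] - b0) < 1e-12
assert abs(m[2] - (b0**2 + g1)) < 1e-12
assert abs(m[3] - (b0**3 + 2*b0*g1 + b1*g1)) < 1e-12
```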

Example 6.44. Let βn = 0 and γn = t for all n. Thus

J(µt) =
( 0 0 0 …
  t t t … )

Then

mn[µt] = ∑_{Dyck paths(n)} t^{n/2},

and µt = S(0, t). The orthogonal polynomials are the Chebyshev polynomials of the second kind.
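In particular, the even moments of S(0, t) are the Catalan numbers times t^n. A small numerical sketch of this (illustrative function names, assuming the Dyck-path weighting above):

```python
from functools import lru_cache
from math import comb

def semicircle_moment(t, n):
    """m_n of S(0, t) as a weighted Dyck-path sum: all beta_k = 0, gamma_k = t."""
    @lru_cache(maxsize=None)
    def walk(steps, h):
        if steps == 0:
            return 1.0 if h == 0 else 0.0
        if h > steps:            # cannot return to height 0 in time
            return 0.0
        total = walk(steps - 1, h + 1)   # rise, weight 1
        if h > 0:
            total += t * walk(steps - 1, h - 1)   # fall, weight t
        return total
    return walk(n, 0)

catalan = lambda k: comb(2*k, k) // (k + 1)
t = 2.0
for k in range(5):
    assert abs(semicircle_moment(t, 2*k) - catalan(k) * t**k) < 1e-9
    assert semicircle_moment(t, 2*k + 1) == 0.0
```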


Example 6.45. Let βn = βt and γn = γt for all n. Thus

J(µt) =
( βt βt βt …
  γt γt γt … )

Then

mn[µt] = ∑_{Motzkin paths(n)} (γt)^{#falling steps} (βt)^{#flat steps} = ∑_{π∈NC1,2(n)} Rπ[µt],

where r1[µt] = βt, r2[µt] = γt, rn[µt] = 0 for n > 2. Thus µt = S(βt, γt) = S(β, γ)^{⊞t}.

Example 6.46. Let β0 = βt, βn = βt + b for n ≥ 1, and γn = γt, so that

J(µt) =
( βt βt+b βt+b …
  γt γt   γt   … )

Then

mn[µt] = ∑_{labeled Motzkin paths} (γt)^{#falling steps} (βt)^{#singleton flat steps} b^{#middle flat steps} = ∑_{π∈NC(n)} Rπ[µt],

where r1[µt] = βt and rn[µt] = γ b^{n−2} t for n ≥ 2. Thus µt = µ^{⊞t}, where r1[µ] = β, rn[µ] = γ b^{n−2}. Here µ is a shifted, scaled free Poisson distribution:

rn[Dα ν ⊞ δa] = { a + r1[ν], n = 1;  α^n rn[ν], n > 1. }

Definition 6.47. Let β, b, c ∈ ℝ and γ ≥ 0. The free Meixner distributions µt with these parameters are determined by their Jacobi parameters

J(µt) =
( βt βt+b βt+b …
  γt γt+c γt+c … )

Thus either c ≥ 0, or c < 0 and t ≥ −c/γ.

Proposition 6.48. Whenever µs, µt are positive, then µs ⊞ µt = µs+t. In particular, if c ≥ 0, then µt is ⊞-infinitely divisible. If c < 0, µ = µ−c/γ is not ⊞-infinitely divisible, but µ^{⊞t} is positive for all t ≥ 1.

Proof. See Figure 6. Take a Motzkin path with labels βt, βt + b, γt, γt + c. Separate it into Motzkin paths with labels βt on flat steps, b on flat steps at height at least 1, γt on falling steps, c on falling steps at height at least 1. Form a new labeled Motzkin path by

βt-flat ↦ s-flat,  γt-fall ↦ fall, matching rise ↦ rise,  b or c steps ↦ m-flat steps.


This corresponds to a π ∈ NC(n). On each non-singleton class of π, we have a further (b, c)-Motzkin path, starting with a γt-rise. Thus

mn[µt] = ∑_{π∈NC(n)} (βt)^{|Sing(π)|} ∏_{V∈π, |V|≥2} (γt) ∑_{(b,c)-Motzkin paths on |V|−2} w(path)
       = ∑_{π∈NC(n)} (βt)^{|Sing(π)|} ∏_{V∈π, |V|≥2} (γt) m_{|V|−2}(S(b, c)),

and

r1[µt] = βt,  rn[µt] = γt m_{n−2}(S(b, c)) for n ≥ 2.

Free Meixner distributions include the semicircular, free Poisson, arcsine, Kesten-McKay, Bernoulli and free binomial distributions.

Remark 6.49. Let β = b = 0, γ = 1, c = −1. Thus

J(µt) =
( 0 0   0   …
  t t−1 t−1 … )

For t = 1,

J(µ1) =
( 0 0 0 …
  1 0 0 … )

so that

Mµ(z) = 1/(1 − z²)

and

Gµ(z) = (1/z) · 1/(1 − z^{−2}) = z/(z² − 1) = (1/2)(1/(z − 1) + 1/(z + 1)).

So

µ = (1/2)(δ−1 + δ1)

is the symmetric Bernoulli distribution. It is easy to see that µ^{∗t} is defined only for t ∈ ℕ. But µ^{⊞t} is defined for all t ≥ 1! Compare with the Bercovici-Pata bijection.

Exercise 6.50.

a. If

J(µ) =
( β0 β1 β2 …
  γ1 γ2 γ3 … )

show that its Boolean cumulants are B1[µ] = β0 and Bn+2[µ] = γ1 mn[ν], where

J(ν) =
( β1 β2 …
  γ2 γ3 … )

The map µ ↦ ν is the coefficient stripping.

b. Conclude that

J(µ^{⊎t}) =
( β0t β1 β2 …
  γ1t γ2 γ3 … )
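A numerical sketch of part (a) (helper names are ours): moments are computed from Jacobi parameters as weighted Motzkin-path sums (Theorem 6.43), and Boolean cumulants are extracted via the interval-partition recursion m_n = ∑_{k=1}^n B_k m_{n−k}, i.e. M(z) = 1/(1 − ∑ Bn z^n).

```python
from functools import lru_cache

def moments_from_jacobi(beta, gamma, n_max):
    # m_n as a weighted Motzkin-path sum (Theorem 6.43(a))
    @lru_cache(maxsize=None)
    def walk(steps, h):
        if steps == 0:
            return 1.0 if h == 0 else 0.0
        s = walk(steps - 1, h + 1) + beta[h] * walk(steps - 1, h)
        if h > 0:
            s += gamma[h] * walk(steps - 1, h - 1)
        return s
    return [walk(n, 0) for n in range(n_max + 1)]

def boolean_cumulants(m):
    # invert m_n = sum_{k=1}^n B_k m_{n-k}
    B = [0.0] * len(m)
    for n in range(1, len(m)):
        B[n] = m[n] - sum(B[k] * m[n - k] for k in range(1, n))
    return B

beta  = (0.4, -0.3, 0.8, 0.2, 0.0, 0.0)
gamma = (1.0, 1.7, 0.6, 2.1, 1.0, 1.0)   # gamma[0] = 1 conventionally
m = moments_from_jacobi(beta, gamma, 4)
B = boolean_cumulants(m)
m_nu = moments_from_jacobi(beta[1:], (1.0,) + gamma[2:], 2)  # stripped measure
assert abs(B[1] - beta[0]) < 1e-12          # B1 = beta0
assert abs(B[2] - gamma[1]) < 1e-12         # B2 = gamma1 * m_0[nu]
assert abs(B[3] - gamma[1] * m_nu[1]) < 1e-12
assert abs(B[4] - gamma[1] * m_nu[2]) < 1e-12
```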


6.5 q-deformed free probability.

See [BS91], [Ans01].

Definition 6.51. Let q be a number or a symbol. The q-integers are

[n]_q = 1 + q + q² + … + q^{n−1} = (1 − q^n)/(1 − q).

The notion goes back to Euler (≈ 1750), Rogers (1894), Ramanujan (1913).

Example 6.52. [n]_1 = n, while

[n]_0 = { 0, n = 0;  1, n ≥ 1. }

Consider Dyck paths with weight 1 on rising steps, and weight [n]_q = 1 + q + … + q^{n−1} on a falling step at height n.

From general theory, if

mn[µ(x|q)] = ∑_{Dyck paths} w(path),

then µ(x|q) is the orthogonality measure for the polynomials Pn(x|q) which are defined via the recursion

x Pn(x|q) = Pn+1(x|q) + [n]_q Pn−1(x|q).

These are the continuous (Rogers) q-Hermite polynomials. For q = 1, they are the Hermite polynomials. For q = 0, they are the Chebyshev polynomials of the 2nd kind.

There are explicit formulas for µ(x|q) and Pn(x|q); the measure µ(x|q) is the q-Gaussian distribution.
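The weighted-Dyck-path description of the moments interpolates between the semicircle and the Gaussian. A small sketch (illustrative names, assuming the weighting above):

```python
from functools import lru_cache

def q_gaussian_moment(n, q):
    """m_n of the q-Gaussian as a weighted Dyck-path sum:
    rises have weight 1, a fall at height h has weight [h]_q."""
    @lru_cache(maxsize=None)
    def walk(steps, h):
        if steps == 0:
            return 1.0 if h == 0 else 0.0
        if h > steps:
            return 0.0
        total = walk(steps - 1, h + 1)
        if h > 0:
            total += sum(q**i for i in range(h)) * walk(steps - 1, h - 1)
        return total
    return walk(n, 0)

# q = 0 gives the Catalan numbers (semicircle),
# q = 1 gives (2n-1)!! (classical Gaussian):
assert [q_gaussian_moment(2*k, 0) for k in range(1, 4)] == [1.0, 2.0, 5.0]
assert [q_gaussian_moment(2*k, 1) for k in range(1, 4)] == [1.0, 3.0, 15.0]
```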

Equivalently,

mn[µ(x|q)] = ∑_{labeled Dyck paths} w(path),

where each falling step at height n may have one of the labels 1, q, …, q^{n−1}. Using this idea, on Falg(H), define a+(f) = ℓ(f) but

a−(f)(g1 ⊗ … ⊗ gn) = ∑_{i=1}^n q^{i−1} ⟨gi, f⟩ g1 ⊗ … ⊗ ǧi ⊗ … ⊗ gn

(with the factor gi omitted). Note that on the full Fock space F(H), these are not adjoints of each other.

Definition 6.53. On the algebraic full Fock space, define the q-inner product

⟨f1 ⊗ … ⊗ fn, g1 ⊗ … ⊗ gk⟩q = δ_{n=k} ∑_{α∈Sym(n)} q^{inv(α)} ∏_{i=1}^n ⟨fi, gα(i)⟩,

where inv(α) is the number of inversions in the permutation α,

inv(α) = |{(i, j) : 1 ≤ i < j ≤ n, α(i) > α(j)}|.


Theorem 6.54 (Bozejko, Speicher 1991; Zagier 1992). The inner product ⟨·, ·⟩q is positive definite for −1 < q < 1. It is positive semi-definite for q = ±1.

Proposition 6.55. Let Fq(H) be the completion of Falg(H) with respect to ‖·‖q. On this space, (a+(f))∗ = a−(f), and so

X(f) = a+(f) + a−(f)

is symmetric.

Proposition 6.56 (Wick formula). Let f be a unit vector. The distribution of X(f) is the q-Gaussian distribution. Its moments are

⟨X(f)^{2n}Ω, Ω⟩ = ∑_{labeled Dyck paths} w(path) = ∑_{π∈P2(2n)} q^{cr(π)},

where for a general π ∈ P(n), cr(π) is the number of reduced crossings of the partition π,

cr(π) = |{(i < j < k < l) : i ∼π k, j ∼π l, both pairs consecutive in their blocks}|.

Example 6.57. cr((1, 3, 6)(2, 4)(5, 7)) = 3. See Figure 6.

Proof. See Figure 6. To show: a bijection between P2(2n) and labeled Dyck paths with 2n steps. Openers correspond to rising steps, closers to falling steps. Note that for each i, the number of open blocks before i is equal to the height of the path at step i. Thus if on the i'th step we close the j'th open block, that step is labeled by q^{j−1}.

Corollary 6.58.

∑_{α∈Sym(n)} q^{inv(α)} = [n]_q! = [n]_q [n−1]_q … [1]_q.

Proof. For ‖f‖ = 1,

∑_{α∈Sym(n)} q^{inv(α)} = ⟨f^{⊗n}, f^{⊗n}⟩q = ⟨(a−(f))^n f^{⊗n}, Ω⟩q.
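Corollary 6.58 is easy to check by brute force for small n (a minimal sketch; helper names are ours):

```python
from itertools import permutations

def inversions(perm):
    """Number of pairs i < j with perm[i] > perm[j]."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def q_factorial(n, q):
    # [n]_q! = [1]_q [2]_q ... [n]_q, with [k]_q = 1 + q + ... + q^{k-1}
    result = 1.0
    for k in range(1, n + 1):
        result *= sum(q**i for i in range(k))
    return result

q = 0.3
for n in range(1, 6):
    lhs = sum(q**inversions(p) for p in permutations(range(n)))
    assert abs(lhs - q_factorial(n, q)) < 1e-9
```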

Definition 6.59. For a measure µ, its q-cumulants are defined via

mn[µ] = ∑_{π∈P(n)} q^{cr(π)} ∏_{V∈π} r^{(q)}_{|V|}.

Note: r^{(0)}_n = rn and r^{(1)}_n = kn.

Theorem 6.60. If ρ is conditionally positive, there exists a positive measure µ such that

r^{(q)}_{n+2}[µ] = ρ[x^n].

Denote by IDq all measures arising in this way.

Corollary 6.61. If we define µ ⊞q ν by

r^{(q)}_n[µ ⊞q ν] = r^{(q)}_n[µ] + r^{(q)}_n[ν],

then ⊞q is a well-defined operation on IDq.

Question 6.62. Is ⊞q a well-defined operation on D(1)?

Remark 6.63. As in Exercise 6.16, let

Γq(H) = W∗(X(f) : f ∈ HR),

where the X(f) are now the operators defined in Proposition 6.55. It is a spectacular result of Guionnet and Shlyakhtenko (2012), based on previous work of Dabrowski (2010), that Γq(H) ≃ L(F_{dim H}) for sufficiently small q (depending on dim H).


Chapter 7

Free Levy processes.

This chapter was omitted in the course, so what follows is only an outline. See [GSS92].

Definition 7.1. In (A, ϕ), a family of random variables {X(t) : t ≥ 0} is a free Levy process if

a. It has free increments: for all 0 < t1 < t2 < … < tn,

X(t1), X(t2) − X(t1), …, X(tn) − X(tn−1)

are free.

b. It has stationary increments: for s < t,

µ_{X(t)−X(s)} = µ_{X(t−s)}.

Assuming X(0) ∼ δ0 and denoting µ_{X(t)} = µt, it follows that {µt : t ≥ 0} form a ⊞-semigroup.

c. The semigroup of distributions is continuous at t = 0.

Remark 7.2. One can construct such processes as injective limits of free products. More explicitly, start with a ⊞-semigroup {µt : t ≥ 0}. Let ρ = Rµ, so that Rµt = tρ. Let H be the Hilbert space and (λ, ξ, T) the triple from Remark 6.28. Let K = H ⊗ L²[0, ∞). For I ⊂ [0, ∞) a finite half-open interval, denote

λI = λ|I|,  ξI = ξ ⊗ 1I,  TI(ζ ⊗ f) = (Tζ) ⊗ (1I f).

On F(H ⊗ L²[0, ∞)), let

X(I) = ℓ(ξI) + ℓ∗(ξI) + Λ(TI) + λ|I|.

Then whenever I1, I2, …, In are disjoint, X(I1), X(I2), …, X(In) are free, and µ_{X(I)} = µ_{|I|}. So we get {X(t) = X(1_{[0,t)}) : t ≥ 0}.

More generally, one can construct {X(f) : f ∈ L²[0,∞) ∩ L∞[0,∞)}. For the free Brownian motion, one only needs f ∈ L²[0,∞).


Markov property.

See [Bia98].

For free Levy processes, in particular for the free Brownian motion, one has free stochastic calculus, etc. Here we only show that such a process has the Markov property.

Definition 7.3. Let (A, ϕ) be a C∗-probability space, B ⊂ A a C∗-subalgebra. The map ϕ[·|B] : A → B is a conditional expectation if it is self-adjoint and for any a ∈ A and b1, b2 ∈ B,

ϕ[b1 a b2|B] = b1 ϕ[a|B] b2.

It does not always exist. If ϕ is a trace, one can choose a conditional expectation so that moreover ϕ[ϕ[a|B]] = ϕ[a]. Then

ϕ[ϕ[a|B] b] = ϕ[ϕ[ab|B]] = ϕ[ab].

Thus ϕ[a|B] ∈ B and ⟨b, ϕ[a|B]⟩ϕ = ⟨b, a⟩ϕ. For ϕ faithful, a ϕ-preserving conditional expectation is unique. Again, it may not exist.

Definition 7.4. A process {X(t) : t ≥ 0} is a Markov process if

ϕ[f(X(t))|C∗(X(r) : r ≤ s)] ∈ C∗(X(s)).

Theorem 7.5.

a. A free Levy process is a Markov process.

b. For t ≥ s,

ϕ[ 1/(1 − X(t)z + Rµt(z)) | C∗(X(r) : r ≤ s) ] = 1/(1 − X(s)z + Rµs(z)).

That is, t ↦ 1/(1 − X(t)z + Rµt(z)) is a martingale. (One needs to be more precise about what type of object z is.)

There are deep conceptual reasons, discovered by Voiculescu, for the Markov property of a process with freely independent increments.

Free Appell polynomials.

See [Ans04].

Definition 7.6. The linear map

∂xi : C⟨x1, …, xd⟩ → C⟨x1, …, xd⟩ ⊗ C⟨x1, …, xd⟩,

defined by

∂xi(xu(1) xu(2) … xu(n)) = ∑_{j : u(j)=i} xu(1) … xu(j−1) ⊗ xu(j+1) … xu(n),

is the free difference quotient. It is so named because in one variable,

∂x x^n = ∑_{i=0}^{n−1} x^i ⊗ x^{n−i−1} ↔ ∑_{i=0}^{n−1} x^i y^{n−i−1} = (x^n − y^n)/(x − y),

and so

∂x p(x) = (p(x) − p(y))/(x − y).

Definition 7.7. In an n.c. probability space (A, ϕ), define the free Appell polynomials to be multilinear maps A(X1, X2, …, Xn) from A × A × … × A to A such that

a. Each A(X1, X2, …, Xn) is a polynomial in X1, …, Xn.

b. A(∅) = 1 and

∂Xi A(X1, X2, …, Xn) = A(X1, …, Xi−1) ⊗ A(Xi+1, …, Xn).

c. ϕ[A(X1, X2, …, Xn)] = 0 for all n ≥ 1.

For A = C[x], these are ordinary polynomials An(x), such that ∂x An(x) = ∑ Ai(x) ⊗ An−i−1(x) and ∫ An(x) dµ(x) = 0 for n ≥ 1.

Theorem 7.8.

a.

∑_{n=0}^∞ An(x) z^n = 1/(1 − xz + Rµ(z)),

and a similar result holds in the multivariate case.

b.

X0 A(X1, X2, …, Xn) = A(X0, X1, …, Xn) + ∑_{j=1}^n R[X0, X1, …, Xj] A(Xj+1, …, Xn).

c. If A1, …, Ad ⊂ A are free,

X1,1, X1,2, …, X1,u(1) ∈ Av(1), …, Xk,1, Xk,2, …, Xk,u(k) ∈ Av(k)

for v(1) ≠ v(2) ≠ …, then

A(X1,1, …, X1,u(1), X2,1, …, X2,u(2), …, Xk,1, …, Xk,u(k))
= A(X1,1, …, X1,u(1)) A(X2,1, …, X2,u(2)) … A(Xk,1, …, Xk,u(k)).


Proof. For (a), if F(x, z) = ∑_n An(x) z^n, then µ[F] = 1 and

∂x F(x, z) = ∑_{n,i} Ai ⊗ An−1−i z^n = z F ⊗ F.

These conditions uniquely determine F(x, z) = 1/(1 − xz + Rµ(z)). The proof of part (b) uses a similar technique. For part (c), we show that the factored polynomials also satisfy the initial conditions and the recursion from part (b).

Proposition 7.9. In a tracial C∗-probability space (A, ϕ), if B ⊂ A, Y ∈ B, and X is free from B, then

ϕ[ A(X + Y, X + Y, …, X + Y) | B ] = A(Y, Y, …, Y)

(with n arguments on each side).

Proof. ϕ is tracial, so the corresponding conditional expectation exists. For any b ∈ B,

ϕ[A(X + Y, …, X + Y) b] = ϕ[ ∑ A(X or Y, …, X or Y) b ].

Using the preceding theorem, freeness, and the fact that the free Appell polynomials are centered, we continue the expression as

= ∑ ϕ[A(X, …, X) A(Y, …, Y) A(X, …, X) … A(X or Y) b] = ϕ[A(Y, Y, …, Y) b].

The result follows from the second characterization of conditional expectations.

Proof of Theorem 7.5. We are interested in ϕ[P(X(t))|C∗(X(r) : r ≤ s)]. Since they span the space, it suffices to show this for P = An. In that case,

ϕ[An(X(t))|C∗(X(r) : r ≤ s)] = ϕ[An(X(s) + (X(t) − X(s)))|C∗(X(r) : r ≤ s)] = An(X(s))

by the preceding proposition and the free independence of increments.


Chapter 8

Free multiplicative convolution and the R-transform.

See Lectures 12, 14 of [NS06]. In the preceding chapters, we tried to emphasize the similarity and parallels between classical and free probability, through the use of all/non-crossing partitions, symmetric/full Fock space, and infinitely divisible distributions. By contrast, the phenomena described in this and the following chapters are special to free probability.

Remark 8.1. Recall: if a, b are free, µ_{ab} = µa ⊠ µb, the free multiplicative convolution. Note that if {a1, …, an} and {b1, …, bn} are free,

ϕ[a1b1a2b2…anbn] = ∑_{π∈NC(2n)} Rπ[a1, b1, a2, b2, …, an, bn]
= ∑_{σ∈NC(n)} Rσ[a1, a2, …, an] ∑_{τ∈NC(n), σ∪τ∈NC(2n)} Rτ[b1, b2, …, bn]
= ∑_{σ∈NC(n)} Rσ[a1, a2, …, an] ∑_{τ∈NC(n), τ≤K[σ]} Rτ[b1, b2, …, bn]
= ∑_{σ∈NC(n)} Rσ[a1, a2, …, an] M_{K[σ]}[b1, b2, …, bn].

In particular, mn[µ_{ab}] = ∑_{σ∈NC(n)} Rσ[a] M_{K[σ]}[b].

Compound Poisson realization.

Recall that if s is a semicircular variable, then s² has the free Poisson distribution, all of whose free cumulants are 1.


Lemma 8.2. Let a1, …, ak be self-adjoint, orthogonal random variables in an n.c. probability space (A, τ), in the sense that ai aj = 0 for i ≠ j. Let s be a semicircular variable free from {ai : 1 ≤ i ≤ k}. Then

s a1 s, s a2 s, …, s ak s

are freely independent and have free compound Poisson distributions.

See Example 12.19 in [NS06] for the proof in the non-tracial case.

Proof in the tracial case.

M[sau(1)s, sau(2)s, …, sau(n)s] = τ[sau(1)s sau(2)s … sau(n)s] = τ[au(1)s² au(2)s² … au(n)s²]
= ∑_{π∈NC(n)} Mπ[au(1), au(2), …, au(n)] R_{K[π]}[s², s², …, s²]
= ∑_{π∈NC(n)} Mπ[au(1), au(2), …, au(n)].

It follows that

R[sau(1)s, sau(2)s, …, sau(n)s] = M[au(1), au(2), …, au(n)] = τ[au(1)au(2)…au(n)] = 0

unless u(1) = u(2) = … = u(n). Thus sa1s, sa2s, …, saks are free. Moreover,

rn[sais] = R[sais, sais, …, sais] = M[ai, ai, …, ai] = mn[ai],

so sais has a free compound Poisson distribution.

Example 8.3. Let A = C∗(s) ∗ L∞[0, 1], with the free product state τ = ϕ ∗ ∫₀¹ · dx. Then the map

t ↦ s 1_{[0,t)} s

is a free Poisson process for t ∈ [0, 1]. Indeed, for I ∩ J = ∅, 1I 1J = 0, so the process has freely independent increments, and

rk[s 1I s] = mk[1I] = |I|,

so s1Is has distribution π_{|I|}.

Free compression.

Proposition 8.4. Let a1, …, ad be in an n.c. probability space (A, τ). Choose a projection p = p² = p∗ freely independent from them. Denote α = τ[p]. Define a new (compressed) n.c. probability space

(Ã, τ̃) = (pAp, (1/α) τ|_{pAp}).

Then

R^{τ̃}[pau(1)p, pau(2)p, …, pau(n)p] = (1/α) R^τ[α au(1), α au(2), …, α au(n)].


See Theorem 14.10 in [NS06] for the proof in the non-tracial case.

Proof in the tracial case.

M^{τ̃}[pau(1)p, pau(2)p, …, pau(n)p] = (1/α) M^τ[au(1)p, au(2)p, …, au(n)p]
= (1/α) ∑_{π∈NC(n)} R^τ_π[au(1), au(2), …, au(n)] M_{K[π]}[p, p, …, p]
= (1/α) ∑_{π∈NC(n)} R^τ_π[au(1), au(2), …, au(n)] α^{n+1−|π|}
= ∑_{π∈NC(n)} (1/α)^{|π|} R^τ_π[α au(1), α au(2), …, α au(n)].

The conclusion follows.

Corollary 8.5. For arbitrary µ ∈ D(d) and arbitrary t ≥ 1, the expression µ^{⊞t} is well defined, in the sense that there exists µt ∈ D(d) with Rµt = tRµ.

Proof. Choose a1, …, ad and p as in the proposition so that τ_{a1,…,ad} = Dtµ and τ[p] = 1/t. Then in (pAp, τ̃), (pa1p, pa2p, …, padp) has distribution µ^{⊞t}.

Corollary 8.6. The Bercovici-Pata bijection ID∗ → ID⊞ cannot be extended to general measures.

Proof. Such an extension T would in particular have the property that T^{−1}[T[µ]^{⊞t}] = µ^{∗t}. However, taking µ to be the Bernoulli distribution, the right-hand side is defined only for integer t, while the left-hand side is defined for any t ≥ 1.

Exercise 8.7. Let p, q ∈ (A, τ) be freely independent projections in a (for convenience) tracial n.c. probability space, with τ[p] = α, τ[q] = β. Note that the distributions of pq, pqp, and qpq are all the same. Use the results above, and either the R-transforms or an extension of Remark 6.49, to compute the distribution of pq. The computation of the moment generating function or the Cauchy transform is sufficient; you do not need to find the actual measure. See Example 13.5 for the answer.

Outline of the argument: for p as above and any a free from it, in the notation of Proposition 8.4,

µ^{τ̃}_{pap} = (µ^τ_{αa})^{⊞(1/α)}.

On the other hand,

µ_{pap} = µ^τ_{pap} = (1 − α)δ0 + α µ^{τ̃}_{pap}.

Thus

µ_{pap} = (1 − α)δ0 + α (µ_{αa})^{⊞(1/α)}

and

G_{pap}(z) = (1 − α)(1/z) + α G_{(µ_{αa})^{⊞(1/α)}}(z).

Finally,

R_{(µ_{αa})^{⊞(1/α)}}(z) = (1/α) R_{αa}(z) = Ra(αz).

Using these results, we compute Gq, then Rq, then R_{(µ_{αq})^{⊞(1/α)}}, then G_{(µ_{αq})^{⊞(1/α)}}, and finally G_{pqp}.

See Lecture 14 of [NS06] for related results about compression by matrix units, very useful in applications of free probability to the study of operator algebras.


Chapter 9

Belinschi-Nica evolution.

See [BN09].

Remark 9.1. Recall that

µ is ⊞-infinitely divisible ⇔ Rµ = ρ is conditionally positive,

while

µ is ⊎-infinitely divisible ⇔ Bµ = ρ is conditionally positive,

and any µ is ⊎-infinitely divisible. So we can define a map

B : D(d) → ID⊞

via R_{B[µ]} = Bµ. This is the Boolean Bercovici-Pata bijection.

Relation between free and Boolean cumulants.

Recall

M[a1, a2, …, an] = ∑_{π∈NC(n)} Rπ[a1, a2, …, an],

while

M[a1, a2, …, an] = ∑_{π∈Int(n)} Bπ[a1, a2, …, an].

Corollary 9.2. Denote

NC′(n) = {π ∈ NC(n) : 1 ∼π n}.

Then

B[a1, a2, …, an] = ∑_{π∈NC′(n)} Rπ[a1, a2, …, an].


Proof. Figure omitted. Group the blocks of π ∈ NC(n) according to the smallest σ ∈ Int(n) such that π ≤ σ. Then the restriction of π to each block of σ connects the minimum and the maximum of that block.

Example 9.3. See Figure 9.

B[a1, a2, a3, a4] = R[a1, a2, a3, a4] + R[a1, a2, a4]R[a3] + R[a1, a3, a4]R[a2]
+ R[a1, a4]R[a2, a3] + R[a1, a4]R[a2]R[a3].
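The five terms in Example 9.3 correspond to the five partitions in NC′(4); in general |NC′(n)| is the Catalan number C_{n−1}, since deleting the outer pair {1, n} leaves a noncrossing structure on n − 1 "slots". A brute-force sketch of this count (illustrative helper names):

```python
from math import comb

def set_partitions(n):
    """All partitions of {0, ..., n-1} as lists of blocks."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n - 1]] + p[i + 1:]
        yield p + [[n - 1]]

def is_noncrossing(p):
    block = {x: tuple(b) for b in p for x in b}
    n = len(block)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                for l in range(k + 1, n):
                    if (block[i] == block[k] and block[j] == block[l]
                            and block[i] != block[j]):
                        return False
    return True

catalan = lambda k: comb(2 * k, k) // (k + 1)

for n in range(1, 7):
    nc = [p for p in set_partitions(n) if is_noncrossing(p)]
    nc_irr = [p for p in nc if any(0 in b and n - 1 in b for b in p)]
    assert len(nc) == catalan(n)            # |NC(n)| = C_n
    assert len(nc_irr) == catalan(n - 1)    # |NC'(n)| = C_{n-1}
```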

Corollary 9.4.

B_{B[µ]}[xu(1), xu(2), …, xu(n)] = ∑_{π∈NC′(n)} R^{B[µ]}_π[xu(1), xu(2), …, xu(n)] = ∑_{π∈NC′(n)} B^µ_π[xu(1), xu(2), …, xu(n)].

Definition 9.5. On NC(n), define a new partial order ≪:

σ ≪ π ⇔ σ ≤ π and ∀V ∈ π, min(V) ∼σ max(V).

Example 9.6.

NC′(n) = {σ ∈ NC(n) : σ ≪ 1n}.

Note also that if σ ≪ π and V ∈ π, then σ|V ∈ NC′(V). For another example, see Figure 9.

Definition 9.7. Denote by Out(σ) the outer blocks of σ and by Inn(σ) the inner blocks of σ. If σ ∈ NC′(n), it has a unique outer block o(σ).

Proposition 9.8.

a. For π ∈ NC(n),

{σ ∈ NC(n) : σ ≪ π} ≃ {σ ∈ NC(n) : π0 ≤ σ ≤ π} = [π0, π] ⊂ NC(n).

b. For σ ∈ NC(n),

{π ∈ NC(n) : σ ≪ π} ≃ {S ⊂ σ : Out(σ) ⊂ S}.

The isomorphism is implemented by the map

π = (V1, V2, …, Vk) ↦ {o(σ|V1), o(σ|V2), …, o(σ|Vk)}.

In particular, for σ ∈ NC′(n), {π ∈ NC′(n) : σ ≪ π} ≃ {S ⊂ σ : o(σ) ∈ S}.

Proof. See Figure 9 for an illustration. For part (a), let π = (V1, V2, …, Vk) and

π0 = {(min(V1), max(V1)), (min(V2), max(V2)), …, singletons}.

Note π0 ∈ NC1,2(n). Then π0 ≪ π, and σ ≪ π if and only if π0 ≤ σ ≤ π.

For part (b), we need to show that the map is a bijection. We construct the inverse map as follows. Let Out(σ) ⊂ S ⊂ σ. For V ∈ σ, choose U ∈ S with the largest depth such that V ≤ U (see the illustration). Since S contains the outer blocks, U exists. In π, adjoin V to U. We get π ∈ NC(n) with σ ≤ π, σ ≪ π, and each block of π contains a unique element of S.


Corollary 9.9.

∑_{π∈NC(n), σ≪π} t^{|π|−|Out(σ)|} s^{|σ|−|π|} = (t + s)^{|Inn(σ)|}.

In particular, if σ ∈ NC′(n), then

∑_{π∈NC′(n), σ≪π} t^{|π|−1} s^{|σ|−|π|} = (t + s)^{|σ|−1}.

Proof.

∑_{π∈NC(n), σ≪π} t^{|π|−|Out(σ)|} s^{|σ|−|π|} = ∑_{Out(σ)⊂S⊂σ} t^{|S|−|Out(σ)|} s^{|σ|−|S|}
= ∑_{i=0}^{|Inn(σ)|} (|Inn(σ)| choose i) t^i s^{|Inn(σ)|−i} = (t + s)^{|Inn(σ)|}.

Bt transformation.

Definition 9.10. For t ∈ ℝ, define the linear map (the Belinschi-Nica remarkable transformation)

Bt : D(d) → C⟨x1, …, xd⟩∗

by

B_{Bt[µ]}[xu(1), xu(2), …, xu(n)] = ∑_{π∈NC′(n)} t^{|π|−1} B^µ_π[xu(1), xu(2), …, xu(n)].

Remark 9.11. B1 = B, while B0 is the identity transformation.

Proposition 9.12. For t ≥ 0,

Bt[µ] = (µ^{⊞(1+t)})^{⊎ 1/(1+t)}.

In particular, the Boolean Bercovici-Pata bijection is given by the formula B[µ] = (µ^{⊞2})^{⊎ 1/2}.

Corollary 9.13. For t ≥ 0, Bt maps D(d) to itself, and so preserves positivity.

Proof of the Proposition.

B_{Bt[µ]^{⊎(1+t)}} = (1 + t) B_{Bt[µ]}  (by definition of ⊎)
= (1 + t) ∑_{π∈NC′(n)} t^{|π|−1} B^µ_π  (by definition of Bt)
= (1 + t) ∑_{π∈NC′(n)} t^{|π|−1} ∏_{Vi∈π} ∑_{σi∈NC′(Vi)} R^µ_{σi}  (by Corollary 9.2).

Let

σ = σ1 ∪ σ2 ∪ … ∪ σ_{|π|}.

Then σ ≤ π, σ ≪ π, and σ ∈ NC′(n). So we can continue the expression above as

= (1 + t) ∑_{σ∈NC′(n)} ∑_{σ≪π} t^{|π|−1} R^µ_σ
= (1 + t) ∑_{σ∈NC′(n)} (1 + t)^{|σ|−1} R^µ_σ  (by Corollary 9.9, with s = 1)
= ∑_{σ∈NC′(n)} (1 + t)^{|σ|} R^µ_σ
= ∑_{σ∈NC′(n)} R^{µ^{⊞(1+t)}}_σ  (by definition of ⊞)
= B_{µ^{⊞(1+t)}}  (by Corollary 9.2).

We conclude that

Bt[µ]^{⊎(1+t)} = µ^{⊞(1+t)}.

Proposition 9.14. {Bt : t ∈ ℝ} form a group,

Bt ∘ Bs = Bt+s,

with the identity element B0 and Bt^{−1} = B−t. In particular, {Bt : t ≥ 0} form a semigroup.

Remark 9.15. It follows that

(((µ^{⊞(1+s)})^{⊎ 1/(1+s)})^{⊞(1+t)})^{⊎ 1/(1+t)} = (µ^{⊞(1+t+s)})^{⊎ 1/(1+t+s)},

so there is a commutation relation between the Boolean and free convolution powers.

Proof of the proposition.

B_{(Bt∘Bs)[µ]} = ∑_{π∈NC′(n)} t^{|π|−1} B^{Bs[µ]}_π  (by definition of Bt)
= ∑_{π∈NC′(n)} t^{|π|−1} ∏_{Vi∈π} ∑_{σi∈NC′(Vi)} s^{|σi|−1} B^µ_{σi}  (by definition of Bs)
= ∑_{σ∈NC′(n)} ∑_{σ≪π} t^{|π|−1} s^{|σ|−|π|} B^µ_σ  (by definition of ≪)
= ∑_{σ∈NC′(n)} (t + s)^{|σ|−1} B^µ_σ  (by Corollary 9.9)
= B_{Bt+s[µ]}  (by definition of Bt+s).
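The commutation relation of Remark 9.15 can be tested numerically on truncated moment sequences. The sketch below (helper names are ours, not from the notes) implements ⊞- and ⊎-convolution powers in one variable via the moment-cumulant recursions, and checks the identity for the symmetric Bernoulli distribution.

```python
def series_mul(a, b, N):
    """Product of truncated power series (index = power of z)."""
    c = [0.0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def free_cumulants(m):
    # invert m_n = sum_{k=1}^n r_k * [z^{n-k}] M(z)^k
    N = len(m) - 1
    r = [0.0] * (N + 1)
    for n in range(1, N + 1):
        Mk, acc = [1.0] + [0.0] * N, 0.0
        for k in range(1, n):
            Mk = series_mul(Mk, m, N)
            acc += r[k] * Mk[n - k]
        r[n] = m[n] - acc
    return r

def moments_from_free_cumulants(r):
    N = len(r) - 1
    m = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        Mk, acc = [1.0] + [0.0] * N, 0.0
        for k in range(1, n + 1):
            Mk = series_mul(Mk, m, N)
            acc += r[k] * Mk[n - k]
        m[n] = acc
    return m

def boolean_cumulants(m):
    # invert m_n = sum_{k=1}^n B_k m_{n-k}
    B = [0.0] * len(m)
    for n in range(1, len(m)):
        B[n] = m[n] - sum(B[k] * m[n - k] for k in range(1, n))
    return B

def moments_from_boolean_cumulants(B):
    m = [1.0] + [0.0] * (len(B) - 1)
    for n in range(1, len(B)):
        m[n] = sum(B[k] * m[n - k] for k in range(1, n + 1))
    return m

def boxplus(m, t):   # moments of mu^{boxplus t}: scale free cumulants by t
    return moments_from_free_cumulants([t * x for x in free_cumulants(m)])

def uplus(m, t):     # moments of mu^{uplus t}: scale Boolean cumulants by t
    return moments_from_boolean_cumulants([t * x for x in boolean_cumulants(m)])

m = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # symmetric Bernoulli, up to order 6
s, t = 0.5, 0.7
lhs = uplus(boxplus(uplus(boxplus(m, 1 + s), 1 / (1 + s)), 1 + t), 1 / (1 + t))
rhs = uplus(boxplus(m, 1 + s + t), 1 / (1 + s + t))
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-9
```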


Exercise 9.16. Prove that we could also have defined the same Bt in terms of the free cumulants, via

R_{Bt[µ]}[xu(1), xu(2), …, xu(n)] = ∑_{π∈NC′(n)} t^{|π|−1} R^µ_π[xu(1), xu(2), …, xu(n)].

(Combine Remark 9.1, Definition 9.10, Remark 9.11, and Proposition 9.14.)

Φ transformation.

Definition 9.17. Define the linear map

Φ : D(d) → C⟨x1, …, xd⟩∗

by

B_{Φ[µ]}(z1, z2, …, zd) = ∑_{i=1}^d zi Mµ(z1, z2, …, zd) zi.

Equivalently, B_{Φ[µ]}[xi] = 0, and

B_{Φ[µ]}[xu(1), xu(2), …, xu(n)] = δ_{u(1)=u(n)} Mµ[xu(2), …, xu(n−1)].

Remark 9.18. Recall that in one variable,
\[
M^{\mu}(z) = \cfrac{1}{1 - \beta_0 z - \cfrac{\gamma_0 z^2}{1 - \beta_1 z - \cfrac{\gamma_1 z^2}{1 - \beta_2 z - \cfrac{\gamma_2 z^2}{1 - \ldots}}}}
\]
Thus
\[
B^{\Phi[\mu]}(z) = \cfrac{z^2}{1 - \beta_0 z - \cfrac{\gamma_0 z^2}{1 - \beta_1 z - \cfrac{\gamma_1 z^2}{1 - \beta_2 z - \cfrac{\gamma_2 z^2}{1 - \ldots}}}}
\]
and so
\[
M^{\Phi[\mu]}(z) = \frac{1}{1 - B^{\Phi[\mu]}(z)} = \cfrac{1}{1 - \cfrac{z^2}{1 - \beta_0 z - \cfrac{\gamma_0 z^2}{1 - \beta_1 z - \cfrac{\gamma_1 z^2}{1 - \beta_2 z - \ldots}}}}
\]
In other words,
\[
J(\Phi[\mu]) = \begin{pmatrix} 0, & \beta_0, & \beta_1, & \beta_2, & \ldots \\ 1, & \gamma_0, & \gamma_1, & \gamma_2, & \ldots \end{pmatrix}
\]
It follows that in this case, the coefficient stripping operation of Exercise 6.50 is the left inverse of $\Phi$.
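The continued-fraction relation above is easy to test numerically. The sketch below (helper names `series_inv`, `moments_from_jacobi` are my own, not from the text) computes moments from Jacobi parameters by truncating the continued fraction, and checks that prepending the column $(0; 1)$ to the parameters has exactly the effect of the substitution $M(z) \mapsto 1/(1 - z^2 M(z))$.

```python
# Numerical sketch of Remark 9.18 (helper names ad hoc): moments from
# Jacobi parameters via the truncated continued fraction, and the effect
# of prepending (0; 1) to (beta; gamma).

def series_inv(a, n):
    # inverse of a power series with a[0] == 1, modulo z^n
    inv = [0] * n
    inv[0] = 1
    for k in range(1, n):
        inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1))
    return inv

def moments_from_jacobi(beta, gamma, n):
    # M(z) = 1/(1 - b0 z - g0 z^2/(1 - b1 z - g1 z^2/(...))), mod z^n;
    # len(beta) = len(gamma) >= n makes the truncation exact to order n
    M = [1] + [0] * (n - 1)
    for b, g in reversed(list(zip(beta, gamma))):
        denom = [1, -b] + [0] * (n - 2)
        for k in range(2, n):
            denom[k] -= g * M[k - 2]
        M = series_inv(denom, n)
    return M

# Semicircle: all beta = 0, gamma = 1; even moments are Catalan numbers.
assert moments_from_jacobi([0] * 12, [1] * 12, 9) == [1, 0, 1, 0, 2, 0, 5, 0, 14]
```

With arbitrary parameters, `moments_from_jacobi([0] + beta, [1] + gamma, n)` agrees with inverting $1 - z^2 M^{\mu}(z)$ directly, which is the Jacobi-parameter display above.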

Exercise 9.19. Prove, from the definition of $\Phi$, that for $\mu \in \mathcal{D}(d)$, the functional $B^{\Phi[\mu]}$ is conditionally positive. Conclude that $\Phi$ preserves positivity.

Evolution equation.

Theorem 9.20. For any $\rho \in \mathcal{D}(d)$ and any $t \ge 0$,
\[
B_t[\Phi[\rho]] = \Phi[\rho \boxplus \gamma_t],
\]
where $\gamma_t = S(0, t)$ for $d = 1$, and in general is the distribution of a free semicircular system with covariance $tI$.

Proof. To keep the notation short, we will denote
\[
R_U = R[x_{u(j)} : j \in U].
\]
\[
B^{B_t[\Phi[\rho]]} = \sum_{\pi \in NC(n)} t^{|\pi|-1} B^{\Phi[\rho]}_{\pi}[x_{u(1)}, \ldots, x_{u(n)}] \qquad \text{(by definition of $B_t$)}
\]
\[
= \sum_{\pi \in NC(n)} t^{|\pi|-1} \prod_{V_i \in \pi} \delta_{u(\min V_i) = u(\max V_i)} M^{\rho}_{V_i \setminus \{\min V_i, \max V_i\}} \qquad \text{(by definition of $\Phi$)}
\]
\[
= \sum_{\pi \in NC(n)} t^{|\pi|-1} \prod_{V_i \in \pi} R^{\gamma_1}_{\{\min V_i, \max V_i\}} M^{\rho}_{V_i \setminus \{\min V_i, \max V_i\}}
\]
\[
= \sum_{\pi \in NC(n)} t^{|\pi|-1} \prod_{V_i \in \pi} \sum_{\sigma_i \in NC(V_i)} R^{\gamma_1}_{o(\sigma_i)} \prod_{U \in \mathrm{Inn}(\sigma_i)} R^{\rho}_U
\]
(by the moment-cumulant formula and because
\[
R^{\gamma_1}_{o(\sigma_i)} = \begin{cases} \delta_{u(\min V_i) = u(\max V_i)}, & |o(\sigma_i)| = 2, \\ 0, & \text{otherwise)} \end{cases}
\]

\[
= \sum_{\pi \in NC(n)} t^{-1} \prod_{V_i \in \pi} \sum_{\sigma_i \in NC(V_i)} R^{\gamma_t}_{o(\sigma_i)} \prod_{U \in \mathrm{Inn}(\sigma_i)} R^{\rho}_U \qquad \text{(because $R^{\gamma_t} = t R^{\gamma_1}$)}
\]
\[
= \sum_{\sigma \in NC(n)} \sum_{\sigma \ll \pi} t^{-1} \prod_{V_i \in \pi} R^{\gamma_t}_{o(\sigma|_{V_i})} \prod_{U \in \mathrm{Inn}(\sigma|_{V_i})} R^{\rho}_U \qquad \text{(by definition of $\ll$)}
\]
\[
= \sum_{\sigma \in NC(n)} \sum_{o(\sigma) \in S \subset \sigma} t^{-1} \prod_{U \in S} R^{\gamma_t}_U \prod_{U \notin S} R^{\rho}_U \qquad \text{(by Proposition 9.8)}
\]
\[
= \sum_{\sigma \in NC(n)} t^{-1} R^{\gamma_t}_{o(\sigma)} \prod_{U \in \mathrm{Inn}(\sigma)} \left( R^{\gamma_t}_U + R^{\rho}_U \right)
= \sum_{\sigma \in NC(n)} t^{-1} R^{\gamma_t}_{o(\sigma)} \prod_{U \in \mathrm{Inn}(\sigma)} R^{\rho \boxplus \gamma_t}_U \qquad \text{(by definition of $\boxplus$)}
\]
\[
= \sum_{\sigma \in NC(n)} R^{\gamma_1}_{o(\sigma)} \prod_{U \in \mathrm{Inn}(\sigma)} R^{\rho \boxplus \gamma_t}_U
= \delta_{u(1)=u(n)} M^{\rho \boxplus \gamma_t}[x_{u(2)}, \ldots, x_{u(n-1)}] = B^{\Phi[\rho \boxplus \gamma_t]}.
\]

Remark 9.21 (A 2012). Using
\[
B_t[\rho] = (B_{t-1} \circ B)[\rho] = \left( B[\rho]^{\boxplus t} \right)^{\uplus \frac{1}{t}},
\]
the identity $B_t[\Phi[\rho]] = \Phi[\rho \boxplus \gamma_t]$ implies
\[
B[\Phi[\rho]]^{\boxplus t} = \Phi[\rho \boxplus \gamma_t]^{\uplus t}.
\]
Denote $\mu = B[\Phi[\rho]]$. Then $\mu$ is $\boxplus$-infinitely divisible, and
\[
\mu_t = \mu^{\boxplus t} = \Phi[\rho \boxplus \gamma_t]^{\uplus t}.
\]
What is $\rho$? In the single-variable case,
\[
R^{B[\Phi[\rho]]}_n = B^{\Phi[\rho]}_n = M^{\rho}_{n-2}.
\]
Thus $\rho$ is the generator of the $\boxplus$-semigroup $\{\mu_t : t \ge 0\}$. This calculation shows that any free convolution semigroup (with finite variance), once stripped, gives a semicircular evolution.

Chapter 10

∗-distribution of a non-self-adjoint element

See Lectures 1, 2, 11 of [NS06].

Definition 10.1. For a ∈ (A, ϕ), its ∗-distribution is the joint distribution of (a, a∗), a state on C〈x, x∗〉.

Notation 10.2. Let
\[
\vec{\varepsilon} = (\varepsilon(1), \ldots, \varepsilon(n)), \qquad \varepsilon(i) \in \{*, \cdot\}.
\]
Denote
\[
P_2(\vec{\varepsilon}) = \{ \pi \in P_2(n) : \text{each block of } \pi \text{ connects a } * \text{ and a } \cdot \}
\]
and $NC_2(\vec{\varepsilon}) = P_2(\vec{\varepsilon}) \cap NC_2(n)$. See Figure 10.

Example 10.3. Let $u$ be a Haar unitary, so that
\[
\varphi[u^n] = \delta_{n=0}, \qquad n \in \mathbb{Z}.
\]
Write
\[
u^{\vec{\varepsilon}} = u^{\varepsilon(1)} u^{\varepsilon(2)} \ldots u^{\varepsilon(n)},
\]
so that each term is either $u^*$ or $u$. Then the $*$-distribution of $u$ is
\[
\mu[x^{\vec{\varepsilon}}] = \varphi[u^{\vec{\varepsilon}}] = \varphi\left[ u^{|\{i : \varepsilon(i) = \cdot\}| - |\{i : \varepsilon(i) = *\}|} \right] = \begin{cases} 1, & |\{i : \varepsilon(i) = \cdot\}| = |\{i : \varepsilon(i) = *\}|, \\ 0, & \text{otherwise.} \end{cases}
\]
More pictorially, we can identify $\vec{\varepsilon}$ with a lattice path, with $*$ corresponding to a rising step $(1,1)$, and $\cdot$ to a falling step $(1,-1)$. See Figure 10. Note that the paths start at height zero, but are not necessarily Dyck paths. In this language,
\[
\mu[x^{\vec{\varepsilon}}] = \begin{cases} 1, & \text{path ends at height } 0, \\ 0, & \text{otherwise.} \end{cases}
\]
What are their free cumulants? See Lecture 15 of [NS06].


Example 10.4. Let $\ell$ be a free creation operator, $\ell^* \ell = 1$. The $*$-distribution of $\ell$ is
\[
\mu_\ell[x^{\vec{\varepsilon}}] = \varphi[\ell^{\vec{\varepsilon}}] = \begin{cases} 1, & \text{can reduce the word } \ell^{\vec{\varepsilon}} \text{ to } 1, \\ 0, & \text{otherwise} \end{cases} = \begin{cases} 1, & \vec{\varepsilon} \text{ corresponds to a Dyck path,} \\ 0, & \text{otherwise.} \end{cases}
\]
See Figure 10. It follows that the joint free cumulants of $\ell, \ell^*$ are $R[\ell^*, \ell] = 1$, with the rest all equal to zero.
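The Dyck-path criterion can be tested in finite dimensions: on a truncated Fock space, $\ell$ acts as the shift $e_k \mapsto e_{k+1}$ and $\ell^*$ as the backward shift (so $\ell^* \ell = 1$ and $\ell^* e_0 = 0$). A small sketch, with my own naming conventions:

```python
# Finite-dimensional sketch of Example 10.4: l is the shift e_k -> e_{k+1},
# l* the backward shift (l* e_0 = 0), and phi = <e_0, . e_0>.  The word is
# applied right-to-left, as operators act.
from itertools import product

def apply_l(v):        # creation: shift up (top component falls off)
    return [0.0] + v[:-1]

def apply_lstar(v):    # annihilation: shift down, kills e_0
    return v[1:] + [0.0]

def phi(eps):
    # vacuum expectation of l^{eps(1)} ... l^{eps(n)}, eps(i) in {'*', '.'}
    v = [0.0] * (len(eps) + 1)
    v[0] = 1.0
    for s in reversed(eps):
        v = apply_l(v) if s == '.' else apply_lstar(v)
    return v[0]

def is_dyck(eps):
    # '*' is a rising step, '.' a falling step; Dyck: stays >= 0, ends at 0
    h = 0
    for s in eps:
        h += 1 if s == '*' else -1
        if h < 0:
            return False
    return h == 0

# phi[l^eps] is 1 exactly on Dyck paths, 0 otherwise
assert all(phi(list(eps)) == (1.0 if is_dyck(eps) else 0.0)
           for n in range(1, 7) for eps in product('*.', repeat=n))
```

The truncation to dimension $n+1$ is harmless, since a word of length $n$ never reaches height above $n$.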

Definition 10.5. An operator $c$ is a standard circular element if $c = \frac{1}{\sqrt{2}}(s_1 + i s_2)$, where $s_1, s_2$ are free standard semicircular.

Example 10.6. Let $c$ be a standard circular element. What is its $*$-distribution? Clearly its free cumulants are
\[
R[c, c^*] = R[c^*, c] = \frac{1}{2} \varphi\left[ (s_1 + i s_2)(s_1 - i s_2) \right] = 1,
\]
with the rest all equal to zero. What are the $*$-moments? Using Notation 10.2,
\[
\mu_c[x^{\vec{\varepsilon}}] = |NC_2(\vec{\varepsilon})| = ?
\]

Proposition 10.7. $c^* c$ has a free Poisson distribution.

Note: this is not obvious, as
\[
c^* c = \frac{1}{2}(s_1^2 + s_2^2) + \frac{i}{2}(s_1 s_2 - s_2 s_1).
\]
Proof. To show:
\[
\varphi[(c^* c)^n] = \varphi[s^{2n}],
\]
where $s$ is standard semicircular. Indeed, using the non-crossing property (see Figure 10),
\[
\varphi[(c^* c)^n] = |NC_2(*, \cdot, *, \cdot, \ldots, *, \cdot)| = |NC_2(2n)| = \varphi[s^{2n}].
\]
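The counts here can be verified by brute force. The sketch below (ad hoc helpers) enumerates pairings of the alternating pattern $(*, \cdot, \ldots, *, \cdot)$ that are noncrossing and connect a $*$ with a $\cdot$, recovering the Catalan numbers; for the pattern of Remark 10.8 with $p = 2$ it recovers the first Fuss-Catalan numbers.

```python
# Brute-force check that |NC_2(*, ., *, ., ..., *, .)| is a Catalan number
# (and, for the pattern of Remark 10.8 with p = 2, a Fuss-Catalan number).

def pairings(elems):
    if not elems:
        yield []
        return
    a = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i + 1:]
        for p in pairings(rest):
            yield [(a, elems[i])] + p

def is_noncrossing(p):
    return not any(a < c < b < d for (a, b) in p for (c, d) in p)

def count_nc2(eps):
    # pairings that are noncrossing and whose blocks connect a * and a .
    return sum(1 for p in pairings(list(range(len(eps))))
               if is_noncrossing(p) and all(eps[a] != eps[b] for a, b in p))

catalan = [1, 1, 2, 5, 14, 42]
assert [count_nc2(['*', '.'] * n) for n in range(1, 5)] == catalan[1:5]
# Remark 10.8 with p = 2: 1, 3 are the first Fuss-Catalan numbers
assert [count_nc2(['*', '*', '.', '.'] * n) for n in (1, 2)] == [1, 3]
```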

Remark 10.8. In fact (Oravecz 2001, Kemp, Speicher 2007, Schumacher, Yan 2012),
\[
\varphi\left[ ((c^*)^p c^p)^n \right] = \left| NC_2(\underbrace{*, \ldots, *}_{p}, \underbrace{\cdot, \ldots, \cdot}_{p}, \ldots, \underbrace{*, \ldots, *}_{p}, \underbrace{\cdot, \ldots, \cdot}_{p}) \right| = \text{Fuss-Catalan number}
\]
from Proposition 5.12. What about more general $\vec{\varepsilon}$?

Exercise 10.9. Let $c_1, \ldots, c_p$ be freely independent circular elements. Use the idea from Proposition 10.7 and the preceding remark to compute
\[
\varphi\left[ \left( c_p^* \ldots c_2^* c_1^* c_1 c_2 \ldots c_p \right)^n \right].
\]


Example 10.10. Let $f_1 \perp f_2$ be unit vectors and $X(f_1), X(f_2)$ corresponding $q$-Gaussian elements from Proposition 6.55. Then
\[
Y = \frac{1}{\sqrt{2}} \left( X(f_1) + i X(f_2) \right)
\]
is a $q$-circular element. Using the $q$-Wick formula (Proposition 6.56),
\[
\varphi[Y^{\vec{\varepsilon}}] = \sum_{\pi \in P_2(\vec{\varepsilon})} q^{\mathrm{cr}(\pi)}.
\]
Very little is known about these elements.

Question 10.11. Compute
\[
\|Y\|^2 = \|Y^* Y\| = \lim_{n \to \infty} \varphi[(Y^* Y)^n]^{1/n} = \lim_{n \to \infty} \left( \sum_{\pi \in P_2(*, \cdot, *, \cdot, \ldots, *, \cdot)} q^{\mathrm{cr}(\pi)} \right)^{1/n}.
\]

Chapter 11

Combinatorics of random matrices.

See Lectures 22, 23 of [NS06], Chapter 7 of [Gui09].

11.1 Gaussian random matrices.

Remark 11.1 (Random matrices). Recall from Example 1.9: let $(\Omega, P)$ be a probability space, and
\[
L^{\infty-}(\Omega, P) = \bigcap_{p \ge 1} L^p(\Omega, P)
\]
the algebra of random variables all of whose moments are finite. On this algebra we have the expectation functional
\[
\mathbb{E}[f] = \int_\Omega f \, dP = \int_{\mathbb{R}} x \, d\mu_f(x),
\]
where $\mu_f$ is the distribution of $f$. For each $N$, we have the algebra of $N \times N$ random matrices
\[
M_N(\mathbb{C}) \otimes L^{\infty-}(\Omega, P) \simeq M_N(L^{\infty-}(\Omega, P)).
\]
We can identify $A \in M_N(L^{\infty-}(\Omega, P))$ with an $N^2$-tuple of random variables, so it has an $N^2$-dimensional distribution. But also: for
\[
\mathrm{tr}[A] = \frac{1}{N} \mathrm{Tr}[A] = \frac{1}{N} \sum_{i=1}^N A_{ii}
\]
the normalized trace (note: itself a random variable),
\[
(M_N(\mathbb{C}) \otimes L^{\infty-}(\Omega, P), \mathbb{E} \otimes \mathrm{tr}) = (M_N(L^{\infty-}(\Omega, P)), \mathbb{E} \circ \mathrm{tr})
\]
is an n.c. probability space, so $A$ also has a 1-dimensional distribution as an element of this space.


Definition 11.2 (Gaussian random variables). $X$ has the Gaussian (or normal) distribution $\mathcal{N}(0, t)$ with mean zero and variance $t$ if
\[
d\mu_X(x) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/2t} \, dx.
\]
$(X_1, \ldots, X_d)$ are jointly Gaussian with mean $0$ and (positive definite) covariance matrix $Q = (Q_{ij})_{i,j=1}^d$ if
\[
d\mu_{X_1, X_2, \ldots, X_d}(x_1, \ldots, x_d) = \frac{1}{Z} \exp\left( -\sum_{i,j=1}^d x_i (Q^{-1})_{ij} x_j / 2 \right) dx_1 \ldots dx_d.
\]

Remark 11.3. $Z$ is the normalization constant, in this case
\[
Z = \int_{\mathbb{R}^d} \exp\left( -\sum_{i,j=1}^d x_i (Q^{-1})_{ij} x_j / 2 \right) dx_1 \ldots dx_d.
\]
We will use the same notation for other normalization constants.

In particular, $X_1, \ldots, X_d$ are independent Gaussian if $Q_{ij} = 0$ for all $i \ne j$ (in the jointly Gaussian case, uncorrelated implies independent).

Exercise 11.4. Show that the (classical) cumulant generating function of a jointly Gaussian distribution is
\[
\ell(z_1, \ldots, z_d) = \frac{1}{2} \sum_{i,j=1}^d z_i Q_{ij} z_j.
\]
Hint: denote $x = (x_1, \ldots, x_d)^t$, and the same for $y, z$. Re-write the jointly Gaussian density as
\[
d\mu(x) = \frac{1}{Z} e^{-x^t Q^{-1} x / 2} \, dx.
\]
Recall from Remark 5.7 that
\[
\ell_\mu(z) = \log \int_{\mathbb{R}^d} e^{x^t z} \, d\mu(x)
\]
(why?). Now use the change of variables $x = y + Qz$.

Corollary 11.5 (Wick formula). If $X_1, \ldots, X_d$ are jointly Gaussian,
\[
\mathbb{E}\left[ X_{u(1)} X_{u(2)} \ldots X_{u(n)} \right] = \sum_{\pi \in P_2(n)} \prod_{(i,j) \in \pi} \mathbb{E}\left[ X_{u(i)} X_{u(j)} \right],
\]
where $P_2(n)$ are the pair partitions. Equivalently, their classical cumulants are
\[
k[X_{u(1)}, \ldots, X_{u(n)}] = \begin{cases} 0, & n \ne 2, \\ \mathbb{E}[X_{u(1)} X_{u(2)}], & n = 2. \end{cases}
\]

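The Wick formula is easy to implement directly. The sketch below (helper names mine) sums over pair partitions given a covariance matrix, recovering $\mathbb{E}[X^4] = 3$ and $\mathbb{E}[X^6] = 15$ for a standard Gaussian, and $\mathbb{E}[X_1 X_1 X_2 X_2] = 1 + 2\rho^2$ for covariance $Q = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$.

```python
# Direct implementation of the Wick formula for centered jointly Gaussian
# variables: sum over pair partitions of products of covariances.

def pairings(elems):
    if not elems:
        yield []
        return
    a = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i + 1:]
        for p in pairings(rest):
            yield [(a, elems[i])] + p

def wick(u, cov):
    # E[X_{u(1)} ... X_{u(n)}]; u is a tuple of indices into cov
    total = 0.0
    for p in pairings(list(range(len(u)))):
        term = 1.0
        for a, b in p:
            term *= cov[u[a]][u[b]]
        total += term
    return total

assert wick((0, 0, 0, 0), [[1.0]]) == 3.0        # E[X^4] = 3 for N(0,1)
assert wick((0,) * 6, [[1.0]]) == 15.0           # E[X^6] = 5!! = 15
rho = 0.3
Q = [[1.0, rho], [rho, 1.0]]
assert abs(wick((0, 0, 1, 1), Q) - (1 + 2 * rho ** 2)) < 1e-12
```

For odd $n$ there are no pairings, so the sum is empty and the odd moments vanish, as they should for a centered Gaussian.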

Remark 11.6 (Complex Gaussians). A complex-valued random variable $Y$ is complex Gaussian if $\Re Y, \Im Y$ are independent (centered) Gaussian. We will only consider $\Re Y \sim \mathcal{N}(0, a)$, $\Im Y \sim \mathcal{N}(0, a)$, with the same variance. Note that
\[
\mathbb{E}[Y] = 0
\]
and
\[
\mathbb{E}[Y^2] = \mathbb{E}\left[ (\Re Y)^2 - (\Im Y)^2 + 2i (\Re Y)(\Im Y) \right] = a - a = 0.
\]
However,
\[
\mathbb{E}[Y \bar{Y}] = \mathbb{E}\left[ (\Re Y)^2 + (\Im Y)^2 \right] = 2a.
\]
Moreover, if $Y_1, \ldots, Y_d$ are independent complex Gaussian (i.e. their real and imaginary parts are independent), by multi-linearity of classical cumulants all of their joint cumulants are zero except $k[Y_i, Y_i^*] = k[Y_i^*, Y_i] = 2a_i$.
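A quick seeded Monte Carlo sketch of these two identities (the parameter choice $a = 1/2$ is mine):

```python
# Seeded Monte Carlo check of E[Y^2] = 0 and E[Y Ybar] = 2a for a complex
# Gaussian with Re Y, Im Y ~ N(0, a) independent; here a = 1/2.
import random

rng = random.Random(1)
a = 0.5
n = 200_000
s2, sabs = 0j, 0.0
for _ in range(n):
    y = complex(rng.gauss(0, a ** 0.5), rng.gauss(0, a ** 0.5))
    s2 += y * y
    sabs += abs(y) ** 2

assert abs(s2 / n) < 0.02            # E[Y^2] = 0
assert abs(sabs / n - 2 * a) < 0.02  # E[Y Ybar] = 2a = 1
```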

Corollary 11.7 (Wick formula). Using Notation 10.2, for independent complex Gaussian $Y_1, \ldots, Y_d$,
\[
\mathbb{E}\left[ Y_{u(1)}^{\varepsilon(1)} \ldots Y_{u(n)}^{\varepsilon(n)} \right] = \sum_{\pi \in P_2(\vec{\varepsilon})} \prod_{(i,j) \in \pi} \mathbb{E}\left[ Y_{u(i)}^{\varepsilon(i)} Y_{u(j)}^{\varepsilon(j)} \right].
\]

Remark 11.8. The distribution of $Y$ as a $\mathbb{C} \simeq \mathbb{R}^2$-valued random variable is
\[
\frac{1}{Z} e^{-(x^2 + y^2)/2a} \, dx \, dy.
\]

Definition 11.9. A random $N \times N$ matrix $X$ is a GUE matrix (Gaussian unitary ensemble) if

a. $X$ is self-adjoint, $X_{ij} = \overline{X_{ji}}$, each $X_{ii}$ real.

b. $X_{ii}, \Re X_{ij}, \Im X_{ij}$ for $1 \le i \le N$, $i < j$ are independent Gaussian.

c. $X_{ii} \sim \mathcal{N}(0, \frac{1}{N})$, $\Re X_{ij}, \Im X_{ij} \sim \mathcal{N}(0, \frac{1}{2N})$. In other words,
\[
\mathbb{E}[X_{ij} X_{kl}] = \frac{1}{N} \delta_{i=l} \delta_{j=k}.
\]
Note that
\[
\mathbb{E} \, \mathrm{tr}[X^2] = \frac{1}{N} \sum_{i,j=1}^N \mathbb{E}[X_{ij} X_{ji}] = 1.
\]
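The definition translates directly into a sampler; the sketch below (pure Python, seeded, my own helper names) builds a GUE matrix and checks $\mathbb{E}\,\mathrm{tr}[X^2] = 1$ by Monte Carlo.

```python
# Sample a GUE matrix exactly as in Definition 11.9 and check
# E tr[X^2] = 1 by seeded Monte Carlo.
import random

def sample_gue(N, rng):
    X = [[0j] * N for _ in range(N)]
    for i in range(N):
        X[i][i] = complex(rng.gauss(0, (1 / N) ** 0.5), 0)
        for j in range(i + 1, N):
            re = rng.gauss(0, (1 / (2 * N)) ** 0.5)
            im = rng.gauss(0, (1 / (2 * N)) ** 0.5)
            X[i][j] = complex(re, im)
            X[j][i] = complex(re, -im)
    return X

def tr_x2(X):
    # tr[X^2] = (1/N) sum_{i,j} X_ij X_ji, real since X is self-adjoint
    N = len(X)
    return sum(X[i][j] * X[j][i] for i in range(N) for j in range(N)).real / N

rng = random.Random(0)
N, samples = 4, 20_000
est = sum(tr_x2(sample_gue(N, rng)) for _ in range(samples)) / samples
assert abs(est - 1.0) < 0.03   # E tr[X^2] = 1 exactly, for every N
```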

Remark 11.10. We can identify the space of self-adjoint $N \times N$ matrices with
\[
(x_{ii} : 1 \le i \le N, \ \Re x_{ij}, \Im x_{ij} : 1 \le i < j \le N) \simeq \mathbb{R}^{N^2}.
\]
Since
\[
\prod_{i=1}^N e^{-x_{ii}^2/(2/N)} \prod_{i<j} e^{-((\Re x_{ij})^2 + (\Im x_{ij})^2)/(1/N)} = e^{-\sum_{i,j} |x_{ij}|^2/(2/N)} = e^{-N \, \mathrm{Tr}[X^2]/2},
\]
the $\mathbb{R}^{N^2}$ distribution of a GUE matrix is
\[
\frac{1}{Z} e^{-N \, \mathrm{Tr}[X^2]/2} \prod_{i=1}^N dx_{ii} \prod_{i<j} d\Re x_{ij} \, d\Im x_{ij} = \frac{1}{Z} e^{-N \, \mathrm{Tr}[X^2]/2} \, dX.
\]

Exercise 11.11.

a. Show that a $d$-dimensional Gaussian distribution $d\mu(x)$ with the identity covariance matrix (so that its components are independent and have variance 1) is rotationally invariant, in the sense that
\[
d\mu(x) = d\mu(Ox)
\]
for any orthogonal $d \times d$ matrix $O$. (Maxwell's theorem asserts that up to rescaling, these are the only rotationally invariant distributions with independent components.)

b. Show that a GUE matrix is unitarily invariant, in the sense that for any $N \times N$ unitary matrix $U$, if $X$ is an $N \times N$ GUE matrix, so is $U X U^*$. (You can use the definition, or Remark 11.10.) See Remark 11.48.

Example 11.12. We would like to compute
\[
\mathbb{E} \, \mathrm{tr}[X^4] = \frac{1}{N} \mathbb{E}\left[ \sum X_{i(1)i(2)} X_{i(2)i(3)} X_{i(3)i(4)} X_{i(4)i(1)} \right].
\]
Apply the Wick formula:
\[
\mathbb{E}\left[ X_{i(j)i(j+1)} X_{i(k)i(k+1)} \right] = \begin{cases} \frac{1}{N}, & \text{if } i(j) = i(k+1), \ i(j+1) = i(k), \\ 0, & \text{otherwise.} \end{cases}
\]
Draw a ribbon (fat) 4-star (with a marked edge). See Figure 11. Note that these pictures are dual to those used in topology. The rule above says we match the two edges with the correct orientation. Thus
\[
\begin{aligned}
\mathbb{E} \, \mathrm{tr}[X^4] = \frac{1}{N} \sum_{i(1), i(2), i(3), i(4)} \Big( &\mathbb{E}\left[ X_{i(1)i(2)} X_{i(2)i(3)} \right] \mathbb{E}\left[ X_{i(3)i(4)} X_{i(4)i(1)} \right] \delta_{i(1)=i(3)} \\
{} + {} &\mathbb{E}\left[ X_{i(1)i(2)} X_{i(4)i(1)} \right] \mathbb{E}\left[ X_{i(2)i(3)} X_{i(3)i(4)} \right] \delta_{i(2)=i(4)} \\
{} + {} &\mathbb{E}\left[ X_{i(1)i(2)} X_{i(3)i(4)} \right] \mathbb{E}\left[ X_{i(2)i(3)} X_{i(4)i(1)} \right] \delta_{i(1)=i(4)} \delta_{i(2)=i(3)} \delta_{i(2)=i(1)} \Big) \\
= {} & \frac{1}{N} \frac{1}{N^2} (N^3 + N^3 + N) = 1 + 1 + \frac{1}{N^2}.
\end{aligned}
\]

What about a general moment $\mathbb{E} \, \mathrm{tr}[X^{2n}]$? It corresponds to a ribbon $2n$-star (with a marked edge). See Figure 11.

In the diagram, identify edges pairwise, in an orientation-preserving way, to get a closed surface with a graph drawn on it. The number of vertices is 1, the number of edges is $n$, and the number of faces (loops) equals the number of free parameters. So for each $\pi \in P_2(2n)$, we get a term
\[
\frac{1}{N} N^{\#F} \frac{1}{N^n} = N^{\#F - \#E - 1}.
\]


Remark 11.13. The Euler characteristic of a connected surface is
\[
\chi = \#F - \#E + \#V = 2 - 2g,
\]
where $g$ is the genus of the surface. Thus in the preceding example,
\[
N^{\#F - \#E - 1} = N^{\chi - 2} = \frac{1}{N^{2g}}.
\]

Proposition 11.14. For $X$ GUE,
\[
\mathbb{E} \, \mathrm{tr}[X^{2n}] = \sum_{\pi \in P_2(2n)} \frac{1}{N^{2g(\pi)}} = \sum_{g=0}^n \frac{1}{N^{2g}} \left| \{ \pi \in P_2(2n) \text{ of genus } g \} \right|.
\]

Definition 11.15. A map is a connected (ribbon) graph embedded in an (orientable) compact surface in which all edges are matched in an orientation-preserving way, and each face (loop) is homeomorphic to a disk. The genus of a map is the genus of a surface into which it is embedded.

Remark 11.16. Note that
\[
\{\text{Maps with one vertex and } n \text{ edges}\} \leftrightarrow P_2(2n).
\]
Thus we proved: for $X$ GUE,
\[
\mathbb{E} \, \mathrm{tr}[X^{2n}] = \sum_{g=0}^n \frac{1}{N^{2g}} |M_g(2n)|.
\]
In particular, for large $N$,
\[
\mathbb{E} \, \mathrm{tr}[X^{2n}] \approx |M_0(2n)| = \#(\text{planar maps}) = |NC_2(2n)| = c_n = \varphi[s^{2n}],
\]
where $s$ is standard semicircular.
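The genus tabulation can be carried out by brute force, computing the face count of each pairing as the number of cycles of $\pi\gamma$ (the free-parameter count of Example 11.12; this is established formally in Theorem 11.37 below). A sketch, with ad hoc names:

```python
# Tabulate pair partitions of [2n] by genus: #V = 1, #E = n,
# #F = #cyc(pi gamma), and chi = #V - #E + #F = 2 - 2g (Remark 11.13).
from collections import Counter

def pairings(elems):
    if not elems:
        yield []
        return
    a = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i + 1:]
        for p in pairings(rest):
            yield [(a, elems[i])] + p

def count_cycles(perm):
    seen, c = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            c += 1
            while not seen[i]:
                seen[i] = True
                i = perm[i]
    return c

def genus(pair, n2):
    pi = [0] * n2
    for a, b in pair:
        pi[a], pi[b] = b, a
    faces = count_cycles([pi[(i + 1) % n2] for i in range(n2)])
    return (1 + n2 // 2 - faces) // 2     # g = (1 + n - #F) / 2

def genus_table(n):
    return Counter(genus(p, 2 * n) for p in pairings(list(range(2 * n))))

# n = 2: two planar pairings and one of genus 1, so E tr[X^4] = 2 + 1/N^2
assert genus_table(2) == Counter({0: 2, 1: 1})
# genus-0 counts are the Catalan numbers c_n
assert [genus_table(n)[0] for n in range(1, 5)] == [1, 2, 5, 14]
```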

Definition 11.17. $(X_N)_{N=1}^\infty$ have asymptotic semicircular distribution if $\mu_{X_N} \to \mu_s$ as $N \to \infty$. Thus
\[
X_N \in (M_N(L^{\infty-}), \mathbb{E} \circ \mathrm{tr}) \to s \in (\mathcal{A}, \varphi)
\]
in distribution. Compare with Definition 1.13. Equivalently, $m_n(X_N) \to m_n(s)$.

Corollary 11.18. Let $X$ be a GUE matrix. Then
\[
\mathbb{E} \, \mathrm{tr}[X^{2n}] \to c_n = \varphi[s^{2n}],
\]
and so the distribution of a GUE matrix is asymptotically semicircular.


Remark 11.19 (Wigner’s theorem). Much stronger statements hold. First, in fact

tr[X2n]→ ϕ[s2n] almost surely.

Second, the assumption that X is GUE can be weakened to assuming it is a Wigner matrix: a self-adjointmatrix for which

E [Xij] = 0, limN→∞

1

N2

N∑i,j=1

∣∣N E [|Xij|2]− 1∣∣ = 0

and for each n,supN

sup1≤i,j≤N

E[∣∣∣√NXij

∣∣∣n] <∞.Third, recall from Exercise 2.23 that for a self-adjoint matrix, its distribution is 1

N

∑Ni=1 δλi , where λi

are its (real) eigenvalues. Similarly, we may identify the distribution of a random matrix as

µN =1

N(δλ1 + . . .+ δλN ) ,

the random spectral distribution of X , which is a random measure. Then in fact, for a Wigner matrix, asN →∞,

µN →1

√4− x2 dx,

the Wigner semicircle law.

The results go back to the work of (Wigner 1958) and (Dyson 1962) on modeling a heavy nucleus, and area (weak) example of universality, which is more properly exhibited in the behavior of eigenvalue spacingsrather than the eigenvalues themselves.

Exercise 11.20. Let $(\mathcal{A}, \varphi)$ be a n.c. probability space. Let
\[
\{s_i : 1 \le i \le N\} \subset \mathcal{A}, \qquad \{c_{ij} : 1 \le i < j \le N\} \subset \mathcal{A}
\]
be standard semicircular, respectively, standard circular, all free among themselves. Let $X_N \in M_N(\mathcal{A})$ have entries
\[
X_{ii} = \frac{1}{\sqrt{N}} s_i, \qquad X_{ij} = \frac{1}{\sqrt{N}} c_{ij}, \qquad X_{ji} = \frac{1}{\sqrt{N}} c_{ij}^* \ \text{ for } i < j.
\]
Prove that for any $N$, the distribution of $X_N \in (M_N(\mathcal{A}), \varphi \circ \mathrm{tr})$ is (exactly, not asymptotically) standard semicircular. (Adapt Corollary 11.7 and the argument in Example 11.12 to this setting.)

11.2 Map enumeration.

Change the point of view: think of the matrix integrals as generating functions for maps.


Remark 11.21. We know that
\[
\mathbb{E} \, \mathrm{tr}[X^{2n}] = \sum_{g=0}^n \frac{1}{N^{2g}} \left| \{\text{maps with 1 vertex, } n \text{ edges, of genus } g\} \right|.
\]
What about
\[
\mathbb{E}\left[ N \, \mathrm{Tr}(X^{n(1)}) \, N \, \mathrm{Tr}(X^{n(2)}) \ldots N \, \mathrm{Tr}(X^{n(k)}) \right] = \mathbb{E}\left[ \prod_{i=1}^k N \, \mathrm{Tr}(X^{n(i)}) \right] = N^k \sum \mathbb{E}\left[ X_{i(1)i(2)} \ldots X_{i(n(1))i(1)} X_{i(n(1)+1)i(n(1)+2)} \ldots X_{i(n(1)+n(2))i(n(1)+1)} \ldots \right]?
\]
Here we have $k$ stars, of types $n(1), \ldots, n(k)$, and we pair off all edges in orientation-preserving ways. See Figure 11 for a single term from the expansion of $\mathbb{E}[N \, \mathrm{Tr}(X^4) \, N \, \mathrm{Tr}(X^3) \, N \, \mathrm{Tr}(X^3)]$. We get $k$ vertices, with the number of faces again equal to the number of free parameters. Applying the Wick theorem, each term in the expression above is of the form
\[
N^k \frac{1}{N^{\#E}} N^{\#F} = N^{\#V - \#E + \#F}.
\]
In general, if a map has $c$ components,
\[
2c - 2\sum_{i=1}^c g_i = 2c - 2g = \#V - \#E + \#F.
\]
Thus
\[
\mathbb{E}\left[ \prod_{i=1}^k N \, \mathrm{Tr}(X^{n(i)}) \right] = \sum_{g=0}^\infty \sum_{c=1}^\infty \frac{1}{N^{2g - 2c}} \left| G_{g,c}(n(i), 1 \le i \le k) \right|,
\]
where
\[
G_{g,c}(n(i), 1 \le i \le k) = \{\text{unions of } c \text{ maps built out of stars of type } n(1), \ldots, n(k), \text{ of total genus } g\}.
\]
What is the largest power of $N$? Unclear.

Remark 11.22. Next,
\[
\mathbb{E} \exp\left( \sum_{i=1}^k t_i N \, \mathrm{Tr}[X^i] \right) = \mathbb{E} \sum_{n=0}^\infty \frac{1}{n!} \left( \sum_{i=1}^k t_i N \, \mathrm{Tr}[X^i] \right)^n
\]
\[
= \mathbb{E} \sum_{n=0}^\infty \frac{1}{n!} \sum_{n(1)+\ldots+n(k)=n} \binom{n}{n(1), n(2), \ldots, n(k)} \prod_{i=1}^k t_i^{n(i)} (N \, \mathrm{Tr}[X^i])^{n(i)}
\]
\[
= \mathbb{E} \sum_{n=0}^\infty \sum_{n(1)+\ldots+n(k)=n} \prod_{i=1}^k \frac{t_i^{n(i)}}{n(i)!} (N \, \mathrm{Tr}[X^i])^{n(i)}
\]
\[
= \sum_{g=0}^\infty \sum_{c=1}^\infty \frac{1}{N^{2g-2c}} \sum_{n(1), \ldots, n(k)=0}^\infty \prod_{i=1}^k \frac{t_i^{n(i)}}{n(i)!} \left| G_{g,c}((i, n(i)), 1 \le i \le k) \right|.
\]
Note: powers of $N$ may be positive or negative; not very nice.

Remark 11.23 (Connected maps). A general combinatorial principle:
\[
\sum_{\text{diagrams}} F_{\text{diagram}} = \sum_{\pi \in \{\text{partitions of the components of the diagram}\}} F_\pi.
\]
This is the relation between moments and classical cumulants, and we know from Remark 5.7 that these are also related via
\[
\sum_{n=0}^\infty \frac{1}{n!} m_n z^n = \exp\left( \sum_{n=1}^\infty \frac{1}{n!} k_n z^n \right).
\]
It follows that the generating function for connected maps is
\[
\log \mathbb{E} \exp\left( \sum_{i=1}^k t_i N \, \mathrm{Tr}[X^i] \right) = \sum_{g=0}^\infty \frac{1}{N^{2g-2}} \sum_{\substack{n(1), \ldots, n(k) \ge 0 \\ n \ne (0,0,\ldots,0)}} \prod_{i=1}^k \frac{t_i^{n(i)}}{n(i)!} \left| M_g((i, n(i)), 1 \le i \le k) \right|,
\]
where
\[
M_g((i, n(i)), 1 \le i \le k) = \{\text{maps of genus } g \text{ built out of } n(i) \text{ stars of type } i, \ 1 \le i \le k\}.
\]

Corollary 11.24. The generating function for connected maps is
\[
\frac{1}{N^2} \log \mathbb{E} \exp\left( \sum_{i=1}^k t_i N \, \mathrm{Tr}[X^i] \right) = \sum_{g=0}^\infty \frac{1}{N^{2g}} \sum_{\substack{n(1), \ldots, n(k) \ge 0 \\ n \ne (0,0,\ldots,0)}} \prod_{i=1}^k \frac{t_i^{n(i)}}{n(i)!} \left| M_g((i, n(i)), 1 \le i \le k) \right|.
\]
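The exp-log principle of Remark 11.23 can be checked on the simplest case, where the only connected "diagram" is a single pair: exponentiating the Gaussian cumulant EGF $z^2/2$ must reproduce the pairing counts $m_{2k} = (2k-1)!!$. A small sketch (helper name mine):

```python
# Check sum m_n z^n / n! = exp(sum k_n z^n / n!) for the Gaussian case
# k_2 = 1, all other k_n = 0: the moments must be m_{2k} = (2k-1)!!.
import math

def series_exp(a, n):
    # exp of a power series with a[0] == 0, modulo z^n (uses e' = a' e)
    e = [0.0] * n
    e[0] = 1.0
    for k in range(1, n):
        e[k] = sum(j * a[j] * e[k - j] for j in range(1, k + 1)) / k
    return e

c = [0.0] * 10
c[2] = 0.5                      # k_2 / 2! with k_2 = 1
e = series_exp(c, 10)
m = [round(e[k] * math.factorial(k)) for k in range(10)]
assert m == [1, 0, 1, 0, 3, 0, 15, 0, 105, 0]   # double factorials (2k-1)!!
```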

Remark 11.25 (Further results).

a. One can look at maps where every edge has a color. These arise by gluing stars where only edges of the same color are glued. Their generating functions are matrix integrals in several independent GUE matrices, since in the Wick formula, only matrices of the same color can be paired.

b. The expansions above are purely formal. Proofs that these are actual asymptotic expansions are very hard, see (Ercolani, McLaughlin 2003), (Guionnet, Maurel-Segala 2006, 2007). Their results apply to $\frac{1}{N^2} \log \mathbb{E} \exp(N \, \mathrm{Tr}[V(X)])$, where $V$ is a convex potential.

c. Physicists use the term "planar sector of quantum field theory" or "planar approximation" for the leading term in the expansion.

d. Note that since $X$ is GUE,
\[
\mathbb{E} \exp\left( \sum_{i=1}^n t_i N \, \mathrm{Tr}[X^i] \right) = \int \exp\left( N \, \mathrm{Tr}\left[ -X^2/2 + \sum_{i=1}^n t_i X^i \right] \right) dX = Z,
\]
where
\[
\frac{1}{Z} \exp\left( N \, \mathrm{Tr}\left[ -X^2/2 + \sum_{i=1}^n t_i X^i \right] \right) dX
\]
is a probability measure. $Z$ is called the partition function. It can be thought of as a normalization constant, but $\frac{1}{N^2} \log Z$ can also be thought of as a generating function for maps.

e. Sometimes the matrix integrals can be computed directly, thus giving formulas for generating functions. Namely, for the matrix model above and sufficiently nice $V$, the asymptotic distribution $\mu$ is a minimizer of
\[
\int \log |x - y| \, d\mu(x) \, d\mu(y) - \int V(x) \, d\mu(x). \tag{11.1}
\]
In particular, it satisfies the Schwinger-Dyson equation
\[
2 \int \frac{d\mu(x)}{y - x} = V'(y), \qquad y \in \mathrm{supp}(\mu). \tag{11.2}
\]
This has generalizations to colored models and multi-variate equations, using cyclic derivatives etc.

Exercise 11.26. Sketch the arguments below; I am not asking for complete technical details.

a. Explain why equation 11.1 implies that
\[
2 \int \log |x - y| \, d\mu(x) - V(y) = C, \qquad y \in \mathrm{supp}(\mu),
\]
where $C$ is a constant independent of $y$. Note that differentiating this expression we get equation 11.2. Hint: Consider deformations $\mu + \varepsilon \nu$ for a signed measure $\nu$.

b. Show that for the quadratic potential $V(x) = x^2/2$, the semicircular distribution is a solution of equation 11.2. Hint: use the real part of the Cauchy transform. Note that care is required in even defining $\int \frac{d\mu(x)}{y - x}$ for $y$ real.

Example 11.27. Let $X$ be GUE, and $D^{(1)}, \ldots, D^{(n)}$ (for each $N$) fixed (non-random) matrices.
\[
\mathbb{E} \, \mathrm{tr}\left[ X D^{(1)} \ldots X D^{(n)} \right] = ?
\]
Consider
\[
\mathbb{E} \, \mathrm{tr}\left[ X D^{(1)} X D^{(2)} X D^{(3)} X D^{(4)} \right].
\]
See Figure 11. According to the Wick formula, this expression is again a sum of three terms. The first one is
\[
\frac{1}{N} \mathbb{E}\left[ X_{i(1)i(2)} D^{(1)}_{i(2)i(2)} X_{i(2)i(1)} D^{(2)}_{i(1)i(3)} X_{i(3)i(4)} D^{(3)}_{i(4)i(4)} X_{i(4)i(3)} D^{(4)}_{i(3)i(1)} \right]
= \frac{1}{N} \frac{1}{N^2} \sum_{i(2)} D^{(1)}_{i(2)i(2)} \sum_{i(4)} D^{(3)}_{i(4)i(4)} \sum_{i(1), i(3)} D^{(2)}_{i(1)i(3)} D^{(4)}_{i(3)i(1)}
= \mathrm{tr}[D^{(1)}] \, \mathrm{tr}[D^{(3)}] \, \mathrm{tr}[D^{(2)} D^{(4)}].
\]
Similarly, the second term is $\mathrm{tr}[D^{(2)}] \, \mathrm{tr}[D^{(4)}] \, \mathrm{tr}[D^{(1)} D^{(3)}]$. In the third term,
\[
\frac{1}{N} \mathbb{E}\left[ X_{i(1)i(2)} D^{(1)}_{i(2)i(3)} X_{i(3)i(4)} D^{(2)}_{i(4)i(2)} X_{i(2)i(1)} D^{(3)}_{i(1)i(4)} X_{i(4)i(3)} D^{(4)}_{i(3)i(1)} \right]
= \frac{1}{N^2} \mathrm{tr}\left[ D^{(1)} D^{(4)} D^{(3)} D^{(2)} \right].
\]
Note the order!

11.3 Partitions and permutations.

Remark 11.28. Denote by $\mathrm{Sym}(n)$ the permutations of the set $[n]$. We will use the cycle notation
\[
\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 3 & 6 & 7 & 1 & 5 & 2 & 4 \end{pmatrix} \leftrightarrow (1374)(26)(5).
\]
This suggests a natural embedding
\[
P : \mathcal{P}(n) \hookrightarrow \mathrm{Sym}(n), \qquad \pi \mapsto P_\pi,
\]
where a block of a partition is mapped to a cycle of a permutation, in increasing order.

For a partition $\pi$, we had, for example,
\[
R_\pi[a_1, a_2, \ldots, a_n] = \prod_{V \in \pi} R[a_i : i \in V].
\]
For a permutation $\alpha$ and matrices, we can write
\[
\mathrm{tr}_\alpha[A_1, A_2, \ldots, A_n] = \prod_{\substack{V \text{ a cycle in } \alpha \\ V = (u(1), u(2), \ldots, u(k))}} \mathrm{tr}\left[ A_{u(1)} A_{u(2)} \ldots A_{u(k)} \right].
\]
Note that this is well-defined because $\mathrm{tr}$ is cyclically symmetric.

Denote by $S_{NC}(n)$ the image of $NC(n)$ in $\mathrm{Sym}(n)$ under the embedding above. We would like to find a more intrinsic definition of this subset.

Remark 11.29. Recall: a transposition is a 2-cycle $(i, j)$. $\mathrm{Sym}(n)$ is generated by transpositions. In fact,
\[
(u(1) u(2) \ldots u(k)) = (u(1) u(2)) \cdot (u(2) u(3)) \cdot \ldots \cdot (u(k-2) u(k-1)) \cdot (u(k-1) u(k)).
\]

Lemma 11.30. Let $\alpha \in \mathrm{Sym}(n)$, and $\sigma = (ab)$ be a transposition. Then
\[
\#\mathrm{cyc}(\alpha\sigma) = \begin{cases} \#\mathrm{cyc}(\alpha) + 1, & \text{if } a, b \text{ are in the same cycle of } \alpha, \\ \#\mathrm{cyc}(\alpha) - 1, & \text{if } a, b \text{ are in different cycles of } \alpha. \end{cases}
\]

Proof. See Figure 11. The result follows from the observation that
\[
(u(1) \ldots u(i) \ldots u(j) \ldots u(k)) \cdot (u(i) u(j)) = (u(1) \ldots u(i-1) \, u(i) \, u(j+1) \ldots u(k)) \, (u(i+1) \ldots u(j-1) \, u(j))
\]
and
\[
(u(1) \ldots u(k)) (v(1) \ldots v(m)) \cdot (u(i) v(j)) = (u(1) \ldots u(i) \, v(j+1) \ldots v(m) \, v(1) \ldots v(j) \, u(i+1) \ldots u(k)).
\]

Definition 11.31. For $\alpha \in \mathrm{Sym}(n)$, denote
\[
|\alpha| = \min\{ k : \alpha = \sigma_1 \sigma_2 \ldots \sigma_k \text{ for } \sigma_i \text{ transpositions} \},
\]
with $|e| = 0$, where $e$ is the identity element in $\mathrm{Sym}(n)$. Also denote by
\[
\gamma = (1, 2, \ldots, n) \in \mathrm{Sym}(n)
\]
the long cycle.

Lemma 11.32. Let $\alpha, \beta \in \mathrm{Sym}(n)$.

a. $d(\alpha, \beta) = |\alpha^{-1}\beta|$ is a metric on $\mathrm{Sym}(n)$. (In fact, it is a distance in a certain Cayley graph of $\mathrm{Sym}(n)$.)

b. $|\alpha| = n - \#\mathrm{cyc}(\alpha)$.

c. $|\alpha\beta| = |\beta\alpha|$.

Proof. For part (a), $d(\alpha, \alpha) = |e| = 0$; since $\alpha^{-1}\nu = (\alpha^{-1}\beta)(\beta^{-1}\nu)$,
\[
\left| \alpha^{-1}\nu \right| \le \left| \alpha^{-1}\beta \right| + \left| \beta^{-1}\nu \right|,
\]
and
\[
\alpha^{-1}\beta = \sigma_1 \sigma_2 \ldots \sigma_k \quad \Leftrightarrow \quad \beta^{-1}\alpha = \sigma_k \ldots \sigma_2 \sigma_1.
\]
For part (b), by Remark 11.29,
\[
n - \#\mathrm{cyc}(\alpha) = \sum_{V \text{ a cycle in } \alpha} (|V| - 1) \ge |\alpha|.
\]
On the other hand, since $\#\mathrm{cyc}(e) = n$, by Lemma 11.30
\[
\#\mathrm{cyc}(\sigma_1 \ldots \sigma_k) \ge n - k,
\]
and so $\#\mathrm{cyc}(\alpha) \ge n - |\alpha|$. The result follows. Part (c) is equivalent to proving that in general, $|\alpha^{-1}\beta\alpha| = |\beta|$, which follows from part (b), since conjugation preserves the cycle structure.
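The lemma lends itself to an exhaustive numerical check on random permutations (helper names mine):

```python
# Numerical check of Lemma 11.32: |alpha| = n - #cyc(alpha) gives a metric
# d(alpha, beta) = |alpha^{-1} beta|, and |alpha beta| = |beta alpha|.
import random

def count_cycles(p):
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            while not seen[i]:
                seen[i] = True
                i = p[i]
    return c

def length(p):                 # |alpha| = n - #cyc(alpha)
    return len(p) - count_cycles(p)

def compose(p, q):             # (p q)(i) = p(q(i))
    return [p[q[i]] for i in range(len(q))]

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return inv

n = 8
gamma = [(i + 1) % n for i in range(n)]        # the long cycle
assert length(list(range(n))) == 0 and length(gamma) == n - 1

rng = random.Random(2)
d = lambda x, y: length(compose(inverse(x), y))
for _ in range(300):
    a, b, c = (rng.sample(range(n), n) for _ in range(3))
    assert length(compose(a, b)) == length(compose(b, a))   # part (c)
    assert d(a, c) <= d(a, b) + d(b, c)                     # triangle inequality
```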

Theorem 11.33 (Biane 1997).

a. $\alpha \in S_{NC}(n)$ if and only if
\[
|\alpha| + \left| \alpha^{-1}\gamma \right| = n - 1.
\]
Since $d(e, \gamma) = n - 1$, this says that
\[
d(e, \alpha) + d(\alpha, \gamma) = d(e, \gamma),
\]
so $\alpha$ lies on a geodesic from $e$ to $\gamma$.

b. For $\alpha, \beta \in S_{NC}(n)$, denote $\alpha \le \beta$ if
\[
|\alpha| + \left| \alpha^{-1}\beta \right| + \left| \beta^{-1}\gamma \right| = n - 1.
\]
Equivalently, $\alpha$ and $\beta$ lie on the same geodesic from $e$ to $\gamma$, with $\alpha$ closer to $e$. Then the map $NC(n) \to S_{NC}(n)$ is a lattice isomorphism.

Proof. Suppose $|\alpha| + |\alpha^{-1}\gamma| = n - 1$, so that
\[
\alpha = \sigma_1 \sigma_2 \ldots \sigma_k, \qquad \alpha^{-1}\gamma = \sigma_{k+1} \ldots \sigma_{n-1}.
\]
Then $\gamma = \sigma_1 \sigma_2 \ldots \sigma_{n-1}$. Recall that $e$ has $n$ cycles, $\gamma$ has 1 cycle, and each multiplication by $\sigma_j$ changes the number of cycles by 1. It follows that
\[
\#\mathrm{cyc}(\sigma_1 \ldots \sigma_j) = n - j.
\]
Therefore going from $\sigma_1 \ldots \sigma_j$ to
\[
\sigma_1 \ldots \sigma_{j-1} = (\sigma_1 \ldots \sigma_j)\sigma_j,
\]
$\sigma_j$ cuts one cycle into two, which by the proof of Lemma 11.30 happens in a non-crossing way. The converse is similar. For part (b), we note that $\alpha \le \beta$ if and only if we can write
\[
\alpha = \sigma_1 \ldots \sigma_j, \qquad \beta = \sigma_1 \ldots \sigma_j \ldots \sigma_k, \qquad \gamma = \sigma_1 \ldots \sigma_j \ldots \sigma_k \ldots \sigma_{n-1}.
\]
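Part (a) can be verified exhaustively for small $n$: the number of permutations on a geodesic from $e$ to $\gamma$ should be the Catalan number $|NC(n)|$.

```python
# Brute-force check of Theorem 11.33(a): the number of alpha in Sym(n) with
# |alpha| + |alpha^{-1} gamma| = n - 1 equals |NC(n)| = Catalan(n).
from itertools import permutations

def count_cycles(p):
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            while not seen[i]:
                seen[i] = True
                i = p[i]
    return c

def length(p):
    return len(p) - count_cycles(p)

def geodesic_count(n):
    gamma = [(i + 1) % n for i in range(n)]
    inv = lambda p: [p.index(i) for i in range(n)]
    comp = lambda p, q: [p[q[i]] for i in range(n)]
    return sum(1 for a in permutations(range(n))
               if length(a) + length(comp(inv(list(a)), gamma)) == n - 1)

assert [geodesic_count(n) for n in (2, 3, 4, 5)] == [2, 5, 14, 42]
```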

Lemma 11.34. Under the identification $NC(n) \leftrightarrow S_{NC}(n)$,
\[
K[\pi] \leftrightarrow P_\pi^{-1}\gamma.
\]
Sketch of proof. See Figure 11. The proof is by induction. For $\alpha = \sigma_1 = (i, j)$, $i < j$,
\[
\alpha^{-1}\gamma = (1 \ldots (i-1) \, j \ldots n)(i \ldots (j-1)) = K[\alpha].
\]
We want to build up to the case of
\[
\alpha = \sigma_1 \ldots \sigma_k, \qquad \alpha^{-1}\gamma = \sigma_{k+1} \ldots \sigma_{n-1}.
\]
At the induction step, multiplying $\alpha\sigma_j$ connects two cycles of $\alpha$, while $\sigma_j \alpha^{-1}\gamma$ cuts a cycle into two.

Remark 11.35. Biane’s embedding above allows us to define noncrossing partitions for other Coxetergroups, for example noncrossing partitions of type B.

Notation 11.36. For $\alpha \in \mathrm{Sym}(n)$,
\[
\mathrm{tr}_\alpha[X_1, \ldots, X_n] = \prod_{\text{cycles of } \alpha} \mathrm{tr}\left[ \prod_{i \in \text{cycle}} X_i \right].
\]
For example,
\[
\mathrm{tr}_{(152)(34)}[X_1, X_2, X_3, X_4, X_5] = \mathrm{tr}[X_1 X_5 X_2] \, \mathrm{tr}[X_3 X_4].
\]
Note: this makes sense since $\mathrm{tr}$ is cyclically symmetric.

Theorem 11.37. Let $X$ be a GUE matrix, and $D^{(1)}, \ldots, D^{(2n)}$ be fixed matrices.

a.
\[
\mathbb{E} \, \mathrm{tr}[X^{2n}] = \sum_{\pi \in P_2(2n)} \frac{1}{N^{n - \#\mathrm{cyc}(\pi\gamma) + 1}}.
\]

b.
\[
\mathbb{E} \, \mathrm{tr}\left[ X D^{(1)} X D^{(2)} \ldots X D^{(2n)} \right] = \sum_{\pi \in P_2(2n)} \frac{1}{N^{n - \#\mathrm{cyc}(\pi\gamma) + 1}} \mathrm{tr}_{\pi\gamma}\left[ D^{(1)}, D^{(2)}, \ldots, D^{(2n)} \right].
\]

Proof. For (a), using Example 11.12, we only need to show that
\[
\#\mathrm{cyc}(\pi\gamma) = \#\{\text{faces in the map corresponding to } \pi\} = \#\{\text{free parameters}\}.
\]
We are computing
\[
\mathbb{E}\left[ X_{i(1)i(2)} X_{i(2)i(3)} \ldots X_{i(2n)i(1)} \right].
\]
The term in the Wick sum corresponding to $\pi$ pairs together
\[
X_{i(j)i(j+1)}, \qquad X_{i(\pi(j))i(\pi(j)+1)}.
\]
In other words,
\[
i(j) = i(\pi(j) + 1) = i(\gamma\pi(j)), \qquad i(j+1) = i(\gamma(j)) = i(\pi(j)).
\]
Thus indeed,
\[
\#\{\text{free parameters}\} = \#\mathrm{cyc}(\gamma\pi) = \#\mathrm{cyc}(\pi\gamma).
\]
For part (b), we write
\[
\mathbb{E} \, \mathrm{tr}\left[ X D^{(1)} X D^{(2)} \ldots X D^{(2n)} \right] = \sum \mathbb{E}\left[ X_{i(1)u(1)} D^{(1)}_{u(1)i(2)} X_{i(2)u(2)} D^{(2)}_{u(2)i(3)} \ldots X_{i(2n)u(2n)} D^{(2n)}_{u(2n)i(1)} \right].
\]

From the Wick formula, $i(j) = u(\pi(j))$, $u(j) = i(\pi(j))$. So we get
\[
\sum_{\pi \in P_2(2n)} \frac{1}{N^{n+1}} \sum_{i(j) = u(\pi(j))} D^{(1)}_{u(1)i(2)} D^{(2)}_{u(2)i(3)} \ldots D^{(2n)}_{u(2n)i(1)}
= \sum_{\pi \in P_2(2n)} \frac{1}{N^{n+1}} \sum_{u(j)} D^{(1)}_{u(1)u(\pi(2))} D^{(2)}_{u(2)u(\pi(3))} \ldots D^{(2n)}_{u(2n)u(\pi(1))}
\]
\[
= \sum_{\pi \in P_2(2n)} \frac{1}{N^{n+1}} \sum_{u(j)} D^{(1)}_{u(1)u(\pi\gamma(1))} D^{(\pi\gamma(1))}_{u(\pi\gamma(1))u((\pi\gamma)^2(1))} D^{((\pi\gamma)^2(1))}_{u((\pi\gamma)^2(1))u((\pi\gamma)^3(1))} \ldots
\]
\[
= \sum_{\pi \in P_2(2n)} \frac{1}{N^{n+1}} \mathrm{Tr}_{\pi\gamma}\left[ D^{(1)}, D^{(2)}, \ldots, D^{(2n)} \right]
= \sum_{\pi \in P_2(2n)} \frac{1}{N^{n - \#\mathrm{cyc}(\pi\gamma) + 1}} \mathrm{tr}_{\pi\gamma}\left[ D^{(1)}, D^{(2)}, \ldots, D^{(2n)} \right].
\]

Remark 11.38. Note that the leading term in the expansion corresponds to $g = 0$, but also to $\pi \in NC_2(2n)$. Equivalently,
\[
n - \#\mathrm{cyc}(\pi\gamma) + 1 = 2n - \#\mathrm{cyc}(\pi\gamma) - n + 1 = |\pi\gamma| - n + 1 = \left| \pi^{-1}\gamma \right| + |\pi| - 2n + 1 = 0
\]
if and only if $\pi \in NC_2(2n) \subset \mathrm{Sym}(2n)$.
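This equivalence can be cross-checked directly: the exponent $n - \#\mathrm{cyc}(\pi\gamma) + 1$ vanishes exactly for the noncrossing pairings, detected below by an explicit crossing test (helpers mine).

```python
# Cross-check of Remark 11.38: for pairings pi of [2n], the exponent
# n - #cyc(pi gamma) + 1 is zero iff pi is noncrossing.

def pairings(elems):
    if not elems:
        yield []
        return
    a = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i + 1:]
        for p in pairings(rest):
            yield [(a, elems[i])] + p

def count_cycles(p):
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            while not seen[i]:
                seen[i] = True
                i = p[i]
    return c

def exponent(pair, n2):
    pi = [0] * n2
    for a, b in pair:
        pi[a], pi[b] = b, a
    return n2 // 2 - count_cycles([pi[(i + 1) % n2] for i in range(n2)]) + 1

def is_noncrossing(p):
    return not any(a < c < b < d for (a, b) in p for (c, d) in p)

for n in range(1, 5):
    for p in pairings(list(range(2 * n))):
        assert (exponent(p, 2 * n) == 0) == is_noncrossing(p)
```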

11.4 Asymptotic free independence for Gaussian random matrices.

Definition 11.39. Recall from Definition 1.13: let $a_1^{(N)}, a_2^{(N)}, \ldots, a_k^{(N)} \in (\mathcal{A}_N, \varphi_N)$.

a. We say that
\[
(a_1^{(N)}, \ldots, a_k^{(N)}) \to (a_1, \ldots, a_k) \subset (\mathcal{A}, \varphi)
\]
in distribution if for each $\vec{u}$,
\[
\varphi_N\left[ a_{u(1)}^{(N)} a_{u(2)}^{(N)} \ldots a_{u(n)}^{(N)} \right] \to \varphi\left[ a_{u(1)} a_{u(2)} \ldots a_{u(n)} \right]
\]
as $N \to \infty$. Equivalently,
\[
R^{\varphi_N}\left[ a_{u(1)}^{(N)}, a_{u(2)}^{(N)}, \ldots, a_{u(n)}^{(N)} \right] \to R^{\varphi}\left[ a_{u(1)}, a_{u(2)}, \ldots, a_{u(n)} \right].
\]
In this case we say that $(a_1^{(N)}, a_2^{(N)}, \ldots, a_k^{(N)})$ have the asymptotic distribution $\mu_{a_1, a_2, \ldots, a_k}$.

b. $a_1^{(N)}, a_2^{(N)}, \ldots, a_k^{(N)}$ are asymptotically free if $(a_1, \ldots, a_k) \subset (\mathcal{A}, \varphi)$ are free. Equivalently,
\[
R^{\varphi_N}\left[ a_{u(1)}^{(N)}, a_{u(2)}^{(N)}, \ldots, a_{u(n)}^{(N)} \right] \xrightarrow{N \to \infty} 0
\]
unless all $u(1) = u(2) = \ldots = u(n)$.

Note that limits of tracial joint distributions are tracial.

Remark 11.40. Let $X_1, \ldots, X_k$ be entry-wise independent GUE. Say $\pi \in \mathcal{P}(n)$ is consistent with
\[
\vec{u} = (u(1), u(2), \ldots, u(n)), \qquad 1 \le u(i) \le k,
\]
if
\[
i \overset{\pi}{\sim} j \ \Rightarrow \ u(i) = u(j).
\]
See Figure 11. Denote the set of such $\pi$ by $\mathcal{P}_{\vec{u}}(n)$. Then as in Theorem 11.37,
\[
\mathbb{E} \, \mathrm{tr}\left[ X_{u(1)} X_{u(2)} \ldots X_{u(2n)} \right] = \sum_{\pi \in P_{2,\vec{u}}(2n)} \frac{1}{N^{n - \#\mathrm{cyc}(\pi\gamma) + 1}}
\]
and
\[
\mathbb{E} \, \mathrm{tr}\left[ X_{u(1)} D^{(1)} X_{u(2)} D^{(2)} \ldots X_{u(2n)} D^{(2n)} \right] = \sum_{\pi \in P_{2,\vec{u}}(2n)} \frac{1}{N^{n - \#\mathrm{cyc}(\pi\gamma) + 1}} \mathrm{tr}_{\pi\gamma}\left[ D^{(1)}, D^{(2)}, \ldots, D^{(2n)} \right].
\]

Theorem 11.41. Let $X_1, \ldots, X_p$ be entry-wise independent GUE matrices, and $D_1, \ldots, D_q$ be non-random matrices with an asymptotic (tracial) joint distribution $\mu_{d_1, \ldots, d_q}$.

a. $X_1, \ldots, X_p$ are asymptotically freely independent. Consequently, in distribution
\[
(X_1, \ldots, X_p) \to (s_1, \ldots, s_p),
\]
where $(s_1, \ldots, s_p)$ is a free semicircular system.

b. More generally, in distribution
\[
(X_1, \ldots, X_p, D_1, \ldots, D_q) \to (s_1, \ldots, s_p, d_1, \ldots, d_q),
\]
where $s_1, \ldots, s_p$ are free semicircular elements freely independent from $d_1, \ldots, d_q$.

Proof. Taking the limit $N \to \infty$ in the preceding remark and using Remark 11.38,
\[
\mathbb{E} \, \mathrm{tr}\left[ X_{u(1)} X_{u(2)} \ldots X_{u(n)} \right] \to \sum_{\pi \in NC_{2,\vec{u}}(n)} 1,
\]
which implies that for all $\pi \in NC(n)$,
\[
R_\pi\left[ X_{u(1)}, X_{u(2)}, \ldots, X_{u(n)} \right] \to \begin{cases} 1, & \pi \in NC_2(n) \ \vec{u}\text{-consistent}, \\ 0, & \text{otherwise} \end{cases} = R_\pi\left[ s_{u(1)}, s_{u(2)}, \ldots, s_{u(n)} \right].
\]
Similarly, taking $D^{(i)} = f_i(D_1, \ldots, D_q)$ in the preceding remark,
\[
\lim_{N \to \infty} \mathbb{E} \, \mathrm{tr}\left[ X_{u(1)} f_1(D_1, \ldots, D_q) X_{u(2)} f_2(D_1, \ldots, D_q) \ldots X_{u(n)} f_n(D_1, \ldots, D_q) \right]
\]
\[
= \lim_{N \to \infty} \sum_{\pi \in NC_{2,\vec{u}}(n)} \mathrm{tr}_{\pi\gamma}\left[ f_1(D_1, \ldots, D_q), f_2(D_1, \ldots, D_q), \ldots, f_n(D_1, \ldots, D_q) \right]
\]
\[
= \sum_{\pi \in NC_{2,\vec{u}}(n)} \varphi_{\pi\gamma}\left[ f_1(d_1, \ldots, d_q), f_2(d_1, \ldots, d_q), \ldots, f_n(d_1, \ldots, d_q) \right]
\]
\[
= \sum_{\pi \in NC_{2,\vec{u}}(n)} M^{\varphi}_{K[\pi]}\left[ f_1(d_1, \ldots, d_q), f_2(d_1, \ldots, d_q), \ldots, f_n(d_1, \ldots, d_q) \right]
\]
\[
= \sum_{\pi \in NC(n)} R^{\varphi}_\pi\left[ s_{u(1)}, s_{u(2)}, \ldots, s_{u(n)} \right] M^{\varphi}_{K[\pi]}\left[ f_1(d_1, \ldots, d_q), f_2(d_1, \ldots, d_q), \ldots, f_n(d_1, \ldots, d_q) \right]
\]
\[
= \varphi\left[ s_{u(1)} f_1(d_1, \ldots, d_q) s_{u(2)} f_2(d_1, \ldots, d_q) \ldots s_{u(n)} f_n(d_1, \ldots, d_q) \right],
\]
where we have used Lemma 11.34, properties of free cumulants of semicircular elements, and the results from Remark 8.1.
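The mechanism of the proof can be made concrete in the $D$-free case: using Remark 11.40, the sketch below (helpers mine) evaluates $\mathbb{E}\,\mathrm{tr}[X_{u(1)} \cdots X_{u(2n)}]$ exactly as a sum of powers of $1/N$ over $\vec{u}$-consistent pairings, and checks that mixed moments such as $\mathbb{E}\,\mathrm{tr}[X_1 X_2 X_1 X_2] = 1/N^2$ vanish in the limit, while consistent noncrossing pairings survive.

```python
# Exact finite-N mixed moments of independent GUE matrices (Remark 11.40):
# E tr[X_{u(1)} ... X_{u(2n)}] = sum over u-consistent pairings of
# N^{-(n - #cyc(pi gamma) + 1)}.  Fractions keep the result exact.
from fractions import Fraction

def pairings(elems):
    if not elems:
        yield []
        return
    a = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i + 1:]
        for p in pairings(rest):
            yield [(a, elems[i])] + p

def count_cycles(p):
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            while not seen[i]:
                seen[i] = True
                i = p[i]
    return c

def moment(u, N):
    n2 = len(u)
    total = Fraction(0)
    for pair in pairings(list(range(n2))):
        if all(u[a] == u[b] for a, b in pair):   # u-consistency
            pi = [0] * n2
            for a, b in pair:
                pi[a], pi[b] = b, a
            faces = count_cycles([pi[(i + 1) % n2] for i in range(n2)])
            total += Fraction(1, N ** (n2 // 2 - faces + 1))
    return total

assert moment((1, 2, 1, 2), 10) == Fraction(1, 100)    # -> 0: asymptotic freeness
assert moment((1, 1, 2, 2), 10) == 1                   # phi[s_1^2 s_2^2] = 1
assert moment((1, 1, 1, 1), 5) == 2 + Fraction(1, 25)  # Example 11.12
assert moment((1, 1, 2, 2, 1, 1), 1000) == 2 + Fraction(1, 1000 ** 2)  # -> phi = 2
```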

Remark 11.42 (Other Gaussian ensembles).

a. GOE (Gaussian orthogonal ensemble): $X$ real symmetric Gaussian. Counts maps on non-oriented surfaces. The leading term is still planar, since any identification of two edges with the opposite orientation leads to non-zero genus.

b. GSE (Gaussian symplectic ensemble): $X$ certain quaternionic matrices. These all have a common generalization to $\beta$-ensembles, where the real case corresponds to $\beta = 1$, the complex case to $\beta = 2$, and the quaternionic case to $\beta = 4$.

c. Ginibre ensemble: real Gaussian matrices, with no symmetry, all entries independent. Note that such a matrix is typically not normal. It still has $N$ complex eigenvalues, and its spectral distribution is a measure on $\mathbb{C}$. Asymptotically,
\[
\frac{1}{N} \sum_{i=1}^{N} \delta_{\lambda_i} \to \text{the circular law } \frac{1}{\pi} \mathbf{1}_{|z| \leq 1} \text{ on } \mathbb{C}.
\]
This is an old result for Gaussian matrices; for Wigner-type matrices it was proved by various authors, culminating in the proof by (Tao, Vu 2010) under optimal conditions. In distribution,
\[
(X, X^*) \to (c, c^*),
\]
where $c$ is a circular operator, from Definition 10.5. Note that a circular operator is not normal, so its distribution cannot come from a measure; but its Brown measure is a (rescaled) circular law.
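The circular law is easy to see numerically (an illustration added here, not from the notes): for a real Ginibre matrix with entries $N(0, 1/N)$, almost all eigenvalues land in the unit disk, and the empirical mean of $|\lambda|^2$ approaches $\int |z|^2 \, \frac{1}{\pi}\mathbf{1}_{|z|\le 1} = \frac{1}{2}$. The size and tolerances are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
# Real Ginibre: all entries independent N(0, 1/N), no symmetry imposed.
X = rng.standard_normal((N, N)) / np.sqrt(N)
lam = np.linalg.eigvals(X)

inside = np.mean(np.abs(lam) <= 1.05)   # almost all eigenvalues in the unit disk
second = np.mean(np.abs(lam) ** 2)      # -> 1/2 for the uniform measure on the disk
print(inside, second)
```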


Exercise 11.43. The oldest random matrix model (Wishart 1928) arose in statistics, and its relatives appear in numerous applications, from cell phone communications to medicine. Let $(y_1, \ldots, y_K)^t$ be jointly Gaussian, with $K \times K$ covariance matrix $Q$. For $j = 1, 2, \ldots, N$ (with $N \geq K$) let $(Y_{ij})_{i=1}^{K}$ be independent samples from this Gaussian distribution, and $Y = (Y_{ij})$ a $K \times N$ random matrix whose columns are independent and identically distributed. Then the $N \times N$ matrix $\frac{1}{N} Y^t Y$ is called a Wishart matrix. Note that the $K \times K$ matrix
\[
\frac{1}{N} (Y Y^t)_{ij} = \frac{1}{N} \sum_{k=1}^{N} Y_{ik} Y_{jk}
\]
is the sample covariance matrix for this distribution, so in particular $\mathrm{E}\bigl[\frac{1}{N} Y Y^t\bigr] = Q$, and that the distributions of $Y^t Y$ and $Y Y^t$ are closely related, although they live in different spaces.

a. Suppose that $Q$ is the identity matrix, so that all entries of $Y$ are independent. Show that
\[
\frac{1}{N} Y^t Y = X^t P X,
\]
where $X$ is an $N \times N$ random matrix all of whose entries are independent $N(0, \frac{1}{N})$ random variables, and $P$ is a diagonal $N \times N$ matrix with $K$ ones and $N - K$ zeros on the diagonal.

b. Now let both $N$ and $K$ go to infinity, so that $K/N \to t$, $0 \leq t \leq 1$. According to the preceding remark, $(X, X^t) \to (c, c^*)$. Use a variation of Theorem 11.41 and a combination of Proposition 10.7 with Lemma 8.2 to conclude that the asymptotic distribution of a Wishart matrix with identity covariance is $\pi_t$, the free Poisson distribution, which in this context is called the Marchenko–Pastur distribution.

A similar argument for general $Q$ shows that the limiting distribution of a general Wishart matrix is free compound Poisson. An alternative approach of rectangular free probability (which corresponds to taking limits of rectangular matrices directly) was developed by Benaych-Georges. Compare also with Exercise 22.18 in [NS06].
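The Marchenko–Pastur limit in part b can be probed numerically (a sketch added here, not part of the exercise): the moments of the free Poisson distribution $\pi_t$ are $m_n = \sum_{\pi \in NC(n)} t^{\#\text{blocks}(\pi)}$, so $m_1 = t$ and $m_2 = t + t^2$. With $K/N = 1/4$ these are $0.25$ and $0.3125$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 400, 100                      # K/N -> t = 1/4
t = K / N

Y = rng.standard_normal((K, N))      # identity covariance Q = I_K
W = Y.T @ Y / N                      # N x N Wishart matrix
tr = lambda M: np.trace(M).real / N

m1 = tr(W)          # -> t        (mean of the free Poisson pi_t)
m2 = tr(W @ W)      # -> t + t^2  (second moment of pi_t)
print(m1, m2)
```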

11.5 Asymptotic free independence for unitary random matrices.

Remark 11.44 (Unitary random matrices). $\mathcal{U}_N$ is the group of unitary $N \times N$ complex matrices. On $\mathcal{U}_N$ we have the Haar measure, which is right and left invariant. In particular,
\[
\int_{\mathcal{U}_N} f(U) \, dU = \int_{\mathcal{U}_N} f(V U V^*) \, dU = \int_{\mathcal{U}_N} V f(U) V^* \, dU.
\]
A random $N \times N$ unitary matrix whose distribution is the Haar measure is a Haar unitary matrix. Note that for any $\lambda \in \mathbb{C}$, $|\lambda| = 1$, if $U$ is a random unitary matrix whose distribution is the Haar measure, then
\[
\int_{\mathcal{U}_N} U^n \, dU = \int_{\mathcal{U}_N} (\lambda U)^n \, d(\lambda U) = \lambda^n \int_{\mathcal{U}_N} U^n \, dU,
\]


and so
\[
\int_{\mathcal{U}_N} U^n \, dU = \delta_{n=0}, \qquad n \in \mathbb{Z}.
\]
In particular, in the n.c. probability space $(M_N(L^{\infty-}(\Omega, P)), \mathrm{E}\,\mathrm{tr})$, a Haar unitary matrix is a Haar unitary.

In physics, Haar unitary matrices are called CUE (circular unitary ensemble) matrices.
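A standard recipe for sampling a Haar unitary (Mezzadri's QR trick; an addition here, not part of the notes) is to take the QR decomposition of a complex Ginibre matrix and absorb the phases of the diagonal of $R$ into $Q$. One can then check $\mathrm{E}\,\mathrm{tr}[U^n] = \delta_{n=0}$ numerically; the size and tolerances below are arbitrary.

```python
import numpy as np

def haar_unitary(N, rng):
    """Haar-distributed U in U_N: QR of a complex Ginibre matrix,
    with the phases of R's diagonal absorbed into Q (Mezzadri's recipe)."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(3)
N = 300
U = haar_unitary(N, rng)

unitary_err = np.linalg.norm(U.conj().T @ U - np.eye(N))
t1 = abs(np.trace(U)) / N        # tr U   -> 0
t2 = abs(np.trace(U @ U)) / N    # tr U^2 -> 0
print(unitary_err, t1, t2)
```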

Theorem 11.45. Let $U_1, \ldots, U_p$ be independent $N \times N$ Haar unitary matrices, and $(D_1, \ldots, D_q)$ non-random matrices converging in $*$-distribution to $(d_1, \ldots, d_q) \subset (\mathcal{A}, \varphi)$. Then in $*$-distribution,
\[
(U_1, \ldots, U_p, D_1, \ldots, D_q) \to (u_1, \ldots, u_p, d_1, \ldots, d_q),
\]
where $u_1, \ldots, u_p$ are free Haar unitaries, freely independent from $d_1, \ldots, d_q$.

Remark 11.46. Recall from Exercise 3.12 that if $u_1, \ldots, u_p, d_0, d_1, \ldots, d_p$ are as above, then
\[
d_0,\ u_1 d_1 u_1^*,\ \ldots,\ u_p d_p u_p^*
\]
are free.

Corollary 11.47. Let $A_N, B_N$ be $N \times N$ (non-random) matrices such that $A_N$ and $B_N$ converge in distribution as $N \to \infty$. Let $U_N$ be an $N \times N$ Haar unitary matrix. Then $U_N A_N U_N^*$ and $B_N$ are asymptotically free.

Remark 11.48.

a. Let $X$ be an $N \times N$ random matrix, with matrix-valued distribution $P_X$. We say that $P_X$ (or $X$) is unitarily invariant if for any (fixed) $V \in \mathcal{U}_N$,
\[
\int f(X) \, dP_X = \int f(V X V^*) \, dP_X.
\]

b. For example, if $X$ is a GUE matrix,
\[
dP_X = \frac{1}{Z} e^{-N \operatorname{Tr}[X^2]/2} \, dX
\]
and $\operatorname{Tr}[(V X V^*)^2] = \operatorname{Tr}[X^2]$, so this is a unitarily invariant ensemble (compare with GOE, GSE). More generally, matrix models $\frac{1}{Z} e^{-\operatorname{Tr}[V(X)]} \, dX$ are unitarily invariant.

c. If $X$ is any random matrix, and $U$ is a Haar unitary matrix, then $U^* X U$ is unitarily invariant (by invariance of the Haar measure). Thus the corollary says "independent unitarily invariant matrices are asymptotically free".


d. In particular, asymptotic freeness for unitarily invariant matrices implies asymptotic freeness for Gaussian matrices (Theorem 11.41). Conversely, if $X$ is GUE, its polar decomposition is $X = U (X^* X)^{1/2}$, where $U$ is a Haar unitary matrix. So we could use the Gaussian result to prove the unitary result. We will use combinatorics instead.

e. GUE matrices are the only matrices which are at once Wigner matrices and unitarily invariant.

Remark 11.49. Let
\[
\{\lambda_1, \ldots, \lambda_N\},\ \{\rho_1, \ldots, \rho_N\} \subset \mathbb{R}
\]
be eigenvalue lists, and $A, B$ self-adjoint $N \times N$ matrices with these eigenvalues. What are the eigenvalues of $A + B$?

(i) If $A, B$ commute, they have a common basis of eigenvectors, and
\[
\text{eigenvalues of } (A + B) = \text{eigenvalues of } A + \text{eigenvalues of } B.
\]

(ii) In general, the eigenvalues of $A + B$ depend on $A, B$ and not just on $\{\lambda_i\}, \{\rho_j\}$. They satisfy the Horn inequalities.

(iii) Suppose $A, B$ are in "general position": $B$ is fixed (without loss of generality, diagonal) and $A$ is random,
\[
A = U \, \mathrm{diag}(\lambda_1, \ldots, \lambda_N) \, U^*,
\]
$U$ a Haar unitary matrix. Then if
\[
\mu_A = \frac{1}{N} \sum_{i=1}^{N} \delta_{\lambda_i}, \qquad \mu_B = \frac{1}{N} \sum_{i=1}^{N} \delta_{\rho_i},
\]
then $\mu_{A+B} \approx \mu_A \boxplus \mu_B$ for large $N$. Thus $\mu_{A+B}$ (asymptotically) depends only on $\{\lambda_i\}, \{\rho_j\}$.
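Item (iii) can be illustrated numerically (an addition, not a proof): take $A$ and $B$ each with eigenvalues $\pm 1$ in equal proportion, so $\mu_A = \mu_B$ is the symmetric Bernoulli law. Then $\mu_A \boxplus \mu_B$ is the arcsine law on $[-2, 2]$, whose even moments are the central binomial coefficients: $m_2 = 2$, $m_4 = 6$.

```python
import numpy as np

def haar_unitary(N, rng):
    """Haar unitary via QR of a complex Ginibre matrix (Mezzadri's recipe)."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(4)
N = 400
A = np.diag(np.concatenate([np.ones(N // 2), -np.ones(N // 2)]))
U = haar_unitary(N, rng)
B = U @ A @ U.conj().T               # random rotation puts A, B in general position

lam = np.linalg.eigvalsh(A + B)      # spectrum of A + B
m2 = np.mean(lam ** 2)               # -> 2 = C(2,1)
m4 = np.mean(lam ** 4)               # -> 6 = C(4,2)
print(m2, m4)
```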

Remark 11.50. We will not prove Theorem 11.45. To prove Corollary 11.47, we need to understand expressions of the form
\[
\mathrm{E}\,\mathrm{tr}\bigl[U A^{(1)} U^* B^{(1)} U A^{(2)} U^* B^{(2)} \cdots\bigr],
\]
where $A^{(i)} = A_{u(i)}$, $B^{(i)} = B_{v(i)}$. What is the analog of the Wick formula?

Remark 11.51. Let $U = (U_{ij})_{i,j=1}^{N}$ be an $N \times N$ Haar unitary matrix. For $N \geq n$,
\[
\mathrm{E}\bigl[U_{i(1)j(1)} \cdots U_{i(n)j(n)} \overline{U_{u(1)v(1)}} \cdots \overline{U_{u(n)v(n)}}\bigr] = \sum_{\alpha, \beta \in \mathrm{Sym}(n)} \delta_{i(\beta(1))=u(1)} \cdots \delta_{i(\beta(n))=u(n)} \, \delta_{j(\alpha(1))=v(1)} \cdots \delta_{j(\alpha(n))=v(n)} \, \mathrm{Wg}(N, \beta\alpha^{-1}).
\]


Here for $\beta \in \mathrm{Sym}(n)$ and $N \geq n$,
\[
\mathrm{Wg}(N, \beta) = \mathrm{E}\bigl[U_{11} \cdots U_{nn} \overline{U_{1\beta(1)}} \cdots \overline{U_{n\beta(n)}}\bigr]
\]
is the Weingarten function.

Example 11.52.
\[
\mathrm{E}\bigl[U_{11} U_{12} \overline{U_{22}} \, \overline{U_{12}}\bigr] = 0,
\qquad
\mathrm{E}\bigl[U_{11} U_{12} \overline{U_{12}} \, \overline{U_{11}}\bigr] = \mathrm{Wg}(N, (12)) + \mathrm{Wg}(N, e),
\]
where for the first term $\alpha = e$, $\beta = (12)$, and for the second $\alpha = \beta = (12)$.
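A Monte Carlo check of this example (my addition, not in the notes): for $n = 2$ the two Weingarten values are $\mathrm{Wg}(N, e) = \frac{1}{N^2-1}$ and $\mathrm{Wg}(N, (12)) = -\frac{1}{N(N^2-1)}$, so the second expectation, which is $\mathrm{E}\,|U_{11}|^2 |U_{12}|^2$, equals $\frac{1}{N(N+1)}$.

```python
import numpy as np

def haar_unitary(N, rng):
    """Haar unitary via QR of a complex Ginibre matrix."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(5)
N, samples = 6, 20000
acc = 0.0
for _ in range(samples):
    U = haar_unitary(N, rng)
    acc += (abs(U[0, 0]) ** 2) * (abs(U[0, 1]) ** 2)
est = acc / samples

# Wg(N, (12)) + Wg(N, e) = 1/(N^2-1) - 1/(N(N^2-1)) = 1/(N(N+1))
exact = 1 / (N * (N + 1))
print(est, exact)
```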

Remark 11.53. There is no "useful" formula for Wg. One can write it in terms of characters, or in terms of the inverse of an explicit matrix. It is known that $\mathrm{Wg}(N, \alpha)$ only depends on the conjugacy class (cycle structure) of $\alpha$, and
\[
\mathrm{Wg}(N, \alpha) = \mu(\alpha) \frac{1}{N^{2n - \#\mathrm{cyc}(\alpha)}} + O\left(\frac{1}{N^{2n - \#\mathrm{cyc}(\alpha) + 2}}\right).
\]
Moreover, for $\alpha = P_\sigma$, $\beta = P_\pi \in S_{NC}(n)$, $\mu(\beta) = \mu(0_n, \pi)$ and more generally
\[
\mu(\alpha^{-1}\beta) = \mu(P_\sigma^{-1} P_\pi) = \mu(\sigma, \pi),
\]
the Möbius function on $NC(n)$.


Remark 11.54. Thus we can now compute (with $\gamma = (1\,2\,\cdots\,n)$ the full cycle)
\[
\begin{aligned}
&\mathrm{E}\,\mathrm{tr}\bigl[U A^{(1)} U^* B^{(1)} U A^{(2)} U^* B^{(2)} \cdots U A^{(n)} U^* B^{(n)}\bigr] \\
&\quad= \frac{1}{N} \sum_{\vec{i}, \vec{j}, \vec{u}, \vec{v}} \mathrm{E}\bigl[U_{i(1)j(1)} A^{(1)}_{j(1)v(1)} \overline{U_{u(1)v(1)}} B^{(1)}_{u(1)i(2)} \cdots U_{i(n)j(n)} A^{(n)}_{j(n)v(n)} \overline{U_{u(n)v(n)}} B^{(n)}_{u(n)i(1)}\bigr] \\
&\quad= \frac{1}{N} \sum_{\vec{i}, \vec{j}, \vec{u}, \vec{v}} A^{(1)}_{j(1)v(1)} B^{(1)}_{u(1)i(2)} A^{(2)}_{j(2)v(2)} B^{(2)}_{u(2)i(3)} \cdots A^{(n)}_{j(n)v(n)} B^{(n)}_{u(n)i(1)} \, \mathrm{E}\bigl[U_{i(1)j(1)} \overline{U_{u(1)v(1)}} U_{i(2)j(2)} \overline{U_{u(2)v(2)}} \cdots U_{i(n)j(n)} \overline{U_{u(n)v(n)}}\bigr] \\
&\quad= \frac{1}{N} \sum_{\vec{i}, \vec{j}, \vec{u}, \vec{v}} A^{(1)}_{j(1)v(1)} B^{(1)}_{u(1)i(2)} \cdots A^{(n)}_{j(n)v(n)} B^{(n)}_{u(n)i(1)} \sum_{\alpha, \beta \in \mathrm{Sym}(n)} \delta_{i(\beta(1))=u(1)} \cdots \delta_{i(\beta(n))=u(n)} \, \delta_{j(\alpha(1))=v(1)} \cdots \delta_{j(\alpha(n))=v(n)} \, \mathrm{Wg}(N, \beta\alpha^{-1}) \\
&\quad= \frac{1}{N} \sum_{\alpha, \beta \in \mathrm{Sym}(n)} \mathrm{Wg}(N, \beta\alpha^{-1}) \sum_{\vec{j}} A^{(1)}_{j(1)j(\alpha(1))} \cdots A^{(n)}_{j(n)j(\alpha(n))} \sum_{\vec{u}} B^{(1)}_{u(1)u(\beta^{-1}\gamma(1))} \cdots B^{(n)}_{u(n)u(\beta^{-1}\gamma(n))} \\
&\quad= \frac{1}{N} \sum_{\alpha, \beta \in \mathrm{Sym}(n)} \mathrm{Wg}(N, \beta\alpha^{-1}) \operatorname{Tr}_\alpha\bigl[A^{(1)}, \ldots, A^{(n)}\bigr] \operatorname{Tr}_{\beta^{-1}\gamma}\bigl[B^{(1)}, \ldots, B^{(n)}\bigr] \\
&\quad= \frac{1}{N} \sum_{\alpha, \beta \in \mathrm{Sym}(n)} \mathrm{Wg}(N, \beta\alpha^{-1}) \, N^{\#\mathrm{cyc}(\alpha) + \#\mathrm{cyc}(\beta^{-1}\gamma)} \operatorname{tr}_\alpha\bigl[A^{(1)}, \ldots, A^{(n)}\bigr] \operatorname{tr}_{\beta^{-1}\gamma}\bigl[B^{(1)}, \ldots, B^{(n)}\bigr].
\end{aligned}
\]
Using the Weingarten function asymptotics, the degree of $\frac{1}{N}$ in the leading term is
\[
\begin{aligned}
(2n - \#\mathrm{cyc}(\alpha^{-1}\beta)) + 1 - \#\mathrm{cyc}(\alpha) - \#\mathrm{cyc}(\beta^{-1}\gamma)
&= 2n + 1 - (n - |\alpha^{-1}\beta|) - (n - |\alpha|) - (n - |\beta^{-1}\gamma|) \\
&= |\alpha| + |\alpha^{-1}\beta| + |\beta^{-1}\gamma| - (n - 1).
\end{aligned}
\]
Recall that $|\alpha| + |\alpha^{-1}\beta| + |\beta^{-1}\gamma| \geq n - 1$, with equality only if $\alpha = P_\sigma$, $\beta = P_\pi$, $\alpha \leq \beta$, or equivalently $\sigma \leq \pi$. Thus as $N \to \infty$, the leading term in the expression above is
\[
\sum_{\substack{\sigma, \pi \in NC(n) \\ \sigma \leq \pi}} \mu(P_\sigma^{-1} P_\pi) \operatorname{tr}_\sigma\bigl[A^{(1)}, \ldots, A^{(n)}\bigr] \operatorname{tr}_{K[\pi]}\bigl[B^{(1)}, \ldots, B^{(n)}\bigr] + O\left(\frac{1}{N^2}\right).
\]

Corollary 11.55. Let $(A^{(1)}, \ldots, A^{(n)})$, $(B^{(1)}, \ldots, B^{(n)})$ be $N \times N$ (random or not) matrices such that
\[
(A^{(1)}, \ldots, A^{(n)}) \to (a_1, \ldots, a_n) \in (\mathcal{A}, \varphi), \qquad (B^{(1)}, \ldots, B^{(n)}) \to (b_1, \ldots, b_n) \in (\mathcal{A}, \varphi),
\]
with tracial state $\varphi$, and let $U_N$ be an $N \times N$ Haar unitary matrix. Then as $N \to \infty$,
\[
\mathrm{E}\,\mathrm{tr}\bigl[U A^{(1)} U^* B^{(1)} \cdots U A^{(n)} U^* B^{(n)}\bigr] \to \sum_{\substack{\sigma, \pi \in NC(n) \\ \sigma \leq \pi}} \mu(P_\sigma^{-1} P_\pi) \, \varphi_\sigma[a_1, \ldots, a_n] \, \varphi_{K[\pi]}[b_1, \ldots, b_n].
\]


Sketch of the proof of Corollary 11.47. Say $A_N \to a$, $B_N \to b$ in distribution. To prove asymptotic free independence of $U_N A_N U_N^*$ and $B_N$, we want to show that for any $f_j, g_j$ with $\varphi[f_j(a)] = \varphi[g_j(b)] = 0$, the expression
\[
\mathrm{E}\,\mathrm{tr}\bigl[f_1(U_N A_N U_N^*) g_1(B_N) \cdots g_{n-1}(B_N) f_n(U_N A_N U_N^*)\bigr] = \mathrm{E}\,\mathrm{tr}\bigl[U_N f_1(A_N) U_N^* g_1(B_N) \cdots g_{n-1}(B_N) U_N f_n(A_N) U_N^*\bigr]
\]
goes to zero. This expression asymptotically equals
\[
\sum_{\substack{\sigma, \pi \in NC(n) \\ \sigma \leq \pi}} \mu(P_\sigma^{-1} P_\pi) \, \varphi_\sigma[f_1(a), \ldots, f_n(a)] \, \varphi_{K[\pi]}[g_1(b), \ldots, g_{n-1}(b), 1].
\]
Using Exercise 11.56 below, we conclude that each term in the sum above involves either $\varphi[f_i(a)]$ or $\varphi[g_j(b)]$, and so equals zero.

Exercise 11.56.

a. For any σ ∈ NC(n), the partition σ ∪K[σ] ∈ NC(2n) has exactly two outer blocks.

b. For any outer block of a non-crossing partition, there is an interval block covered by it.

c. If σ ∈ NC(n) has no singletons, and σ ≤ π, then K[π] has at least 2 singletons.


Chapter 12

Operator-valued free probability.

This chapter was omitted in the course, so what follows is only an outline. See [Spe98], [NSS02].

B-valued probability spaces.

Definition 12.1. Let $\mathcal{A}$ be a $*$-algebra, and $\mathcal{B} \subset \mathcal{A}$ a $C^*$-algebra. In particular, $\mathcal{A}$ is a $\mathcal{B}$-bimodule,
\[
\mathcal{B} \cdot \mathcal{A} \subset \mathcal{A}, \qquad \mathcal{A} \cdot \mathcal{B} \subset \mathcal{A}.
\]
Assume that we have a conditional expectation $E : \mathcal{A} \to \mathcal{B}$. That is, $E$ is a positive $\mathcal{B}$-bimodule map,
\[
a \geq 0 \Rightarrow E[a] \geq 0 \text{ in } \mathcal{B}, \qquad E[b_1 a b_2] = b_1 E[a] b_2.
\]
Then $(\mathcal{A}, E)$ is a $\mathcal{B}$-valued n.c. probability space.

Example 12.2. Let $(X, \Sigma, P)$ be a probability space, $\Omega \subset \Sigma$ a sub-$\sigma$-algebra. Take $\mathcal{A} = \Sigma$-measurable functions, $\mathcal{B} = \Omega$-measurable functions. Then the Radon–Nikodym theorem gives a conditional expectation $E : \mathcal{A} \to \mathcal{B}$.

Example 12.3. Let $\mathcal{B}$ be a $C^*$-algebra. Let
\[
\mathcal{A} = \mathcal{B}\langle x \rangle = \mathbb{C}\text{-}\mathrm{Span}\left( b_0 x b_1 x \cdots x b_n : n \geq 0 \right) / (\text{linearity relations}) \simeq \mathcal{B} \oplus \mathcal{B}^{\otimes_{\mathbb{C}} 2} \oplus \mathcal{B}^{\otimes_{\mathbb{C}} 3} \oplus \cdots.
\]

Example 12.4. Let $(\mathcal{C}, \varphi)$ be an n.c. probability space. Then
\[
(\mathcal{A} = M_n(\mathcal{C}), \ \operatorname{tr} \circ \varphi)
\]
is also an n.c. probability space. Let
\[
\mathcal{B} = M_n(\mathbb{C}), \qquad E : \mathcal{A} \to \mathcal{B}, \quad A \mapsto (\varphi[A_{ij}])_{i,j=1}^{n}.
\]
In particular, we may have
\[
\bigl(\mathcal{A} = M_{nk}(\mathbb{C}) = M_n(\mathbb{C}) \otimes M_k(\mathbb{C}), \quad \mathcal{B} = M_n(\mathbb{C}), \quad E = \mathrm{Id} \otimes \operatorname{tr} = \text{partial trace} = \text{block-trace}\bigr).
\]
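The block-trace $E = \mathrm{Id} \otimes \operatorname{tr}$ can be sketched in code (an illustration added here, using the normalized trace on the $k \times k$ blocks); the conditional-expectation properties $E[1] = 1$ and the bimodule identity $E[b_1 A b_2] = b_1 E[A] b_2$, for $b_i \in M_n(\mathbb{C})$ embedded as $b \otimes 1_k$, are checked numerically.

```python
import numpy as np

def block_trace(A, n, k):
    """Partial trace E = Id (x) tr : M_n (x) M_k -> M_n.
    Sends an nk x nk matrix to the n x n matrix of normalized traces
    of its k x k blocks."""
    return np.einsum('ijkj->ik', A.reshape(n, k, n, k)) / k

rng = np.random.default_rng(6)
n, k = 2, 3
A = rng.standard_normal((n * k, n * k))
b1, b2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
I = np.eye(k)

# E is a B-bimodule map for B = M_n(C), embedded as b (x) 1_k:
lhs = block_trace(np.kron(b1, I) @ A @ np.kron(b2, I), n, k)
rhs = b1 @ block_trace(A, n, k) @ b2
ok_bimodule = np.allclose(lhs, rhs)
ok_unital = np.allclose(block_trace(np.eye(n * k), n, k), np.eye(n))
print(ok_bimodule, ok_unital)
```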


Remark 12.5 (Motivation).

(i) Many scalar-valued constructions work.

(ii) Free products with amalgamation.

(iii) Random matrices with independent blocks, random band matrices.

Definition 12.6. Let $X \in (\mathcal{A}, \mathcal{B}, E)$. A ($\mathcal{B}$-valued) distribution of $X$ is a completely positive (c.p.) functional
\[
\mu_X : \mathcal{B}\langle x \rangle \to \mathcal{B}, \qquad \mu_X[b_0 x b_1 \cdots x b_n] = E[b_0 X b_1 \cdots X b_n] = b_0 E[X b_1 \cdots X] b_n.
\]
Similarly for joint distributions.

Remark 12.7 (Generating functions). Moment-generating function: for $b \in \mathcal{B}$,
\[
M_X(b) = E\bigl[(1 - bX)^{-1}\bigr] = \mu_X\bigl[(1 - bx)^{-1}\bigr] = 1 + b E[X] + b E[X b X] + b E[X b X b X] + \cdots.
\]
Note that this involves only the symmetric moments $E[X b X b \cdots b X]$, not the general moments
\[
E[X b_1 X b_2 \cdots b_{n-1} X].
\]
To get full information from generating functions, one needs fully matricial constructions (as studied by Voiculescu 2004, 2010; Belinschi, Popa, Vinnikov 2010, 2011, 2012; Helton, Klep, McCullough 2009, 2010, 2011). There is no analog of Stieltjes inversion.

Functional analysis background

B-valued spectral theory not well developed.

Remark 12.8 (GNS construction). Given $(\mathcal{A}, \mathcal{B}, E)$, we can define a $\mathcal{B}$-valued inner product on $\mathcal{A}$,
\[
\langle a_1, a_2 \rangle = E[a_2^* a_1].
\]
Note that
\[
\langle b_1 a_1 b_2, a_2 b_3 \rangle = E[b_3^* a_2^* b_1 a_1 b_2] = b_3^* E[(b_1^* a_2)^* a_1] b_2 = b_3^* \langle a_1, b_1^* a_2 \rangle b_2.
\]

Remark 12.9 (Tensor products with amalgamation). Let $V_1, \ldots, V_n$ be $\mathcal{B}$-bimodules, i.e. they have commuting actions
\[
\mathcal{B} \times V_i \to V_i, \qquad V_i \times \mathcal{B} \to V_i.
\]
Then
\[
V_1 \otimes_{\mathcal{B}} V_2 \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} V_n = \left\{ \sum_{i=1}^{k} \xi_1^{(i)} \otimes \cdots \otimes \xi_n^{(i)} : \xi_j^{(i)} \in V_j \right\} / (\mathcal{B}\text{-linearity}),
\]
i.e.
\[
\begin{aligned}
\xi_1 \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} (b_1 \xi b_2 + b_3 \eta b_4) \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} \xi_n
&= \xi_1 \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} \xi_{k-1} b_1 \otimes_{\mathcal{B}} \xi \otimes_{\mathcal{B}} b_2 \xi_{k+1} \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} \xi_n \\
&\quad + \xi_1 \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} \xi_{k-1} b_3 \otimes_{\mathcal{B}} \eta \otimes_{\mathcal{B}} b_4 \xi_{k+1} \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} \xi_n.
\end{aligned}
\]


Definition 12.10. Subalgebras $\mathcal{A}_1, \ldots, \mathcal{A}_n \subset (\mathcal{A}, \mathcal{B}, E)$ are freely independent (with amalgamation) over $\mathcal{B}$ if whenever $u(1) \neq u(2) \neq \ldots \neq u(k)$, $a_i \in \mathcal{A}_{u(i)}$, $E[a_i] = 0$ (in $\mathcal{B}$), then
\[
E[a_1 \cdots a_k] = 0.
\]

Remark 12.11 (Free product with amalgamation). $(\mathcal{A}, \mathcal{B}, E) = *_{i=1}^{n} (\mathcal{A}_i, \mathcal{B}, E_i)$. Denote
\[
\mathcal{A}_i^\circ = \{ a \in \mathcal{A}_i : E_i[a] = 0 \}.
\]
Let
\[
W_\emptyset = \mathcal{B}, \qquad W_{\vec{u}} = \mathcal{A}_{u(1)}^\circ \otimes_{\mathcal{B}} \mathcal{A}_{u(2)}^\circ \otimes_{\mathcal{B}} \cdots \otimes_{\mathcal{B}} \mathcal{A}_{u(k)}^\circ.
\]
Then
\[
\mathcal{A} = \bigoplus_{k=0}^{\infty} \bigoplus_{\substack{|\vec{u}| = k \\ u(1) \neq u(2) \neq \ldots \neq u(k)}} W_{\vec{u}},
\]
and $E[1] = 1$, $E[a_1 a_2 \cdots a_k] = 0$ for centered $a_i \in \mathcal{A}_{u(i)}^\circ$, $k \geq 1$.

Complete positivity of the free product.

Lemma 12.12. Let $\mathcal{B}$ be a unital $C^*$-algebra. A matrix $(B_{ij})_{i,j=1}^{n} \in M_n(\mathcal{B})$ is positive (as an element of the $C^*$-algebra) if and only if
\[
\forall\, b_1, b_2, \ldots, b_n \in \mathcal{B}, \qquad \sum_{i,j=1}^{n} b_i^* B_{ij} b_j \geq 0.
\]

Proposition 12.13. Let $\mathcal{B}$ be a unital $C^*$-algebra and $E : \mathcal{A} \to \mathcal{B}$ a $\mathcal{B}$-bimodule map. Then $E$ is positive if and only if it is completely positive.

Proof. One direction is clear. Assume $E$ is a positive $\mathcal{B}$-bimodule map. Let $A \in M_n(\mathcal{A})$ be positive. Without loss of generality $A = B^* B$, $B \in M_n(\mathcal{A})$, i.e.
\[
A_{ij} = \sum_{k=1}^{n} b_{ki}^* b_{kj}, \qquad b_{uv} \in \mathcal{A}.
\]
Then, using notation from Definition 3.21,
\[
(E_n[A])_{i,j=1}^{n} = \sum_{k=1}^{n} \left( E[b_{ki}^* b_{kj}] \right)_{i,j=1}^{n},
\]
so for all $b_1, \ldots, b_n \in \mathcal{B}$,
\[
\sum_{i,j,k} b_i^* E[b_{ki}^* b_{kj}] b_j = \sum_{k=1}^{n} E\left[ \left( \sum_{i=1}^{n} b_{ki} b_i \right)^* \left( \sum_{j=1}^{n} b_{kj} b_j \right) \right] \geq 0 \text{ in } \mathcal{B}.
\]


Theorem 12.14. Free product of completely positive maps is completely positive.

The proof is the same as in Theorem 3.26.

Operator-valued free cumulants.

Let $X_1, \ldots, X_n \in (\mathcal{A}, \mathcal{B}, E)$. Their moment functional is a $\mathbb{C}$-multilinear functional
\[
M[b_0 X_1 b_1, X_2 b_2, \ldots, X_n b_n] = b_0 M[X_1, b_1 X_2, \ldots, b_{n-1} X_n] b_n = E[b_0 X_1 b_1 X_2 b_2 \cdots X_n b_n].
\]
Define the free cumulant functionals
\[
R[b_0 X_1 b_1, X_2 b_2, \ldots, X_n b_n] = b_0 R[X_1, b_1 X_2, \ldots, b_{n-1} X_n] b_n
\]
by
\[
M[b_0 X_1 b_1, X_2 b_2, \ldots, X_n b_n] = \sum_{\pi \in NC(n)} R_\pi[b_0 X_1 b_1, X_2 b_2, \ldots, X_n b_n].
\]
Here $R_\pi$ uses the nesting structure on $NC(n)$. In particular, this will not work for general partitions.

Example 12.15. Figure omitted. Let $\pi = (16)(235)(4)(78)$. Then
\[
R_\pi[b_0 X_1 b_1, X_2 b_2, X_3 b_3, X_4 b_4, X_5 b_5, X_6 b_6, X_7 b_7, X_8 b_8] = b_0 R\bigl[X_1 b_1 R[X_2 b_2, X_3 b_3 R[X_4] b_4 X_5] b_5 X_6\bigr] b_6 R[X_7 b_7, X_8] b_8.
\]

Theorem 12.16. Free independence is equivalent to the vanishing of mixed free cumulants.

Definition 12.17. Free convolution: free cumulants satisfy $R^{\mu \boxplus \nu} = R^{\mu} + R^{\nu}$.

Operator-valued semicircular distributions.

$\mathcal{B}$-valued free cumulants take values in $\mathcal{B}$. In particular,
\[
R_2^X[b] = E[X b X] = \text{variance}
\]
is a completely positive map $R_2^X : \mathcal{B} \to \mathcal{B}$. Thus: let $\eta : \mathcal{B} \to \mathcal{B}$ be a completely positive map. The functional $\mu : \mathcal{B}\langle x \rangle \to \mathcal{B}$ defined by
\[
R^\mu[x, b_1 x, b_2 x, \ldots, b_{n-1} x] =
\begin{cases}
\eta(b_1), & n = 2, \\
0, & n \neq 2
\end{cases}
\]
is the $\mathcal{B}$-valued semicircular distribution with variance $\eta$.


Proposition 12.18. µ is positive.

Proof I. Free central limit theorem.

Proof II. Fock space (module) realization.

Example 12.19. The moments of a semicircular distribution are
\[
E[b_0] = b_0, \qquad E[b_0 X b_1] = 0, \qquad E[b_0 X b_1 X b_2] = b_0 \eta(b_1) b_2,
\]
\[
E[b_0 X b_1 X b_2 X b_3 X b_4] = b_0 \eta(b_1) b_2 \eta(b_3) b_4 + b_0 \eta(b_1 \eta(b_2) b_3) b_4,
\]
etc.

The full Fock module construction.

Let $\mathcal{M}$ be a $\mathcal{B}$-bimodule. A $\mathcal{B}$-valued inner product is a map
\[
\langle \cdot, \cdot \rangle : \mathcal{M} \times \mathcal{M} \to \mathcal{B}
\]
such that
\[
\langle b_1 f b_2, g b_3 \rangle = b_3^* \langle f, b_1^* g \rangle b_2
\]
(compare with Remark 12.8),
\[
\langle f, g \rangle^* = \langle g, f \rangle, \qquad \langle f, f \rangle \geq 0 \text{ in } \mathcal{B}.
\]
The full Fock module $\mathcal{F}(\mathcal{M})$ is the $\mathcal{B}$-bimodule
\[
\mathcal{F}(\mathcal{M}) = \bigoplus_{k=0}^{\infty} \mathcal{M}^{\otimes_{\mathcal{B}} k} = \mathcal{B} \oplus \mathcal{M} \oplus (\mathcal{M} \otimes_{\mathcal{B}} \mathcal{M}) \oplus (\mathcal{M} \otimes_{\mathcal{B}} \mathcal{M} \otimes_{\mathcal{B}} \mathcal{M}) \oplus \cdots,
\]
with the inner product
\[
\langle f_1 \otimes \cdots \otimes f_n, g_1 \otimes \cdots \otimes g_k \rangle = \delta_{n=k} \, \langle g_k \langle g_{k-1} \langle \ldots \langle g_1, f_1 \rangle, f_2 \rangle \ldots, f_{k-1} \rangle, f_k \rangle.
\]

Remark 12.20. If the inner product on $\mathcal{M}$ is positive, it is completely positive, which implies that the inner product on $\mathcal{F}(\mathcal{M})$ is positive.


Now specialize: let $\mathcal{M}$ be the principal $\mathcal{B}$-bimodule, $\mathcal{M} = \mathcal{B}\xi\mathcal{B}$, $\xi$ a symbol. Then
\[
\mathcal{F}(\mathcal{M}) \simeq \mathcal{B} \oplus \mathcal{B}\xi\mathcal{B} \oplus \mathcal{B}\xi\mathcal{B}\xi\mathcal{B} \oplus \cdots \simeq \mathcal{B}\langle \xi \rangle.
\]
A fixed completely positive map $\eta : \mathcal{B} \to \mathcal{B}$ gives an inner product on $\mathcal{M}$:
\[
\langle b_1 \xi b_2, b_3 \xi b_4 \rangle = b_4^* \eta(b_3^* b_1) b_2.
\]
On $\mathcal{F}(\mathcal{M})$, we get the creation operator
\[
\ell^*(b_1 \xi b_2 \xi \cdots b_{n-1} \xi b_n) = \xi b_1 \xi b_2 \xi \cdots b_{n-1} \xi b_n
\]
and the annihilation operator
\[
\ell(b_1 \xi b_2 \xi \cdots b_{n-1} \xi b_n) = \eta(b_1) b_2 \xi \cdots b_{n-1} \xi b_n, \qquad \ell(b_1 \xi) = \eta(b_1) 1 \in \mathcal{B}.
\]
If we define $X = \ell^* + \ell$, then with respect to the conditional expectation $\langle \cdot\, 1, 1 \rangle$, $X$ has the $\mathcal{B}$-valued semicircular distribution with variance $\eta$.

Convolution powers.

Definition 12.21. $\mu : \mathcal{B}\langle x \rangle \to \mathcal{B}$ is $\boxplus$-infinitely divisible if for all $t \geq 0$, $t R^\mu = R^{\mu_t}$ for some positive $\mu_t$. In that case write $\mu_t = \mu^{\boxplus t}$. But remember: the free cumulants $R^{\mu}$ take values in $\mathcal{B}$. So we can ask: if $\eta : \mathcal{B} \to \mathcal{B}$ is completely positive, is there a $\mu^{\boxplus \eta}$ such that
\[
R^{\mu^{\boxplus \eta}} = \eta(R^\mu)?
\]

Theorem 12.22.

a. For any c.p. $\mu$ and c.p. $\eta$, $\mu^{\uplus \eta}$ is well defined.

b. If $\mu$ is $\boxplus$-infinitely divisible (i.e. $\mu^{\boxplus t} \geq 0$ for all $t \geq 0$), then automatically $\mu^{\boxplus \eta} \geq 0$ for any c.p. $\eta$.

c. For any c.p. $\mu$ and c.p. $\eta$, $\mu^{\boxplus (1+\eta)} \geq 0$.

Proof. (a), (b) are proved using a Fock module construction. For (c), Method I (Anshelevich, Belinschi, Février, Nica 2011) is to define the map $\mathbb{B}_\eta$ combinatorially as in Definition 9.10, prove that it is c.p., and define
\[
\mu^{\boxplus (1+\eta)} = \bigl( \mathbb{B}_\eta[\mu] \bigr)^{\uplus (1+\eta)}.
\]
Method II (Shlyakhtenko 2012) is to use an operator-valued version of Voiculescu's operator model from Proposition 6.20.


Relation between different operator-valued cumulants.

If $\mathcal{D} \subset \mathcal{B} \subset \mathcal{A}$, what is the relation between $\mathcal{D}$- and $\mathcal{B}$-valued free cumulants? Between $\mathcal{D}$- and $\mathcal{B}$-valued freeness?

Theorem 12.23. Let $1 \in \mathcal{D} \subset \mathcal{B} \subset \mathcal{A}$, with $(\mathcal{A}, \mathcal{B}, E)$, $(\mathcal{B}, \mathcal{D}, \mathcal{E})$ n.c. probability spaces. Let $a_1, \ldots, a_k \in \mathcal{A}$. Suppose that for all $d_1, \ldots, d_{n-1} \in \mathcal{D}$,
\[
R^{\mathcal{B}}\bigl[a_{u(1)} d_1, a_{u(2)} d_2, \ldots, a_{u(n)}\bigr] \in \mathcal{D}.
\]
Then
\[
R^{\mathcal{B}}\bigl[a_{u(1)} d_1, a_{u(2)} d_2, \ldots, a_{u(n)}\bigr] = R^{\mathcal{D}}\bigl[a_{u(1)} d_1, a_{u(2)} d_2, \ldots, a_{u(n)}\bigr].
\]

Theorem 12.24. Let $1 \in \mathcal{D} \subset \mathcal{B}, \mathcal{N} \subset \mathcal{A}$, with $(\mathcal{A}, \mathcal{B}, E)$, $(\mathcal{B}, \mathcal{D}, \mathcal{E})$ n.c. probability spaces. Suppose $\mathcal{E} : \mathcal{B} \to \mathcal{D}$ is faithful, i.e.
\[
\bigl( \forall\, b_2 \in \mathcal{B}: \ \mathcal{E}[b_1 b_2] = 0 \bigr) \ \Rightarrow\ b_1 = 0.
\]
Then the following are equivalent.

a. $\mathcal{B}$ and $\mathcal{N}$ are free with amalgamation over $\mathcal{D}$.

b. For any $n_1, \ldots, n_k \in \mathcal{N}$ and $b_1, \ldots, b_{k-1} \in \mathcal{B}$,
\[
R^{\mathcal{B}}[n_1 b_1, n_2 b_2, \ldots, n_{k-1} b_{k-1}, n_k] = R^{\mathcal{D}}[n_1 \mathcal{E}[b_1], \ldots, n_{k-1} \mathcal{E}[b_{k-1}], n_k].
\]

Remark 12.25 (Applications). Suppose $X, Y \in (\mathcal{A}, \varphi)$ are not free. How can one compute $\mu_{X+Y}$? Sometimes we have $(\mathcal{A}, \mathcal{B}, E)$ with $\varphi = \varphi \circ E$, and $X, Y$ free with respect to $E$. Then we can compute
\[
\mu^{\mathcal{B}}_{X+Y} : \mathcal{B}\langle x \rangle \to \mathcal{B}
\]
by $\mu^{\mathcal{B}}_{X+Y} = \mu^{\mathcal{B}}_X \boxplus \mu^{\mathcal{B}}_Y$. Then we can also compute
\[
\mu_{X+Y}[x^n] = \varphi\bigl( \mu^{\mathcal{B}}_{X+Y}[x^n] \bigr) \in \mathbb{C}.
\]


Chapter 13

A very brief survey of analytic results.

See [VDN92], [Voi02].

Another combinatorial topic we did not cover is the connection between free probability and asymptotic representation theory of the symmetric and unitary groups, starting with the work of Biane (1995, 1998) and continued by Féray, Lassalle, Śniady and many others.

13.1 Complex analytic methods.

Let $\mu$ be an arbitrary probability measure on $\mathbb{R}$. Its Cauchy transform is
\[
G_\mu(z) = \int_{\mathbb{R}} \frac{1}{z - x} \, d\mu(x).
\]

Lemma 13.1. $G_\mu$ is an analytic function
\[
G_\mu : \mathbb{C}^+ \to \mathbb{C}^- \cup \mathbb{R},
\]
where $\mathbb{C}^+ = \{ z \in \mathbb{C} : \Im z > 0 \}$.

Proof.
\[
\Im G_\mu(z) = \int_{\mathbb{R}} \Im \frac{1}{z - x} \, d\mu(x) = \int_{\mathbb{R}} \frac{-\Im z}{|z - x|^2} \, d\mu(x).
\]

Remark 13.2. $G_\mu$ is invertible in a Stolz angle
\[
\{ z \in \mathbb{C}^+ : |z| > \alpha, \ \Im z > \beta |\Re z| \}.
\]
$G_\mu^{-1}$ is defined on
\[
\{ z \in \mathbb{C}^+ : |z| < \alpha, \ \Im z > \beta |\Re z| \}.
\]
Figure omitted. On such a domain, define the R-transform by
\[
R_\mu(z) = G_\mu^{-1}(z) - \frac{1}{z}.
\]


Theorem 13.3. For $\mu, \nu$ arbitrary probability measures, one can define $\mu \boxplus \nu$ such that
\[
R_\mu + R_\nu = R_{\mu \boxplus \nu}
\]
on a domain as above. Key point: $\mu \boxplus \nu$ is again positive.

Exercise 13.4. Let $\{\gamma_t : t \geq 0\}$ be the free convolution semigroup of semicircular distributions; recall that $R_{\gamma_t}(z) = tz$. Let $\mu$ be an arbitrary probability measure, and denote
\[
G(z, t) = G_{\mu \boxplus \gamma_t}(z).
\]
Prove Voiculescu's evolution equation:
\[
\partial_t G(z, t) + G(z, t) \, \partial_z G(z, t) = 0.
\]
What is the corresponding result for a general free convolution semigroup $\{\nu_t : t \geq 0\}$?

Multiplicative free convolution

Let $\mu, \nu$ be probability measures on $\mathbb{R}^+$, and $X, Y$ freely independent, possibly unbounded, operators with distributions $\mu, \nu$. Define the multiplicative free convolution by
\[
\mu \boxtimes \nu = \mu_{X^{1/2} Y X^{1/2}} = \mu_{Y^{1/2} X Y^{1/2}}.
\]
Or: let $\mu, \nu$ be probability measures on $\mathbb{T}$, and $X, Y$ freely independent unitary operators with distributions $\mu, \nu$. Define
\[
\mu \boxtimes \nu = \mu_{XY}.
\]
How to compute? Define
\[
\psi_\mu(z) = \int_{\mathbb{R}^+ \text{ or } \mathbb{T}} \frac{xz}{1 - xz} \, d\mu(x) = M_\mu(z) - 1,
\]
$\chi_\mu(z) = \psi_\mu^{-1}(z)$ (on a domain), and the S-transform
\[
S_\mu(z) = \frac{1 + z}{z} \chi_\mu(z).
\]
Then
\[
S_{\mu \boxtimes \nu}(z) = S_\mu(z) S_\nu(z).
\]
But note: we constructed a multivariate R-transform. There is no good multivariate S-transform, although there are attempts.


Example 13.5. Let
\[
\mu = (1 - \alpha)\delta_0 + \alpha\delta_1, \qquad \nu = (1 - \beta)\delta_0 + \beta\delta_1.
\]
Recall: if $p, q$ are projections with $\varphi[p] = \alpha$, $\varphi[q] = \beta$, in Proposition 8.4 we computed the distribution of $pqp$ using R-transforms. Using S-transforms:
\[
\psi_\mu(z) = \frac{\alpha z}{1 - z}, \qquad \chi_\mu(z) = \frac{z}{\alpha + z}, \qquad S_\mu(z) = \frac{1 + z}{\alpha + z}.
\]
Thus
\[
S_{\mu \boxtimes \nu}(z) = \frac{(1 + z)^2}{(\alpha + z)(\beta + z)}, \qquad \chi_{\mu \boxtimes \nu}(z) = \frac{z(1 + z)}{(\alpha + z)(\beta + z)},
\]
\[
\psi_{\mu \boxtimes \nu}(z) = \frac{1 - z(\alpha + \beta) + \sqrt{(az - 1)(bz - 1)}}{2(z - 1)},
\]
where
\[
a, b = \alpha + \beta - 2\alpha\beta \pm \sqrt{4\alpha\beta(1 - \alpha)(1 - \beta)}.
\]
Then $G(z) = \frac{1}{z}\left(1 + \psi\left(\frac{1}{z}\right)\right)$ gives
\[
G_{\mu \boxtimes \nu}(z) = \frac{1}{z} + \frac{z - (\alpha + \beta) + \sqrt{(z - a)(z - b)}}{2(1 - z)z},
\]
and one can find $\mu \boxtimes \nu$ using Stieltjes inversion.
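A random-matrix sanity check of this example (added here; take $\alpha = \beta = \frac{1}{2}$): with $P$ a projection of normalized trace $\frac{1}{2}$ and $Q = U P U^*$ for a Haar unitary $U$, asymptotic freeness gives $\varphi[pqp] = \alpha\beta = \frac{1}{4}$ and, by traciality, $\varphi[(pqp)^2] = \varphi[pqpq] = \alpha^2\beta + \alpha\beta^2 - \alpha^2\beta^2 = \frac{3}{16}$. The empirical spectral moments of $PQP$ should approach these values.

```python
import numpy as np

def haar_unitary(N, rng):
    """Haar unitary via QR of a complex Ginibre matrix."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(7)
N = 400
P = np.diag(np.concatenate([np.ones(N // 2), np.zeros(N // 2)]))
U = haar_unitary(N, rng)
Q = U @ P @ U.conj().T            # asymptotically free from P

M = P @ Q @ P
tr = lambda X: np.trace(X).real / N
m1 = tr(M)        # -> 1/4  = alpha * beta
m2 = tr(M @ M)    # -> 3/16 = a^2 b + a b^2 - a^2 b^2 at alpha = beta = 1/2
print(m1, m2)
```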

Free infinite divisibility

Recall: a free convolution semigroup is a family of probability measures $\{\mu_t : t \geq 0\}$ such that
\[
R_{\mu_t} = t R_\mu.
\]
Recall: if the moments of $\mu_t$ are finite, we have
\[
r_1[\mu_t] = t\lambda, \qquad r_n[\mu_t] = t\, m_{n-2}[\rho],
\]
in other words
\[
R_{\mu_t}(z) = t\left( \lambda + \rho\left[ \sum_{n=2}^{\infty} x^{n-2} z^{n-1} \right] \right) = t\left( \lambda + \int_{\mathbb{R}} \frac{z}{1 - xz} \, d\rho(x) \right),
\]
where $\rho$ is a finite positive measure. This formula still holds for general $\mu$ with finite variance (Kolmogorov representation). In general, we have a free Lévy–Khinchin representation
\[
R_{\mu_t}(z) = t\left( \lambda + \int_{\mathbb{R}} \frac{z + x}{1 - xz} \, d\rho(x) \right).
\]


It follows that
\[
\mathrm{ID}^{\boxplus} = \left\{ \nu^{\lambda,\rho} : \lambda \in \mathbb{R},\ \rho \text{ a finite Borel measure} \right\}.
\]
One can also define $\nu_*^{\lambda,\rho}$ via its Fourier transform
\[
\mathcal{F}_{\nu_*^{\lambda,\rho}}(\theta) = \exp\left( i\lambda\theta + \int_{\mathbb{R}} \bigl(e^{ix\theta} - 1 - ix\theta\bigr) \frac{x^2 + 1}{x^2} \, d\rho(x) \right).
\]
These are precisely all the classically infinitely divisible distributions. But moreover:

Theorem 13.6 (Bercovici, Pata 1999). Let $\{\mu_n : n \in \mathbb{N}\}$ be probability measures, and $k_1 < k_2 < \ldots$ integers. The following are equivalent:

a. $\underbrace{\mu_n * \mu_n * \cdots * \mu_n}_{k_n} \to \nu_*^{\lambda,\rho}$ weakly.

b. $\underbrace{\mu_n \boxplus \mu_n \boxplus \cdots \boxplus \mu_n}_{k_n} \to \nu^{\lambda,\rho}$ weakly.

Recall, however, that there does not exist a bijection $T$ such that
\[
T(\mu * \nu) = T(\mu) \boxplus T(\nu).
\]
The theorem above has many generalizations: to $\uplus$, $\boxtimes$, the non-identically distributed case, etc.

An alternative description of $\boxplus$-infinitely divisible distributions is that they are precisely those $\mu$ whose R-transforms $R_\mu$ extend to an analytic function $\mathbb{C}^+ \to \mathbb{C}^+$.

Stable distributions

$\mu$ is $\boxplus$-stable if $\mu \boxplus \mu$ equals $\mu$ up to a shift and rescaling. For example, for $\mu$ semicircular,
\[
\mu \boxplus \mu = D_{\sqrt{2}}\, \mu.
\]
There is a complete description. Again, $\boxplus$-stable distributions are in a bijection with $*$-stable distributions, and their domains of attraction coincide. Except for the Gaussian/semicircular, stable distributions do not have finite moments.

Example 13.7. Define $\mu$ by $R_\mu(z) = \alpha$, $\alpha \in \mathbb{C}$. Note $\mu \boxplus \mu = D_2 \mu$, so $\mu$ is $\boxplus$-stable.
\[
G_\mu^{-1}(w) = \frac{1}{w} + \alpha, \qquad G_\mu(z) = \frac{1}{z - \alpha}.
\]
If $\alpha$ is real, then $\mu = \delta_\alpha$. But if $\alpha$ is purely imaginary, $\alpha = -ia$ with $a > 0$, then
\[
\Im \frac{1}{z + ia} = \frac{-(\Im z + a)}{(\Re z)^2 + (\Im z + a)^2}
\]


and Stieltjes inversion gives
\[
d\mu(x) = \frac{1}{\pi} \frac{a}{x^2 + a^2} \, dx.
\]
These are the Cauchy distributions. They have the very special property that
\[
\mu \boxplus \mu = \mu * \mu;
\]
in particular, they are both $\boxplus$- and $*$-stable.
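Both halves of this example are easy to probe numerically (an illustration added here, not in the notes): Stieltjes inversion of $G(z) = \frac{1}{z + ia}$ recovers the Cauchy density, and on the classical side the sum of two independent Cauchy($a$) samples is Cauchy($2a$), so $|X + Y| < 2a$ with probability exactly $\frac{1}{2}$.

```python
import math, random

a = 1.0

# Stieltjes inversion: -(1/pi) Im G(x + i eps) with G(z) = 1/(z + i a)
# should recover the density a / (pi (x^2 + a^2)).
eps = 1e-8
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    G = 1 / complex(x, eps + a)          # G(x + i eps) = 1/(x + i(eps + a))
    density = -G.imag / math.pi
    exact = a / (math.pi * (x * x + a * a))
    assert abs(density - exact) < 1e-6

# Classical side: Cauchy(a) + Cauchy(a) (independent) is Cauchy(2a),
# whose quartiles sit at +/- 2a.
random.seed(0)
def cauchy(a):
    return a * math.tan(math.pi * (random.random() - 0.5))

n = 100000
frac = sum(abs(cauchy(a) + cauchy(a)) < 2 * a for _ in range(n)) / n
print(frac)   # close to 1/2
```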

13.2 Free entropy.

The description in this section is very brief and imprecise, and is meant to give only a very vague idea ofthe field. It should not be used as a reference.

Microstates free entropy

Remark 13.8 (Boltzmann entropy). A macrostate of the system (say, an ideal gas) is described by some macro parameters, such as temperature, pressure, or volume. A microstate of the system is the complete description of the positions and velocities of all molecules in the system. The entropy of a macrostate measures the amount of microstates compatible with it.

Remark 13.9. In this remark we give a heuristic derivation of the classical entropy formula. Let $\mu$ be a measure with density $f$, and let $F$ be its cdf, i.e. the primitive of $f$. Assume that $\mu$ is supported in an interval $[a, b]$. Fix $n$, and for $0 \leq i \leq n$ let $a_i = F^{-1}(i/n)$. Now look at all the atomic measures
\[
\frac{1}{n} (\delta_{x_1} + \delta_{x_2} + \ldots + \delta_{x_n})
\]
with $a_{i-1} \leq x_i \leq a_i$.

On the one hand, both the $k$-th moment of this measure and that of $\mu$ lie in
\[
\left[ \frac{1}{n} \sum_{i=0}^{n-1} a_i^k, \ \frac{1}{n} \sum_{i=1}^{n} a_i^k \right].
\]
Therefore the difference between them is at most $(1/n)(a_n^k - a_0^k) = (1/n)(b^k - a^k)$. So for a fixed $k$, this can be made arbitrarily small by choosing sufficiently big $n$.

On the other hand, the volume of the set of these atomic measures is
\[
V_n = \prod_{i=1}^{n} (a_i - a_{i-1}) = \prod_{i=1}^{n} \left( F^{-1}\left(\frac{i}{n}\right) - F^{-1}\left(\frac{i-1}{n}\right) \right)
\]


and so its logarithm is
\[
\sum_{i=1}^{n} \log\left( F^{-1}\left(\frac{i}{n}\right) - F^{-1}\left(\frac{i-1}{n}\right) \right).
\]
Now
\[
F^{-1}\left(\frac{i}{n}\right) - F^{-1}\left(\frac{i-1}{n}\right) \approx \frac{1}{n} (F^{-1})'\left(\frac{i}{n}\right),
\]
so the above expression is approximately
\[
\log V_n \approx \sum_{i=1}^{n} \log\left((F^{-1})'\right)\left(\frac{i}{n}\right) - n \log n.
\]
Therefore as $n$ goes to infinity,
\[
\frac{1}{n} \log(V_n) + \log(n) \to \int \log\bigl((F^{-1})'(x)\bigr) \, dx = -\int \log\bigl(f(F^{-1}(x))\bigr) \, dx = -\int \log(f(x)) \, dF(x).
\]
See (Voiculescu 1993) for the corresponding heuristics in the single-variable free entropy case.
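This heuristic is easy to test numerically (an added example, with my choice of density): for $f(x) = 2x$ on $[0,1]$ we have $F^{-1}(u) = \sqrt{u}$, and the entropy is $-\int_0^1 2x \log(2x)\, dx = \frac{1}{2} - \log 2 \approx -0.1931$.

```python
import math

# Density f(x) = 2x on [0,1]:  F(x) = x^2,  F^{-1}(u) = sqrt(u),
# entropy  -int f log f = 1/2 - log 2.
n = 200000
a = [math.sqrt(i / n) for i in range(n + 1)]       # a_i = F^{-1}(i/n)
log_Vn = sum(math.log(a[i] - a[i - 1]) for i in range(1, n + 1))
approx = log_Vn / n + math.log(n)                  # (1/n) log V_n + log n

exact = 0.5 - math.log(2)
print(approx, exact)
```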

Remark 13.10 (Gaussian maximization). Now look at all the atomic measures with second moment equal to 1, that is, $\sum_{i=1}^{n} x_i^2 = n$. The volume of this set is related to the area of the sphere of radius $\sqrt{n}$ in $\mathbb{R}^n$,
\[
A_n = 2 \pi^{n/2} \Gamma(n/2)^{-1} n^{(n-1)/2}.
\]
The corresponding normalized logarithmic volume converges to $\frac{1}{2}\log(2\pi) + \frac{1}{2}$, which is precisely the entropy of the standard Gaussian distribution. Thus (1) almost every such atomic measure is an approximant to the Gaussian distribution, and (2) any other distribution with variance 1 has entropy smaller than the Gaussian. Compare with the properties of the semicircular distribution below.
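The claimed limit can be checked directly with `math.lgamma` (a sketch added here): $\frac{1}{n} \log A_n \to \frac{1}{2}\log(2\pi) + \frac{1}{2}$.

```python
import math

def log_sphere_area(n):
    """log of A_n = 2 pi^(n/2) Gamma(n/2)^(-1) n^((n-1)/2),
    the area of the sphere of radius sqrt(n) in R^n."""
    return (math.log(2) + (n / 2) * math.log(math.pi)
            - math.lgamma(n / 2) + ((n - 1) / 2) * math.log(n))

limit = 0.5 * math.log(2 * math.pi) + 0.5   # entropy of the standard Gaussian
val = log_sphere_area(10**6) / 10**6
print(val, limit)
```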

Definition 13.11. Given $X_1, \ldots, X_n \in (\mathcal{A}, \tau)$ in a tracial n.c. probability space, what is the volume of the set of $N \times N$ self-adjoint matrices whose joint distribution is close to $\mu_{X_1, \ldots, X_n}$? Define the matricial microstates
\[
\Gamma(X_1, \ldots, X_n; m, N, \varepsilon) = \left\{ A_1, \ldots, A_n \in M_N^{sa}(\mathbb{C}) : \bigl| \tau\bigl[X_{u(1)} \cdots X_{u(p)}\bigr] - \operatorname{tr}\bigl[A_{u(1)} \cdots A_{u(p)}\bigr] \bigr| < \varepsilon \text{ for } p \leq m \right\}
\]
(cutoff omitted for simplicity). The free entropy of $(X_1, \ldots, X_n)$ with respect to $\tau$ is
\[
\chi(X_1, \ldots, X_n) = \inf_{m \in \mathbb{N}} \inf_{\varepsilon > 0} \limsup_{N \to \infty} \left( \frac{1}{N^2} \log \operatorname{vol} \Gamma(X_1, \ldots, X_n; m, N, \varepsilon) + \frac{n}{2} \log N \right).
\]

Remark 13.12. Is $\Gamma(X_1, \ldots, X_n; m, N, \varepsilon)$ always non-empty? This question is equivalent to the Connes embedding problem.

Remark 13.13. If one could replace the $\limsup_{N \to \infty}$ by $\lim_{N \to \infty}$, many results would follow.

Remark 13.14 (Properties of χ).


a. (One-variable formula) If $\mu = \mu_X$,
\[
\chi(X) = \iint \log|s - t| \, d\mu(s) \, d\mu(t) + \frac{3}{4} + \frac{1}{2} \log 2\pi,
\]
the negative of the logarithmic energy, up to constants.

b. $\chi(X_1, \ldots, X_n)$ is either finite or $-\infty$.

c. (Subadditivity)
\[
\chi(X_1, \ldots, X_{m+n}) \leq \chi(X_1, \ldots, X_m) + \chi(X_{m+1}, \ldots, X_{m+n}).
\]

d. (Free independence) Let $\chi(X_j) > -\infty$, $1 \leq j \leq n$. Then
\[
\chi(X_1, \ldots, X_n) = \chi(X_1) + \ldots + \chi(X_n)
\]
if and only if $X_1, \ldots, X_n$ are freely independent. Remark: this is known only for individual variables, not for subsets.

e. Normalize $\tau[X_j^2] = 1$, $1 \leq j \leq n$. Then $\chi(X_1, \ldots, X_n)$ is maximal if and only if $X_1, \ldots, X_n$ are free semicircular.

Free entropy dimension

Definition 13.15. Let $(X_1, \ldots, X_n) \in (\mathcal{A}, \tau)$, and let $S_1, \ldots, S_n$ be free semicircular, free from $X_1, \ldots, X_n$. The free entropy dimension is
\[
\delta(X_1, \ldots, X_n) = n + \limsup_{\varepsilon \downarrow 0} \frac{\chi(X_1 + \varepsilon S_1, \ldots, X_n + \varepsilon S_n)}{|\log \varepsilon|}.
\]
There are many related versions.

Question 13.16. Is $\chi$ or $\delta$ a von Neumann algebra invariant? That is, if $\mathcal{A} = W^*(X_1, \ldots, X_n) = W^*(Y_1, \ldots, Y_k)$, is $\chi(X_1, \ldots, X_n) = \chi(Y_1, \ldots, Y_k)$? This would solve the isomorphism problem. It is known that (a version of) $\delta$ is an algebra invariant.

Remark 13.17 (Extra properties of free dimension).

a. $\delta(\text{free semicircular } n\text{-tuple}) = n$.

b. For any set of generators, $\delta(n \times n \text{ matrix algebra}) = 1 - \frac{1}{n^2}$.

Question 13.18. If $\delta(X_1, \ldots, X_n) = n$, does it follow that $W^*(X_1, \ldots, X_n) \simeq L(F_n)$?

Answer: no (Nate Brown 2005).


Remark 13.19 (Major applications).

a. Absence of Cartan subalgebras (Voiculescu 1996): if $\mathcal{M}$ has a Cartan subalgebra, then $\delta(\mathcal{M}) \leq 1$. So $L(F_n)$ does not come from a measurable equivalence relation.

b. Primality (Ge 1998): if $\mathcal{M} \simeq \mathcal{M}_1 \overline{\otimes} \mathcal{M}_2$ for infinite-dimensional $\mathcal{M}_1, \mathcal{M}_2$, then $\delta(\mathcal{M}) \leq 1$. So $L(F_n)$ is prime.

There are extensions by Popa and his school.

Non-microstates free entropy

Recall from Definition 7.6: the free difference quotient
\[
\partial_{X_i}(X_{u(1)} \cdots X_{u(k)}) = \sum_{j :\, u(j) = i} X_{u(1)} \cdots X_{u(j-1)} \otimes X_{u(j+1)} \cdots X_{u(k)}.
\]
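Since the free difference quotient acts on monomials purely combinatorially, it is easy to implement directly. A minimal sketch (the representation of monomials as index tuples and the function name `partial` are my choices, not notation from the notes):

```python
# A monomial X_{u(1)} ... X_{u(k)} is a tuple of indices, the empty
# tuple is 1, and a simple tensor is a pair (left, right).
# (Representation and names are mine, not from the notes.)

def partial(i, word):
    """List of simple tensors in the free difference quotient d_{X_i}."""
    return [(word[:j], word[j + 1:])
            for j, letter in enumerate(word) if letter == i]

# d_{X_1}(X_1 X_2 X_1) = 1 (x) X_2 X_1  +  X_1 X_2 (x) 1
print(partial(1, (1, 2, 1)))  # [((), (2, 1)), ((1, 2), ())]
```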

For X_1, ..., X_n ∈ (A, τ), define their conjugate variables ξ_1, ..., ξ_n by
\[
(\tau \otimes \tau)\left[\partial_{X_i} P(X_1, \ldots, X_n)\right] = \tau\left[\xi_i P(X_1, \ldots, X_n)\right]
\]
(the Schwinger-Dyson equations). Typically ξ_i is not a polynomial; sometimes ξ_i ∈ L²(A, τ) exists. Then define the free Fisher information
\[
\Phi^*(X_1, \ldots, X_n) = \sum_{i=1}^n \lVert \xi_i \rVert_2^2 = \sum_{i=1}^n \tau[\xi_i^* \xi_i]
\]
(actually, a relative version of it).
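For a single standard semicircular S the conjugate variable is ξ = S itself, so the Schwinger-Dyson equation on monomials P = S^k becomes exactly the Catalan recursion for the semicircle moments. A quick check (my sketch; the helper names are mine):

```python
import math

# For a single standard semicircular S the conjugate variable is S
# itself, so the Schwinger-Dyson equation on P = S^k reads
#   m_{k+1} = sum_{j=0}^{k-1} m_j * m_{k-1-j},   m_r = tau[S^r],
# which is the Catalan recursion: m_{2n} = Catalan(n), odd moments
# vanish.  (Helper names are mine.)

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

def moment(r):
    return catalan(r // 2) if r % 2 == 0 else 0

for k in range(1, 10):
    lhs = moment(k + 1)
    rhs = sum(moment(j) * moment(k - 1 - j) for j in range(k))
    assert lhs == rhs, (k, lhs, rhs)
print("Schwinger-Dyson verified for S^k, k = 1..9")
```

In particular Φ*(S) = τ[S²] = 1 for a standard semicircular.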

Remark 13.20 (Properties).

a. There is an explicit formula in the single-variable case.

b. Superadditivity.

c. Free additivity for subsets.

d. For fixed variance, the minimum is achieved when the X_i are free semicircular.

Define the non-microstates free entropy
\[
\chi^*(X_1, \ldots, X_n) = \frac{1}{2} \int_0^\infty \left( \frac{n}{1+t} - \Phi^*(X_1 + t^{1/2} S_1, \ldots, X_n + t^{1/2} S_n) \right) dt + \frac{n}{2} \log 2\pi e,
\]
where the S_i are free semicircular.
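As an illustration (my computation, not from the notes): for a single semicircular X of variance σ², Φ*(X) = 1/σ² and X + t^{1/2}S is semicircular of variance σ² + t, so the integral evaluates to (1/2) log σ² and χ* = (1/2) log(2πeσ²), matching the microstates value for the semicircle. A numerical check of the integral (helper name mine):

```python
import math

# Illustration (my computation, not from the notes): for one
# semicircular X of variance s2, Phi*(X) = 1/s2, and X + t^{1/2} S is
# semicircular of variance s2 + t.  The chi* integrand is then
#   1/(1 + t) - 1/(s2 + t),
# whose integral over [0, infinity) is log(s2), so
#   chi* = (1/2) log(s2) + (1/2) log(2 pi e) = (1/2) log(2 pi e s2).
# Midpoint rule on a logarithmic grid (t = e^u - 1) checks this.

def chi_star_semicircular(s2, T=1e8, steps=200_000):
    U = math.log1p(T)
    h = U / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        t = math.expm1(u)
        total += (1.0 / (1.0 + t) - 1.0 / (s2 + t)) * math.exp(u) * h
    return 0.5 * total + 0.5 * math.log(2.0 * math.pi * math.e)

for s2 in (0.5, 1.0, 2.0):
    print(s2, chi_star_semicircular(s2),
          0.5 * math.log(2.0 * math.pi * math.e * s2))
```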

χ* has properties similar to χ. In particular, χ(X) = χ*(X) for a single variable.

Question 13.21. Is χ = χ∗?

Theorem 13.22 (Biane, Capitaine, Guionnet 2003). χ ≤ χ∗.


13.3 Operator algebras.

The description in this section is very brief and imprecise, and is meant to give only a very vague idea of the field. It should not be used as a reference.

The von Neumann algebras of all countable, amenable groups with the infinite conjugacy class property are the same, namely the hyperfinite II₁-factor. Similarly,
\[
W^*\bigl((M_2(\mathbb{C}), \mathrm{tr})^{\otimes \infty}\bigr) \simeq W^*\bigl((M_3(\mathbb{C}), \mathrm{tr})^{\otimes \infty}\bigr).
\]
This is why a positive answer to the isomorphism problem (whether L(F_2) ≅ L(F_3)) is not unreasonable.

Exercise 11.20 allows us to combine free semicircular and circular operators into a single semicircular matrix, while the results from Lecture 14 of [NS06] allow us to cut up a semicircular element by a family of matrix units.

Definition 13.23. If M is a II₁-factor and p ∈ M is a projection with τ[p] = α, the amplification of M is
\[
M_\alpha = pMp.
\]
M_α depends only on α and not on p. This can also be done for α > 1.

Theorem 13.24. If N ≥ 2 and n ≥ 2, then L(F_n) ≅ M_N(\mathbb{C}) ⊗ L(F_{N²(n−1)+1}), so that
\[
(L(F_n))_{1/N} \simeq L(F_{N^2(n-1)+1}).
\]
Idea of proof: realize L(F_n) inside M_N(\mathbb{C}) ⊗ (A, φ) for some A.

Theorem 13.25. More generally, if p, q ≥ 2 and λ = ((p−1)/(q−1))^{1/2}, then
\[
(L(F_p))_\lambda \simeq L(F_q).
\]
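The compression parameter composes consistently: compressing by λ and then by µ must match compressing by λµ, since (M_λ)_µ = M_{λµ}. A quick check (the helper name `compressed_parameter` is mine; it is the q of Theorem 13.25, solved as q = (p−1)/λ² + 1):

```python
# Consistency check on the compression formula (sketch; the helper
# name compressed_parameter is mine).  Solving Theorem 13.25's
# lam = ((p - 1)/(q - 1))^{1/2} for q gives q = (p - 1)/lam^2 + 1.

def compressed_parameter(p, lam):
    return (p - 1) / lam**2 + 1

# Theorem 13.24 is the special case lam = 1/N:
assert abs(compressed_parameter(2, 1 / 3) - (3**2 * (2 - 1) + 1)) < 1e-9

# Compressing by lam and then by mu matches compressing by lam * mu,
# as it must, since (M_lam)_mu = M_{lam mu}:
once = compressed_parameter(2.0, 0.5 * 0.25)
twice = compressed_parameter(compressed_parameter(2.0, 0.5), 0.25)
print(once, twice)  # both 65.0
```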

Using this idea, Dykema and Radulescu independently defined the interpolated free group factors
\[
L(F_t), \quad t \in (1, \infty), \qquad L(F_t) * L(F_s) \simeq L(F_{t+s}).
\]
The isomorphism problem is not solved. But it is known that either all L(F_t), 1 < t ≤ ∞, are isomorphic, or they are all non-isomorphic.

Theorem 13.26 (Dykema). For n, k ≥ 2,
\[
M_n(\mathbb{C}) * M_k(\mathbb{C}) \simeq L(F_s),
\]
where s = (1 − 1/n²) + (1 − 1/k²). More generally, there are formulas for free products of finite-dimensional and even hyperfinite algebras, even with amalgamation.
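A quick computation with this formula in exact arithmetic (a sketch; the helper name `fd_matrix` is mine):

```python
from fractions import Fraction

# Sketch (helper name mine): the free-dimension parameter contributed
# by M_n(C) in Dykema's formula is 1 - 1/n^2.

def fd_matrix(n):
    return 1 - Fraction(1, n * n)

# M_2(C) * M_3(C) is L(F_s) with s = 3/4 + 8/9 = 59/36.
s = fd_matrix(2) + fd_matrix(3)
print(s)  # 59/36
```

These parameters add under free products, consistent with L(F_t) * L(F_s) ≅ L(F_{t+s}).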

Much, much more to be said.


Bibliography

[Ans01] Michael Anshelevich, Partition-dependent stochastic measures and q-deformed cumulants,Doc. Math. 6 (2001), 343–384 (electronic). MR1871667 (2004k:46107)

[Ans04] Michael Anshelevich, Appell polynomials and their relatives, Int. Math. Res. Not. (2004), no. 65, 3469–3531. MR2101359 (2005k:33012)

[Bia98] Philippe Biane, Processes with free increments, Math. Z. 227 (1998), no. 1, 143–174.

[BN09] Serban T. Belinschi and Alexandru Nica, Free Brownian motion and evolution towards ⊞-infinite divisibility for k-tuples, Internat. J. Math. 20 (2009), no. 3, 309–338. MR2500073

[BS91] Marek Bożejko and Roland Speicher, An example of a generalized Brownian motion, Comm. Math. Phys. 137 (1991), no. 3, 519–531.

[Fla80] P. Flajolet, Combinatorial aspects of continued fractions, Discrete Math. 32 (1980), no. 2,125–161. MR592851 (82f:05002a)

[GSS92] Peter Glockner, Michael Schürmann, and Roland Speicher, Realization of free white noises, Arch. Math. (Basel) 58 (1992), no. 4, 407–416.

[Gui09] Alice Guionnet, Large random matrices: lectures on macroscopic asymptotics, Lecture Notes in Mathematics, vol. 1957, Springer-Verlag, Berlin, 2009, Lectures from the 36th Probability Summer School held in Saint-Flour, 2006.

[Haa97] Uffe Haagerup, On Voiculescu's R- and S-transforms for free non-commuting random variables, Free probability theory (Waterloo, ON, 1995), Amer. Math. Soc., Providence, RI, 1997, pp. 127–148.

[NS06] Alexandru Nica and Roland Speicher, Lectures on the combinatorics of free probability, London Mathematical Society Lecture Note Series, vol. 335, Cambridge University Press, Cambridge, 2006. MR2266879 (2008k:46198)

[NSS02] Alexandru Nica, Dimitri Shlyakhtenko, and Roland Speicher, Operator-valued distributions. I. Characterizations of freeness, Int. Math. Res. Not. (2002), no. 29, 1509–1538.


[PV10] Mihai Popa and Victor Vinnikov, Non-commutative functions and non-commutative free Lévy-Hinčin formula, arXiv:1007.1932v2 [math.OA], 2010.

[Spe98] Roland Speicher, Combinatorial theory of the free product with amalgamation and operator-valued free probability theory, Mem. Amer. Math. Soc. 132 (1998), no. 627, x+88.

[Sta97] Richard P. Stanley, Enumerative combinatorics. Vol. 1, Cambridge Studies in Advanced Mathematics, vol. 49, Cambridge University Press, Cambridge, 1997, With a foreword by Gian-Carlo Rota, Corrected reprint of the 1986 original. MR1442260 (98a:05001)

[VDN92] D. V. Voiculescu, K. J. Dykema, and A. Nica, Free random variables, CRM Monograph Series, vol. 1, American Mathematical Society, Providence, RI, 1992, A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups. MR1217253 (94c:46133)

[Vie84] Gérard Viennot, Une théorie combinatoire des polynômes orthogonaux généraux, Univ. Québec, Montréal, Qué. (unpublished notes), 1984.

[Vie85] Gérard Viennot, A combinatorial theory for general orthogonal polynomials with extensions and applications, Orthogonal polynomials and applications (Bar-le-Duc, 1984), Lecture Notes in Math., vol. 1171, Springer, Berlin, 1985, pp. 139–157. MR838979 (87g:42046)

[Voi02] Dan Voiculescu, Free entropy, Bull. London Math. Soc. 34 (2002), no. 3, 257–278.
