Mind your P and Q-symbols:
Why the Kazhdan-Lusztig basis of the Hecke algebra of type A is cellular

Geordie Williamson

An essay submitted in partial fulfillment of the requirements for the degree of
B.A. (Honours) Pure Mathematics

University of Sydney
Sidere mens eadem mutato

October 2003



To my mother


Acknowledgements

First and foremost I would like to thank my supervisor Dr. Gus Lehrer. I would not have progressed very far without his inspiration and support. I would also like to thank Anthony Henderson, Bob Howlett and Andrew Mathas for giving up their time for me during the year. On a more personal note, I would like to thank my family and friends (both on the fourth floor of Carslaw and at 8 Northumberland Ave.) for their support and friendship this year. Lastly, I would like to thank John Lascelles, Alan Stapledon and Steve Ward for proofreading sections of the essay.


Contents

Acknowledgements

Introduction

Preliminaries

1 The Symmetric Group
  1.1 The Length Function and Exchange Condition
  1.2 Generators and Relations for Symn
  1.3 The Bruhat Order
  1.4 Descent Sets
  1.5 Notes

2 Young Tableaux
  2.1 Diagrams, Shapes and Standard Tableaux
  2.2 The Row Bumping Algorithm
  2.3 The Robinson-Schensted Correspondence
  2.4 Partitions Revisited
  2.5 Growth Diagrams and the Symmetry Theorem
  2.6 Knuth Equivalence
  2.7 Tableau Descent Sets and Superstandard Tableaux
  2.8 The Dominance Order
  2.9 Notes

3 The Hecke Algebra and Kazhdan-Lusztig Basis
  3.1 The Hecke Algebra
  3.2 The Linear Representations of Hn(q)
  3.3 Inversion in Hn(q)
  3.4 An Involution and an anti-Involution
  3.5 The Kazhdan-Lusztig Basis
  3.6 Multiplication Formulae
  3.7 Notes

4 Cells
  4.1 Cell Orders and Cells
  4.2 Representations Associated to Cells
  4.3 Some Properties of the Kazhdan-Lusztig Polynomials
  4.4 New Multiplication Formulae
  4.5 New Definitions of the Cell Preorders
  4.6 Notes

5 The Kazhdan-Lusztig Basis as a Cellular Basis
  5.1 Cellular Algebras
  5.2 Elementary Knuth Transformations
  5.3 The Change of Label Map
  5.4 Left Cells and Q-Symbols
  5.5 Property A
  5.6 The Main Theorem
  5.7 Notes

A Appendix
  A.1 An Alternative Proof of Kazhdan and Lusztig's Basis Theorem
  A.2 Two-Sided Cells and the Dominance Order

Bibliography


Introduction

The Hecke algebras emerge when one attempts to decompose an induced representation of certain finite matrix groups. In particular, the Hecke algebra of the symmetric group is isomorphic to the algebra of intertwining operators of the representation of GL(n, q) obtained by inducing the trivial representation from the subgroup of upper triangular matrices. By Schur's Lemma, the restriction of any intertwining operator to an irreducible representation must be scalar. Hence the problem of decomposing the induced representation is equivalent to decomposing the associated Hecke algebra.

However, the representation theory of the Hecke algebra is difficult. In 1979 Kazhdan and Lusztig [17] introduced a special basis upon which a generating set for the Hecke algebra acts in an easily described manner. In order to construct the representations afforded by this new basis the notion of cells was introduced, and it was shown that each cell corresponds to a representation of the algebra. In general these representations are not irreducible. However, Kazhdan and Lusztig showed that, in the case of the Hecke algebra of the symmetric group, their construction does yield irreducible representations. In order to prove this Kazhdan and Lusztig introduced and studied the so-called 'star operations' on elements of the symmetric group.

It is implicit in Kazhdan and Lusztig's paper that these star operations (and the equivalence classes which they generate) have a deep combinatorial significance. This significance can be explained in terms of the Robinson-Schensted correspondence. In 1938 Robinson [25] showed that, to every permutation, one can associate a pair of 'standard tableaux' of the same shape. Then in 1961 Schensted [26] showed that this map is a bijection. In 1970 Knuth [20] studied the equivalence classes of the symmetric group corresponding to a fixed left or right tableau and showed that two permutations are equivalent if and only if they can be related by certain basic rearrangements known as 'Knuth transformations'. The amazing thing is that these Knuth transformations are precisely the star operations of Kazhdan and Lusztig. Hence, much of the algebraic theory developed by Kazhdan and Lusztig can be described combinatorially, using the language of standard tableaux.

Since a multiplication table for the Hecke algebra of a Weyl group was first written down by Iwahori [16] in 1964, many algebras similar to the Hecke algebra have been discovered. In almost all cases their representation theory is approached using techniques analogous to the original methods of Kazhdan and Lusztig. The similarities between these algebras, in particular the multiplicative properties of a distinguished basis, led Graham and Lehrer [12], in 1996, to define a 'cellular algebra'. This definition provides an axiomatic framework for a unified treatment of algebras which possess a 'cellular basis'.

In the paper in which Graham and Lehrer defined a cellular algebra, the motivating example was the Hecke algebra of the symmetric group. However, there is no one source that explains why the Hecke algebra of the symmetric group is a cellular algebra. The goal of this essay is to fill this gap. However, we are not entirely successful in this goal: we must appeal to a deep theorem of Kazhdan and Lusztig to show that a certain degenerate situation within a cell cannot occur (Kazhdan and Lusztig prove this using geometric machinery beyond the scope of this essay).

The structure of this essay is as follows. In Chapter 1 we gather together some fundamental concepts associated to the symmetric group, including the length function, the Bruhat order and descent sets.


We also discover a set of generators and relations for the symmetric group. This is intended to motivate the introduction of the Hecke algebra by generators and relations in Chapter 3.

Chapter 2 provides a self-contained introduction to the calculus of tableaux. Most of the results, including the Robinson-Schensted correspondence, the Symmetry Theorem and Knuth equivalence, are fundamental to the subject and are present in any text on standard tableaux (possibly with different proofs). Towards the end of the chapter we introduce tableau descent sets and superstandard tableaux. These are less well-known but are fundamental to our arguments in Chapter 5.

Chapters 3 and 4 are an introduction to Kazhdan-Lusztig theory in the special case of the Hecke algebra of the symmetric group. In Chapter 3 we define the Hecke algebra and then prove the existence and uniqueness of the Kazhdan-Lusztig basis. In Chapter 4 we introduce the cells associated to the basis of an algebra and show how they lead to representations. The rest of the chapter is then dedicated to deriving the original formulation of the cell preorders due to Kazhdan and Lusztig. Although we will not make it explicit, all of the material of Chapters 3 and 4 can be proved without alteration in the more general case of the Hecke algebra of a Coxeter group.

In Chapter 5 we define a cellular algebra and then combine the results of Chapters 2, 3 and 4 with the aim of showing that the Kazhdan-Lusztig basis is cellular. We first show how the elementary Knuth transformations can be realised algebraically and then work towards a complete description of the cells in terms of the left and right tableau of the Robinson-Schensted correspondence. We also show that the representations afforded by left cells within a given two-sided cell are isomorphic. However, as mentioned above, we cannot complete our proof that the Kazhdan-Lusztig basis is cellular entirely via elementary means: we must appeal to a theorem of Kazhdan and Lusztig to show that all left cells within a two-sided cell are incomparable in the left cell preorder.

The appendix contains two sections. The first gives an elegant alternative proof of the existence and uniqueness of the Kazhdan-Lusztig basis due to Lusztig [19]. In the second section we discuss the relationship between the dominance order and the two-sided cell order.


Preliminaries

All rings and algebras are assumed to be associative and have identity. All homomorphisms between rings or algebras should preserve the identity. A representation of an R-algebra H is a homomorphism φ : H → End M for some R-module M. Since homomorphisms must preserve the identity, all representations map the identity of H to the identity endomorphism of M. We will refer without comment to the equivalence between H-modules and representations.

A preorder is a reflexive and transitive relation. That is, if ≤ is a preorder on a set X then x ≤ x for all x ∈ X, and if x ≤ y ≤ z then x ≤ z. A preorder need not satisfy antisymmetry, and so it is possible to have x ≤ y and y ≤ x with y ≠ x. If ≤ is a preorder or partial order on a set X, a subset of the relations is said to generate the order ≤ if all relations are either consequences of reflexivity or follow via transitivity from the relations of the subset. For example, the relations {i ≤ i + 1 | i ∈ Z} generate the usual order on Z.


1 The Symmetric Group

This is an introductory chapter in which we gather results about the symmetric group. We prove the strong exchange condition and give a set of generators and relations for the symmetric group. We also introduce the Bruhat order and descent sets.

1.1 The Length Function and Exchange Condition

Let Symn be the symmetric group on {1, 2, . . . , n} acting on the left. Let id be the identity, S = {(i, i + 1) | 1 ≤ i < n} the set of simple transpositions and T = {(i, j) | 1 ≤ i < j ≤ n} the set of transpositions. Throughout, si will always denote the simple transposition (i, i + 1) interchanging i and i + 1, whereas r, u and v will be used to denote arbitrary simple transpositions. If w ∈ Symn, we can write w = w1w2 . . . wn where w(i) = wi for all i. This is the string form of w. We will use this notation more frequently than cycle notation.

Given w ∈ Symn, the length of w, denoted ℓ(w), is the number of pairs i < j such that w(i) > w(j). Thus ℓ(w) = 0 if and only if w is the identity. Our first result shows how the length of an element w ∈ Symn is affected by multiplication by a simple transposition:
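The inversion-count definition of ℓ(w) is immediate to compute; a minimal sketch in Python (permutations in string form as tuples; the helper name is ours, not the essay's):

```python
def length(w):
    """l(w): the number of inversions of w, i.e. pairs i < j with
    w(i) > w(j).  Permutations are given in string form as tuples,
    so w[i - 1] holds w(i)."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

print(length((1, 2, 3, 4)))   # 0: only the identity has length 0
print(length((4, 3, 2, 1)))   # 6: the longest element of Sym_4
```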

Lemma 1.1.1. If w ∈ Symn and sk = (k, k + 1) ∈ S then:
(i) ℓ(wsk) = ℓ(w) + 1 if w(k) < w(k + 1)
(ii) ℓ(wsk) = ℓ(w) − 1 if w(k) > w(k + 1)

Proof. Write N(w) = {(i, j) | i < j, w(i) > w(j)} so that ℓ(w) = |N(w)|. Now assume that w(k) < w(k + 1). If (p, q) ∈ N(w) then wsk(sk(p)) = w(p) > w(q) = wsk(sk(q)) and so (sk(p), sk(q)) ∈ N(wsk) so long as sk(p) < sk(q). But if sk(p) > sk(q) then we must have (p, q) = (k, k + 1), contradicting w(k) < w(k + 1). Hence, if (p, q) ∈ N(w) then (sk(p), sk(q)) ∈ N(wsk). Similarly, if (p, q) ∈ N(wsk) and (p, q) ≠ (k, k + 1) then (sk(p), sk(q)) ∈ N(w). By assumption, w(k) < w(k + 1) and so (k, k + 1) ∈ N(wsk). Hence |N(wsk)| = |N(w)| + 1. Thus ℓ(wsk) = ℓ(w) + 1 and hence (i).

For (ii) note that if w(k) > w(k + 1) then wsk(k) < wsk(k + 1) and we can apply (i) to wsk to conclude that ℓ(w) = ℓ((wsk)sk) = ℓ(wsk) + 1, and hence (ii).
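The lemma is easy to confirm by brute force for small n; a sketch (our own helper names) checking every element of Sym_5 against every simple transposition:

```python
from itertools import permutations

def length(w):
    """Inversion count of w (string form as a tuple)."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def right_mult_simple(w, k):
    """String form of w·s_k: right multiplication by s_k = (k, k+1)
    swaps the entries in positions k and k+1."""
    w = list(w)
    w[k - 1], w[k] = w[k], w[k - 1]
    return tuple(w)

# Exhaustive check of Lemma 1.1.1 over Sym_5:
n = 5
for w in permutations(range(1, n + 1)):
    for k in range(1, n):
        expected = length(w) + 1 if w[k - 1] < w[k] else length(w) - 1
        assert length(right_mult_simple(w, k)) == expected
```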

We will soon see that the simple transpositions generate Symn and so, given w ∈ Symn, we can write w = r1r2 . . . rm with ri ∈ S. This is called an expression for w. The expression is reduced if the number of simple transpositions used is minimal.

Proposition 1.1.2. The simple transpositions generate Symn. Moreover, if w ∈ Symn then an expression for w is reduced if and only if it contains ℓ(w) simple transpositions.

Proof. We first show, by induction on ℓ(w), that it is possible to express w using ℓ(w) simple transpositions. If ℓ(w) = 0 then w is the identity and the result is clear (using the convention that the empty word is the identity). Now, if ℓ(w) > 0 then there exist i, j such that i < j but w(i) > w(j). Hence there exists k such that w(k) > w(k + 1). Now from Lemma 1.1.1 above we have ℓ(wsk) = ℓ(w) − 1 and so we can apply induction to write wsk = r1r2 . . . rm for some ri ∈ S with m = ℓ(w) − 1. Hence w = r1r2 . . . rmsk is an expression for w using ℓ(w) simple transpositions.

Now let r1r2 . . . rm be a reduced expression for w. Then, from above, we know that m ≤ ℓ(w). However, by repeated application of Lemma 1.1.1 we have ℓ(w) = ℓ(r1r2 . . . rm) ≤ m and so ℓ(w) = m. Conversely, we have seen that there exists a reduced expression for w using ℓ(w) simple transpositions and hence any expression with ℓ(w) simple transpositions is reduced.
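The first half of the proof is effectively an algorithm (bubble sort in disguise): repeatedly strip off a descent. A sketch with hypothetical names:

```python
def reduced_word(w):
    """A reduced expression for w (string form), extracted as in the
    proof of Proposition 1.1.2: while w has a descent w(k) > w(k+1),
    replace w by w·s_k, which lowers the length by Lemma 1.1.1.
    Returns [k_1, ..., k_m] meaning w = s_{k_1} s_{k_2} ... s_{k_m}."""
    w, word = list(w), []
    while True:
        for k in range(len(w) - 1):
            if w[k] > w[k + 1]:
                break
        else:
            break                            # no descent: w is the identity
        w[k], w[k + 1] = w[k + 1], w[k]      # w := w·s_{k+1} (1-indexed)
        word.append(k + 1)
    word.reverse()                           # we reached id, so reverse
    return word

print(reduced_word((3, 1, 2)))       # [2, 1]: (3,1,2) = s_2 s_1, length 2
print(reduced_word((1, 2, 3, 4)))    # []: the identity has the empty word
```

The number of swaps performed is exactly the number of inversions, so the word produced has ℓ(w) letters and is reduced by the proposition.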

Using this new interpretation of the length function we have the following:

Proposition 1.1.3. If w ∈ Symn then ℓ(w) = ℓ(w−1). In particular, if r1r2 . . . rm is a reduced expression for w then rmrm−1 . . . r1 is a reduced expression for w−1.

Proof. Let r1r2 . . . rm be a reduced expression for w so that ℓ(w) = m. Then rmrm−1 . . . r1 is an expression for w−1 and so ℓ(w−1) ≤ ℓ(w). But ℓ(w) = ℓ((w−1)−1) ≤ ℓ(w−1) and so ℓ(w) = ℓ(w−1). Hence rmrm−1 . . . r1 is a reduced expression for w−1, since it is an expression for w−1 and contains ℓ(w−1) terms.

The following are some useful congruences:

Lemma 1.1.4. Let x, y ∈ Symn, let t ∈ T be a transposition and let r, ri ∈ S be simple transpositions. Then:
(i) ℓ(xr) ≡ ℓ(x) + 1 (mod 2)
(ii) ℓ(r1r2 . . . rm) ≡ m (mod 2)
(iii) ℓ(xy) ≡ ℓ(x) + ℓ(y) (mod 2)
(iv) ℓ(t) ≡ 1 (mod 2)

In particular ℓ(xt) ≠ ℓ(x) for all x ∈ Symn and t ∈ T.

Proof. We get (i) upon reduction of Lemma 1.1.1 modulo 2, and (ii) follows by repeated application of (i) and the fact that ℓ(id) = 0. If u1u2 . . . up is an expression for x and v1v2 . . . vq is an expression for y then, by (ii), ℓ(xy) = ℓ(u1 . . . upv1 . . . vq) ≡ p + q ≡ ℓ(x) + ℓ(y) (mod 2), and hence (iii). For (iv) note that if t = (i, j) ∈ T then t = sisi+1 . . . sj−2sj−1sj−2 . . . si and so we can express t using an odd number of simple transpositions. Hence ℓ(t) ≡ 1 (mod 2) by (ii). The last statement follows since ℓ(xt) ≡ ℓ(x) + 1 (mod 2) by (iii) and (iv).

We now come to a theorem of fundamental importance:

Theorem 1.1.5. (Strong Exchange Condition) Let w ∈ Symn and choose an expression r1r2 . . . rm for w. If t = (i, j) ∈ T is such that ℓ(wt) < ℓ(w) then there exists a k such that wt = r1r2 . . . r̂k . . . rm (where ˆ denotes omission).

Proof. We first show that ℓ(wt) < ℓ(w) implies that w(i) > w(j). As in Lemma 1.1.1 define N(w) = {(p, q) | p < q, w(p) > w(q)}. Now assume that w(i) < w(j) and define a function ϕ : N(w) → N × N by:

ϕ(p, q) = (t(p), t(q)) if t(p) < t(q)
ϕ(p, q) = (p, q) if t(p) > t(q)

We claim that Im(ϕ) ⊂ N(wt) and that ϕ is injective. It then follows that ℓ(w) = |N(w)| ≤ |N(wt)| = ℓ(wt).

We first verify that Im(ϕ) ⊂ N(wt). If t(p) < t(q) then wt(t(p)) = w(p) > w(q) = wt(t(q)) and hence (t(p), t(q)) ∈ N(wt) if t(p) < t(q). On the other hand, if t(p) > t(q) then, since t = (i, j) and w(i) < w(j) (so (i, j) ∉ N(w)), we must have either p = i or q = j, but not both. If p = i then we have wt(p) = w(j) > w(i) = w(p) > w(q) = wt(q) and hence (p, q) ∈ N(wt). Similarly, if q = j then wt(p) = w(p) > w(q) = w(j) > w(i) = wt(q) and so (p, q) ∈ N(wt).


To see that ϕ is injective we argue by contradiction. So assume that (p, q) ≠ (p′, q′) and that ϕ(p, q) = ϕ(p′, q′). We may assume without loss of generality that t(p) < t(q) and that t(p′) > t(q′). Since p′ < q′ and t(p′) > t(q′) we must have either p′ = i and i < q′ < j, or q′ = j and i < p′ < j. If p′ = i and i < q′ < j then, since t(p) = p′ = i, we have p = j. Hence j < q. But q′ < j and so we have a contradiction, since q = q′. On the other hand, if q′ = j and i < p′ < j then, since t(q) = q′ = j, we have q = i. Hence p < q = i and we obtain a similar contradiction.

The above arguments show that if w(i) < w(j) then ℓ(w) ≤ ℓ(wt). Hence w(i) > w(j), since ℓ(w) > ℓ(wt). Now write up = rprp+1 . . . rm. If um(i) > um(j) then rm(i) > rm(j), and so j = i + 1 and rm = t = (i, i + 1). The result then follows with k = m since rm² = 1. So assume that um(i) < um(j). Then, since u1(i) = w(i) > w(j) = u1(j), there exists a k such that uk(i) > uk(j) but uk+1(i) < uk+1(j). Hence uk+1(i) < uk+1(j) but rk(uk+1(i)) > rk(uk+1(j)), and so rk = (uk+1(i), uk+1(j)) and we have:

wt = r1r2 . . . rk−1(uk+1(i), uk+1(j))uk+1(i, j) = r1r2 . . . rk−1uk+1 = r1r2 . . . r̂k . . . rm

We also have a left-hand version of the exchange condition: if w = r1r2 . . . rm ∈ Symn and t ∈ T with ℓ(tw) < ℓ(w) then we have ℓ(w(w−1tw)) < ℓ(w), and so we can apply the exchange condition to w and w−1tw (since w−1tw ∈ T) to get that tw = r1r2 . . . r̂k . . . rm for some k.
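For small n the strong exchange condition is a finite statement and can be verified exhaustively; a brute-force sketch over Sym_4 (all helper names are ours):

```python
from itertools import permutations

def length(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def compose(u, v):
    """String form of uv, where (uv)(i) = u(v(i))."""
    return tuple(u[v[i] - 1] for i in range(len(u)))

def transposition(i, j, n):
    """String form of the transposition (i, j) in Sym_n."""
    w = list(range(1, n + 1))
    w[i - 1], w[j - 1] = j, i
    return tuple(w)

def reduced_word(w):
    """Greedy reduced expression (indices of simple transpositions)."""
    w, word = list(w), []
    while True:
        for k in range(len(w) - 1):
            if w[k] > w[k + 1]:
                break
        else:
            break
        w[k], w[k + 1] = w[k + 1], w[k]
        word.append(k + 1)
    word.reverse()
    return word

def evaluate(word, n):
    """Multiply out a word in the simple transpositions, left to right."""
    u = tuple(range(1, n + 1))
    for a in word:
        u = compose(u, transposition(a, a + 1, n))
    return u

n = 4
for w in permutations(range(1, n + 1)):
    word = reduced_word(w)
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            t = transposition(i, j, n)
            wt = compose(w, t)
            if length(wt) >= length(w):
                continue
            # Strong exchange: some single letter can be omitted.
            assert any(evaluate(word[:k] + word[k + 1:], n) == wt
                       for k in range(len(word)))
```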

The exchange condition has two important corollaries:

Corollary 1.1.6. (The Deletion Condition) Let w ∈ Symn and let r1r2 . . . rm be an expression for w with ℓ(w) < m. Then there exist p and q such that w = r1 . . . r̂p . . . r̂q . . . rm. Moreover, a reduced expression for w can be obtained from r1r2 . . . rm by deleting an even number of terms.

Proof. Since ℓ(w) < m we have ℓ(r1r2 . . . rq−1) > ℓ(r1r2 . . . rq) for some q. Choose such a q. Then the exchange condition applies to the pair r1r2 . . . rq−1 and rq, so there exists a p such that r1r2 . . . rq = r1r2 . . . r̂p . . . rq−1. Hence w = r1 . . . r̂p . . . r̂q . . . rm. If the expression is unreduced then ℓ(w) is less than the number of terms, and the number of terms is congruent to ℓ(w) modulo 2 by Lemma 1.1.4(ii). Hence we can keep deleting two terms at a time until we have ℓ(w) terms. By Proposition 1.1.2 such an expression is reduced.

Corollary 1.1.7. If w ∈ Symn and r ∈ S then ℓ(wr) < ℓ(w) if and only if w has a reduced expression ending in r.

Proof. Let r1r2 . . . rm be a reduced expression for w. If ℓ(wr) < ℓ(w) then there exists a k such that wr = r1r2 . . . r̂k . . . rm by the exchange condition. Hence w = r1r2 . . . r̂k . . . rmr. Since this expression has m = ℓ(w) terms it is reduced by Proposition 1.1.2. The other implication is clear.

1.2 Generators and Relations for Symn

The aim of this section is to write down a set of generators and relations for the symmetric group. We have already seen that the simple transpositions generate the symmetric group. It is easily verified that the following relations hold:

si² = 1

sisi+1si = si+1sisi+1

sisj = sjsi if |i − j| ≥ 2
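These relations can be checked mechanically by composing permutations in string form; a quick sketch (helper names ours):

```python
def compose(u, v):
    """String form of uv in Sym_n, where (uv)(i) = u(v(i))."""
    return tuple(u[v[i] - 1] for i in range(len(u)))

def s(k, n):
    """String form of the simple transposition s_k = (k, k+1)."""
    w = list(range(1, n + 1))
    w[k - 1], w[k] = w[k], w[k - 1]
    return tuple(w)

n = 6
identity = tuple(range(1, n + 1))
for i in range(1, n):
    assert compose(s(i, n), s(i, n)) == identity            # s_i^2 = 1
    for j in range(1, n):
        if abs(i - j) == 1:                                 # braid relation
            assert compose(compose(s(i, n), s(j, n)), s(i, n)) == \
                   compose(compose(s(j, n), s(i, n)), s(j, n))
        elif abs(i - j) >= 2:                               # commutation
            assert compose(s(i, n), s(j, n)) == compose(s(j, n), s(i, n))
```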


The last two relations are called the braid relations. It turns out that these relations determine all others in the symmetric group; however, this will take a little while to prove.

We first prove what is in fact a slightly stronger 'universal property' of the symmetric group. Our proof follows Dyer [4] closely.

Proposition 1.2.1. Suppose ϕ is a function from the set of simple transpositions S ⊂ Symn to a monoid M such that:

ϕ(si)ϕ(si+1)ϕ(si) = ϕ(si+1)ϕ(si)ϕ(si+1)
ϕ(si)ϕ(sj) = ϕ(sj)ϕ(si) if |i − j| ≥ 2

Then there is a unique map ϕ̄ : Symn → M extending ϕ such that:

ϕ̄(w) = ϕ(r1)ϕ(r2) . . . ϕ(rm)

whenever r1r2 . . . rm is a reduced expression for w.

The key to the proof is the following lemma:

Lemma 1.2.2. Suppose u1u2 . . . um and v1v2 . . . vm are reduced expressions for w ∈ Symn such that ϕ(u1)ϕ(u2) . . . ϕ(um) ≠ ϕ(v1)ϕ(v2) . . . ϕ(vm). Suppose further that, for any two reduced expressions r1r2 . . . rk = t1t2 . . . tk with k < m, we have ϕ(r1)ϕ(r2) . . . ϕ(rk) = ϕ(t1)ϕ(t2) . . . ϕ(tk). Then u1 ≠ v1 and v1u1 . . . um−1 = u1u2 . . . um but ϕ(v1)ϕ(u1) . . . ϕ(um−1) ≠ ϕ(u1)ϕ(u2) . . . ϕ(um).

Proof. Since u1u2 . . . um = v1v2 . . . vm we have:

v1u1 . . . um = v2 . . . vm (1.2.1)

Now the right-hand side of (1.2.1) has m − 1 terms and hence ℓ(v1u1 . . . um) ≤ m − 1 < m = ℓ(u1u2 . . . um). Hence the exchange condition applies to v1 and u1u2 . . . um, and there exists a j such that v1u1 . . . um = u1 . . . ûj . . . um. Some cancellation and rearranging yields:

v1u1 . . . uj−1 = u1u2 . . . uj (1.2.2)

Now, assume for contradiction that:

ϕ(v1)ϕ(u1) . . . ϕ(uj−1) = ϕ(u1) . . . ϕ(uj) (1.2.3)

Then, by (1.2.2), v1v2 . . . vm = u1u2 . . . um = v1u1 . . . ûj . . . um (again ˆ denotes omission) and so:

v2v3 . . . vm = u1 . . . ûj . . . um (1.2.4)

Now both sides of (1.2.4) are reduced and have length less than m and so, by the conditions of the lemma:

ϕ(v2)ϕ(v3) . . . ϕ(vm) = ϕ(u1) . . . ϕ(ûj) . . . ϕ(um) (1.2.5)

But left multiplying by ϕ(v1) and using (1.2.3) yields:

ϕ(v1)ϕ(v2) . . . ϕ(vm) = ϕ(u1)ϕ(u2) . . . ϕ(um)

This contradicts our assumption that ϕ(v1)ϕ(v2) . . . ϕ(vm) ≠ ϕ(u1)ϕ(u2) . . . ϕ(um). Hence:

ϕ(v1)ϕ(u1) . . . ϕ(uj−1) ≠ ϕ(u1)ϕ(u2) . . . ϕ(uj) (1.2.6)

Now if j < m then both sides of v1u1 . . . uj−1 = u1 . . . uj have length less than m, and so the conditions of the lemma force the opposite of (1.2.6). Hence j = m and the result follows by (1.2.2) and (1.2.6). Note that u1 ≠ v1 since v1u1 . . . um−1 = u1 . . . um and u1 . . . um is reduced.


We can now give the proof:

Proof of Proposition 1.2.1. To show that ϕ̄ exists and is unique it is enough to show that

ϕ(v1)ϕ(v2) . . . ϕ(vm) = ϕ(u1)ϕ(u2) . . . ϕ(um)

whenever u1u2 . . . um and v1v2 . . . vm are two reduced expressions for some w ∈ Symn. We prove this by contradiction. So assume that there exists some w ∈ Symn with two reduced expressions u1u2 . . . um = v1v2 . . . vm but ϕ(u1)ϕ(u2) . . . ϕ(um) ≠ ϕ(v1)ϕ(v2) . . . ϕ(vm). Moreover, assume that w has minimal length amongst all such elements (so that if ℓ(v) < ℓ(w) then ϕ̄(v) is well defined). Since w has minimal length the conditions of the above lemma are satisfied and so we have v1 ≠ u1 and

v1u1u2 . . . um−1 = u1u2 . . . um but ϕ(v1)ϕ(u1) . . . ϕ(um−1) ≠ ϕ(u1)ϕ(u2) . . . ϕ(um) (1.2.7)

Now the lemma applies to (1.2.7) to yield:

u1v1u1u2 . . . um−2 = v1u1 . . . um−1 but ϕ(u1)ϕ(v1)ϕ(u1)ϕ(u2) . . . ϕ(um−2) ≠ ϕ(v1)ϕ(u1) . . . ϕ(um−1)

Continuing in this fashion yields:

u1v1u1 . . . (m terms) = v1u1v1 . . . (m terms) (1.2.8)

But:

ϕ(u1)ϕ(v1)ϕ(u1) . . . (m factors) ≠ ϕ(v1)ϕ(u1)ϕ(v1) . . . (m factors) (1.2.9)

It is easily verified that if u1 = (i, i + 1) and v1 = (j, j + 1) then u1v1 has order 3 if |i − j| = 1 and order 2 if |i − j| ≥ 2. Hence if |i − j| = 1 then by (1.2.8) m must be a multiple of 3, so that (1.2.9) contradicts the first relation in the proposition. On the other hand, if |i − j| ≥ 2 then m must be a multiple of 2, which contradicts the second relation. Hence if u1u2 . . . um and v1v2 . . . vm are two reduced expressions for w then

ϕ(u1)ϕ(u2) . . . ϕ(um) = ϕ(v1)ϕ(v2) . . . ϕ(vm)

and so we can define ϕ̄(w) = ϕ(u1)ϕ(u2) . . . ϕ(um) and the result follows.

This result has the following important corollary:

Corollary 1.2.3. If u1u2 . . . um and v1v2 . . . vm are two reduced expressions for some element w ∈ Symn then it is possible to obtain one from the other using only the braid relations.

Proof. Let M be the monoid on generators mi with 1 ≤ i < n subject to the relations:

mimi+1mi = mi+1mimi+1 (1.2.10a)
mimj = mjmi if |i − j| ≥ 2 (1.2.10b)

Now define ϕ : S → M by ϕ(si) = mi. By construction ϕ satisfies the hypotheses of the proposition and hence ϕ̄ : Symn → M exists. Now let u1u2 . . . um and v1v2 . . . vm be two reduced expressions for w ∈ Symn. Then:

ϕ(u1)ϕ(u2) . . . ϕ(um) = ϕ̄(w) = ϕ(v1)ϕ(v2) . . . ϕ(vm)


Hence it is possible to obtain ϕ(v1)ϕ(v2) . . . ϕ(vm) from ϕ(u1)ϕ(u2) . . . ϕ(um) using only the relations in (1.2.10). However, the relations in (1.2.10) are direct copies of the braid relations in Symn, and so ϕ(u1)ϕ(u2) . . . ϕ(um) = ϕ(v1)ϕ(v2) . . . ϕ(vm) if and only if it is possible to obtain v1v2 . . . vm from u1u2 . . . um in Symn using only the braid relations.

As promised, we now have:

Theorem 1.2.4. The symmetric group is generated by the simple transpositions subject to the relations:

si² = 1 (1.2.11a)
sisi+1si = si+1sisi+1 (1.2.11b)
sisj = sjsi if |i − j| ≥ 2 (1.2.11c)

Proof. We argue that any relation of the form r1r2 . . . rm = 1 with ri ∈ S is a consequence of the given relations. We induct on m, the case m = 0 being obvious. So assume that r1r2 . . . rm = 1 holds in Symn. Since ℓ(1) = 0, we cannot have ℓ(r1r2 . . . rk) < ℓ(r1r2 . . . rkrk+1) for all k. Fix the smallest k with ℓ(r1r2 . . . rk) > ℓ(r1r2 . . . rk+1), so that r1r2 . . . rk is reduced. Letting x = r1r2 . . . rk, we can apply Corollary 1.1.7 to conclude that x has a reduced expression u1u2 . . . uk with uk = rk+1. Now r1r2 . . . rk and u1u2 . . . uk are both reduced expressions for x and so, by the above corollary, we can obtain u1u2 . . . uk from r1r2 . . . rk using only the braid relations. Hence we can use the relations to write:

r1r2 . . . rm = r1r2 . . . rkrk+1 . . . rm = u1u2 . . . ukrk+1 . . . rm = u1u2 . . . uk−1rk+2 . . . rm

The last step follows since uk = rk+1 and so ukrk+1 = uk² = 1 by relation (1.2.11a). Hence, using the relations, we have reduced the number of terms in the relation by 2, and we can conclude that r1r2 . . . rm = 1 is a consequence of the given relations by induction on m.

1.3 The Bruhat Order

In this section we introduce a useful partial order on Symn. If v, w ∈ Symn with ℓ(v) < ℓ(w), write v ⋖ w if there exists t ∈ T such that v = wt. We then write u < w if there exists a chain u = v1 ⋖ v2 ⋖ · · · ⋖ vm = w, and u ≤ w if u < w or u = w. The resulting relation is clearly reflexive, anti-symmetric and transitive and hence is a partial order. It is called the Bruhat order.
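For small n the Bruhat order can be computed verbatim from this definition; a sketch (hypothetical helper names) that builds the relations v = wt with ℓ(v) < ℓ(w) and takes the transitive closure:

```python
from itertools import permutations

def length(w):
    """Inversion count, as in Section 1.1."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def bruhat_order(n):
    """All pairs (u, w) with u <= w in the Bruhat order on Sym_n."""
    elems = list(permutations(range(1, n + 1)))
    trans = []
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            t = list(range(1, n + 1))
            t[i - 1], t[j - 1] = j, i
            trans.append(tuple(t))
    le = {(w, w) for w in elems}
    for w in elems:
        for t in trans:
            v = tuple(w[t[i] - 1] for i in range(n))   # string form of w·t
            if length(v) < length(w):
                le.add((v, w))
    changed = True                                      # transitive closure
    while changed:
        changed = False
        for (a, b) in list(le):
            for (c, d) in list(le):
                if b == c and (a, d) not in le:
                    le.add((a, d))
                    changed = True
    return le

le = bruhat_order(3)
for w in permutations((1, 2, 3)):
    assert ((1, 2, 3), w) in le        # the identity is the unique minimum
    assert (w, (3, 2, 1)) in le        # the longest element is the maximum
```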

It is an immediate consequence of the definition that if r ∈ S then either wr < w or wr > w. The following is another useful property of the Bruhat order:

Lemma 1.3.1. If v, w ∈ Symn with v ≤ w and r ∈ S, then either vr ≤ w or vr ≤ wr (or both).

Proof. First assume that v ⋖ w with ℓ(wr) > ℓ(vr). Since v ⋖ w there exists t ∈ T with v = wt. Hence vr = wr(rtr) with ℓ(vr) < ℓ(wr) (by assumption) and rtr ∈ T. Hence vr ⋖ wr.

The other possibility if v ⋖ w is ℓ(wr) ≤ ℓ(vr). Since v = wt for some t ∈ T, ℓ(w) − ℓ(v) ≡ 1 (mod 2) (by Lemma 1.1.4). Since ℓ(wr) = ℓ(w) ± 1 and ℓ(vr) = ℓ(v) ± 1, the only way we can have ℓ(wr) ≤ ℓ(vr) is if ℓ(w) = ℓ(v) + 1, ℓ(wr) = ℓ(w) − 1 and ℓ(vr) = ℓ(v) + 1. Now since ℓ(wr) < ℓ(w), Corollary 1.1.7 applies and we can conclude that w has a reduced expression si1si2 . . . sim such that sim = r. We have v = wt with ℓ(v) < ℓ(w) and so the exchange condition applies and there exists a k such that v = wt = si1si2 . . . ŝik . . . sim. But ℓ(vr) > ℓ(v) and sim = r, and so we must have k = m. Hence v = si1si2 . . . sim−1. But then vr = w and so vr ≤ w.

Now if v ≤ w then either v = w or there exists a chain v = u1 ⋖ u2 ⋖ · · · ⋖ un = w. If v = w then vr = wr and so the result is obvious. On the other hand, if there exists such a chain then from above we have either u1r ≤ u2 or u1r ≤ u2r. If u1r ≤ u2 then vr = u1r ≤ u2 ⋖ · · · ⋖ un = w and so vr ≤ w. If u1r ≤ u2r then either u2r ≤ u3 or u2r ≤ u3r. Continuing in this manner we obtain a new chain vr ≤ u2r ≤ . . . which ends in either w or wr. Hence vr ≤ w or vr ≤ wr.

The definition of the Bruhat order given above makes it immediately clear that the resulting relation is a partial order. However, it is possible to give a more practical criterion. If r1r2 . . . rm is a reduced expression for some w ∈ Symn, a subexpression of r1r2 . . . rm is an expression ri1ri2 . . . ris where i1, i2, . . . , is is a subsequence of 1, 2, . . . , m satisfying i1 < i2 < · · · < is. In other words, ri1ri2 . . . ris is a subexpression of r1r2 . . . rm if it is possible to obtain it from r1r2 . . . rm by 'crossing out' various terms. The following proposition shows that the Bruhat order can be entirely characterised in terms of subexpressions:

Proposition 1.3.2. Let u, v ∈ Symn. Then u ≤ v in the Bruhat order if and only if an expressionfor u can be obtained as a subexpression of some reduced expression for v.

Proof. If u = v the result is clear. So assume that u < v. Then there exists a sequence u = w1 ⋖ w2 ⋖ · · · ⋖ wm = v with ℓ(wi) < ℓ(wi+1) for all i and wi = wi+1ti for some ti ∈ T. Now let si1si2 . . . sim be a reduced expression for v and tm ∈ T such that wm−1 = vtm. Then the exchange condition applies, and so there exists a k such that wm−1 = si1si2 . . . ŝik . . . sim. We can repeat this argument with wm−1 in place of wm to conclude that there exists a k′ such that wm−2 = si1si2 . . . ŝik . . . ŝik′ . . . sim (or si1si2 . . . ŝik′ . . . ŝik . . . sim). Continuing in this fashion we obtain an expression for u as a subexpression of si1si2 . . . sim.

For the other implication assume that uj1uj2 . . . ujp is a subexpression of a reduced expression u1u2 . . . um (so that uk = sik for some sequence ik). We use induction on m to show that uj1uj2 . . . ujp ≤ u1u2 . . . um (the cases m = 0 and m = 1 being obvious). If jp < m then by induction uj1uj2 . . . ujp ≤ u1u2 . . . um−1. Now ℓ(u1u2 . . . um−1) < ℓ(u1u2 . . . um) since u1u2 . . . um is reduced, and u1u2 . . . um−1 = (u1u2 . . . um)um, and so u1u2 . . . um−1 ≤ u1u2 . . . um. Hence uj1uj2 . . . ujp ≤ u1u2 . . . um. If jp = m then uj1uj2 . . . ujp−1 ≤ u1u2 . . . um−1 by induction, and by Lemma 1.3.1 there are two possibilities: uj1uj2 . . . ujp ≤ u1u2 . . . um−1 ≤ u1u2 . . . um or uj1uj2 . . . ujp ≤ u1u2 . . . um. In either case we have uj1uj2 . . . ujp ≤ u1u2 . . . um.
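The proposition gives a finite test for u ≤ v. The sketch below fixes one (greedy) reduced expression for v and checks all of its subexpressions; that a single fixed reduced word already suffices (rather than searching over all reduced expressions for v) is a standard strengthening of the proposition, which we assume here. Helper names are ours:

```python
from itertools import combinations

def reduced_word(w):
    """Greedy reduced expression for w (string form): repeatedly pick a
    descent w(k) > w(k+1) and multiply by s_k, as in Proposition 1.1.2."""
    w, word = list(w), []
    while True:
        for k in range(len(w) - 1):
            if w[k] > w[k + 1]:
                break
        else:
            break
        w[k], w[k + 1] = w[k + 1], w[k]
        word.append(k + 1)
    word.reverse()
    return word

def rmul_s(w, a):
    """String form of w·s_a: swap positions a and a+1."""
    w = list(w)
    w[a - 1], w[a] = w[a], w[a - 1]
    return tuple(w)

def below(u, v):
    """Test u <= v by checking every subexpression of a reduced word for v."""
    word, n = reduced_word(v), len(v)
    for r in range(len(word) + 1):
        for picks in combinations(range(len(word)), r):
            w = tuple(range(1, n + 1))
            for k in picks:
                w = rmul_s(w, word[k])
            if w == u:
                return True
    return False

print(below((2, 1, 4, 3), (4, 3, 2, 1)))   # True:  s_1 s_3 <= w_0 in Sym_4
print(below((3, 1, 2), (2, 3, 1)))         # False: s_2 s_1, s_1 s_2 incomparable
```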

This alternative description yields a useful corollary:

Corollary 1.3.3. Let u, v ∈ Symn. Then u ≤ v if and only if u−1 ≤ v−1.

Proof. If u ≤ v then, from above, there exists a reduced expression r1r2 . . . rm for v with u = ri1ri2 . . . ris occurring as a subexpression. But u−1 = risris−1 . . . ri1 and rmrm−1 . . . r1 is a reduced expression for v−1 by Proposition 1.1.3, and so u−1 occurs as a subexpression of a reduced expression for v−1. Hence u−1 ≤ v−1.

We finish this section with a useful lemma:

Lemma 1.3.4. Suppose that x, y ∈ Symn and that there exists r ∈ S such that xr < x and yr < y. Then x ≤ y if and only if xr ≤ yr.


Proof. If xr ≤ yr then, by Lemma 1.3.1, either x ≤ yr < y or x ≤ y; in either case x ≤ y. On the other hand, if x ≤ y we can fix a reduced expression r1r2 . . . rm for y such that rm = r (since yr < y). Now, since x ≤ y we can use the above characterisation of the Bruhat order to conclude that there exist i1 < i2 < · · · < ik such that x = ri1ri2 . . . rik . If ik = m then xr = ri1ri2 . . . rik−1 is a subexpression of yr = r1r2 . . . rm−1 and so xr ≤ yr. On the other hand, if ik ≠ m then ri1ri2 . . . rik = x is a subexpression of r1r2 . . . rm−1 = yr and so xr < x ≤ yr, whence xr ≤ yr.

1.4 Descent Sets

We finish this chapter with a discussion of the left and right descent sets of a permutation. Though straightforward, this is a concept that will emerge repeatedly in what follows.

Given a permutation w ∈ Symn we define the left descent set and right descent set of w to be the sets L(w) = {r ∈ S | rw < w} and R(w) = {r ∈ S | wr < w} respectively. For example, in Sym3 we have L(id) = R(id) = ∅, L(s1s2) = {s1}, R(s1s2) = {s2} and L(s1s2s1) = R(s1s2s1) = {s1, s2}. We can give an alternative characterisation of the right descent set in terms of the string form:

Lemma 1.4.1. Let w = w1w2 . . . wn ∈ Symn. Then si ∈ R(w) if and only if wi > wi+1.

Proof. This is immediate from Lemma 1.1.1: we have that ℓ(wsi) < ℓ(w) (and so wsi < w) if and only if wi = w(i) > w(i + 1) = wi+1.
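In string form this test is immediate to program. A small sketch (function names ours), with the left descent set computed from the right descent set of the inverse, using the identity R(w) = L(w−1) established just below (Lemma 1.4.2):

```python
def right_descents(w):
    """R(w) for w in string form: by Lemma 1.4.1, s_i is a right descent
    exactly when w_i > w_{i+1}."""
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def left_descents(w):
    """L(w), computed as R(w^{-1})."""
    inverse = [0] * len(w)
    for position, value in enumerate(w, start=1):
        inverse[value - 1] = position      # string form of w^{-1}
    return right_descents(inverse)
```

For example, with w = s1s2 = 231 in Sym3 this gives R(w) = {2} and L(w) = {1}, matching the example above.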

The following gives a relation between the left and right descent sets:

Lemma 1.4.2. Let w ∈ Symn. Then R(w) = L(w−1).

Proof. Note that if wr ≤ w then wr < w since wr ≠ w. Hence the statement wr ≤ w is equivalent to wr < w. Now let r ∈ S. Then, by Corollary 1.3.3, wr ≤ w if and only if (wr)−1 = rw−1 ≤ w−1. Hence r ∈ R(w) if and only if r ∈ L(w−1).

The last lemma of this section is useful in later arguments:

Lemma 1.4.3. Suppose that r ∈ S and w ∈ Symn.
(i) If rw > w then R(rw) ⊃ R(w).
(ii) If wr > w then L(wr) ⊃ L(w).

Proof. For (i) note that if si ∈ R(w) then w has a reduced expression ending in si by Corollary 1.1.7. Since rw > w, left multiplying such an expression by r yields a reduced expression for rw. Hence si ∈ R(rw). For (ii), note that rw−1 > w−1 by Corollary 1.3.3 and hence L(w) = R(w−1) ⊂ R(rw−1) = L(wr) (using Lemma 1.4.2 above).

1.5 Notes

1. All the proofs of the first section are, as far as we are aware, original. They are motivated by interpreting the symmetric group as a group of diagrams on 2n dots with group multiplication given by concatenation. For example, under this interpretation, our original definition of the length of a permutation w counts the number of crossings in a diagram of w.


2. The “universal property” of the symmetric group proved in Section 1.2 is stated in the more general case of Coxeter groups in Dyer [4]. We follow Dyer’s argument closely.

3. The definition of the Bruhat order, as well as the proof of the equivalence to the more familiar subexpression condition, is adapted from an elegant argument of Humphreys [15].

4. Almost all the results in this section, including the strong exchange condition, the universal property and the definition of the Bruhat order, occur in the much more general setting of Coxeter groups. See, for example, Bourbaki [3] or Humphreys [15].


2 Young Tableaux

In this section we introduce Young tableaux, which offer a means to discuss the combinatorics of the symmetric group. The central result of the section is the Robinson-Schensted correspondence, which gives a bijection between elements of the symmetric group and pairs of standard tableaux of the same shape. We also prove the symmetry theorem and introduce Knuth equivalence, tableau descent sets and the dominance order on partitions.

2.1 Diagrams, Shapes and Standard Tableaux

A partition is a weakly decreasing sequence λ = (λ1, λ2, . . . ) with finitely many non-zero terms. The λi's are the parts of λ. The weight of a partition, denoted |λ|, is the sum of the parts. If |λ| = n we say that λ is a partition of n. The length of a partition λ, denoted l(λ), is the number of non-zero parts. We will often denote a partition λ by (λ1, λ2, . . . , λm) where m = l(λ) and λi = 0 for i > l(λ). Given a partition λ of n there is an associated Ferrers diagram (or simply diagram) of λ consisting of l(λ) rows of boxes in which the ith row has λi boxes. For example, λ = (5, 4, 2) is a partition of 11 and its associated diagram is:

If λ is a partition of n, a tableau of shape λ is a filling of the diagram of λ with positive integers without repetition such that the entries increase from left to right along rows and from top to bottom down columns. If T is a tableau we write Shape(T ) for the underlying partition. The tableau is standard if the entries are precisely {1, 2, . . . , n}. For example, the following are tableaux, with T standard:

S =  1 2 7 9        T =  1 3 4 5
     3 8                 2 6
     4                   7
     6

We have Shape(S) = (4, 2, 1, 1) and Shape(T ) = (4, 2, 1). The size of a tableau is the number of boxes. Entries of a tableau are indexed by their row and column numbers (with rows indexed from top to bottom and columns from left to right). For example, with T as above we have T13 = 4.

2.2 The Row Bumping Algorithm

Given a tableau T and a positive integer x not in T the Row Bumping Algorithm provides a means of inserting x into T to produce a new tableau denoted T ← x. The algorithm is as follows: If x is greater than all the elements of the first row then a new box is created at the right end of the first row and x is entered there. Otherwise, x “bumps” the first entry greater than x into the second row. That is, the smallest r is located such that T1r > x, and then the entry x2 = T1r is replaced by x. The same procedure is repeated in the second row with x2 in place of x. This process is repeated row by row until a new box is created.

For example, with S as above, inserting 5 into S bumps the 7 from the first row, which in turn bumps the 8 into the third row, in which a new box is created:

1 2 7 9              1 2 5 9
3 8        ← 5  =    3 7
4                    4 8
6                    6

A row insertion T ← x determines a bumping sequence, bumping route and new box: the bumping sequence is the sequence x = x1 < x2 < · · · < xp of entries which are bumped from row to row; the bumping route is the set of locations in which bumping takes place (or alternatively the location of elements of the bumping sequence in T ← x); and the new box is the location of the box created by inserting x. In the example above the bumping sequence is 5, 7, 8, the bumping route is {(1, 3), (2, 2), (3, 2)} and the new box is (3, 2).
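These definitions can be checked mechanically. Here is a sketch of the row bumping algorithm (the function name `insert` is ours) that also reports the bumping sequence and the new box; running it on the tableau S above reproduces the example:

```python
def insert(tableau, x):
    """Row-bump x into a tableau (a list of rows); returns the new tableau,
    the bumping sequence, and the (row, col) of the new box (1-indexed)."""
    rows = [list(r) for r in tableau]
    sequence = [x]
    for i, row in enumerate(rows):
        # find the leftmost entry strictly greater than x
        for j, entry in enumerate(row):
            if entry > x:
                row[j], x = x, entry     # x bumps `entry` into the next row
                sequence.append(x)
                break
        else:
            row.append(x)                # x is largest: a new box ends this row
            return rows, sequence, (i + 1, len(row))
    rows.append([x])                     # bumped off the bottom: new last row
    return rows, sequence, (len(rows), 1)
```

Inserting 5 into S gives bumping sequence 5, 7, 8 and new box (3, 2), exactly as described.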

If (i, j) and (k, l) are locations in a tableau, (i, j) is strictly left of (k, l) if j < l and weakly left if j ≤ l. Similarly (i, j) is strictly below (k, l) if i > k and weakly below if i ≥ k (recall that rows are numbered from the top). If x and y are entries in a tableau we say that x is left (or below) y if this is true of their locations. The following is a technical but important lemma:

Lemma 2.2.1. (Row Bumping Lemma) The bumping route of T ← x moves to the left. That is, if xi and xi+1 are elements in the bumping sequence then xi+1 is weakly left of xi. Furthermore, if x < y then the bumping route of T ← x is strictly left of the bumping route of (T ← x) ← y, and the bumping route of (T ← y) ← x is weakly left of the bumping route of T ← y. Hence the new box of T ← x is strictly left and weakly below the new box of (T ← x) ← y, and the new box of (T ← y) ← x is strictly below and weakly left of the new box of T ← y.

Proof. If the new box of T ← x is in the first row then there is nothing to prove. So assume that the new box of T ← x is not in the first row. Let (i, j) be an element of the bumping route of T ← x such that (i, j) is not equal to the new box of T ← x. Let y be the element bumped from row i. Then, if Ti+1,j is an entry of the tableau, y < Ti+1,j (since Ti+1,j is immediately below y in T ) and so y bumps an element weakly left of (i + 1, j). On the other hand, if Ti+1,j is not an entry of the tableau then y either creates a new box left of (i, j) or bumps an entry left of (i, j) since, in this case, all locations of the (i + 1)st row are left of (i, j). Hence y bumps an element or creates a new box weakly left of (i, j). Repeating this argument row by row shows that the bumping route moves to the left.

Let x = x1, x2, . . . , xm be the bumping sequence of T ← x, let y = y1, y2, . . . , yn be the bumping sequence of (T ← x) ← y and let p be the minimum of m and n. Since x < y the entry bumped by x must be less than the entry bumped by y. Hence x2 < y2. Repeating this row by row shows that xi < yi for all i ≤ p. Since xi and yi are in the same row for all i ≤ p, the bumping route of T ← x is strictly left of the bumping route of (T ← x) ← y. This implies that the new box of T ← x is weakly below and strictly left of the new box of (T ← x) ← y.

Now let y1, y2, . . . , yk be the bumping sequence of T ← y, let x1, . . . , xj be the bumping sequence of (T ← y) ← x and let p be as above. Again, assume p > 1. Now x either bumps an entry to the left of y (in which case x2 < y1 < y2) or x bumps y itself (in which case x2 = y1 < y2). We can repeat this argument from row to row to conclude that xi < yi for all i ≤ p. Since we might have xi+1 = yi for some i ≤ p we can only conclude that the bumping route of (T ← y) ← x is weakly left of that of T ← y. Hence the new box of (T ← y) ← x is strictly below and weakly left of the new box of T ← y.


Since the bumping route moves to the left we know that T ← x has the shape of a partition. Also, if x = x1 < x2 < · · · < xp is the bumping sequence and xi occupies position (i, j) in T then we know that xi is greater than all the entries to the left of (i, j) in the ith row. Since the bumping route moves to the left we know that, in T ← x, xi lies weakly left of column j in the (i + 1)st row. Hence the entry immediately above xi in T ← x is less than xi. These observations confirm that if T is a tableau then so is T ← x.

We now state another technical property of the row bumping algorithm. If T is a tableau (or a partition) an outside corner of T is a box (i, j) such that neither (i + 1, j) nor (i, j + 1) are boxes of T . Deleting an outside corner produces another tableau. If the largest entry of T is n then n must lie at the end of a row and at the base of a column and hence must occupy an outside corner. Hence deleting the largest entry from a tableau always produces a tableau.

Lemma 2.2.2. Let T be a tableau containing a maximal element n and let x < n be such that x is not in T . Then inserting x into T and then deleting n yields the same tableau as deleting n and then inserting x.

Proof. If n is not an element of the bumping sequence x = x1 < x2 < · · · < xm of T ← x then the result is clear. If n is an element of the bumping sequence then we must have n = xm since n is maximal. Now let T ′ be the tableau obtained from T by deleting n. Then the bumping sequence of T ′ ← x is x1 < x2 < · · · < xm−1. Hence T ← x and T ′ ← x only differ by the box containing n and the result follows.

2.3 The Robinson-Schensted Correspondence

A word without repetitions (from now on simply word) is a sequence of positive integers without repetition. If w = w1w2 . . . wn is a word there is a corresponding tableau, denoted ∅ ← w, obtained by successively inserting the elements of w starting with the empty tableau. In symbols:

∅ ← w = (. . . ((∅ ← w1) ← w2) . . . ) ← wn

For example if w = 4125 then:

∅ ← w = (((∅ ← 4)← 1)← 2)← 5 = (( 4 ← 1)← 2)← 5 = ( 14← 2)← 5 = 1 2

4← 5 = 1 2 5

4

The Robinson-Schensted correspondence gives a correspondence between permutations and pairs of standard tableaux of the same shape. Given w = w1w2 . . . wn ∈ Symn the algorithm is as follows: Let P (0) = Q(0) = ∅ and recursively define P (i) and Q(i) by:

1) P (i+1) = P (i) ← wi+1

2) Q(i+1) is obtained from Q(i) by adding a box containing i + 1 in the location of the new box of P (i) ← wi+1.

Thus P (n) = ∅ ← w and Q(n) records the order in which the new boxes of P (n) are created. We then set P (w) = P (n) and Q(w) = Q(n). P (w) is referred to as the P -symbol or insertion tableau and Q(w) is referred to as the Q-symbol or recording tableau. We write w ∼ (P (w), Q(w)) to indicate that w corresponds to (P (w), Q(w)) under the Robinson-Schensted correspondence.
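The whole correspondence is short to program. A sketch (naming ours), with tableaux represented as lists of rows; applied to w = 45132 it reproduces the tableaux of the worked example that follows:

```python
def robinson_schensted(w):
    """(P, Q)-symbols of the permutation w, tableaux as lists of rows."""
    P, Q = [], []
    for step, x in enumerate(w, start=1):
        i = 0
        while i < len(P):
            row = P[i]
            j = next((k for k, entry in enumerate(row) if entry > x), None)
            if j is None:          # x is largest: new box at the end of row i
                row.append(x)
                break
            row[j], x = x, row[j]  # x bumps row[j] into the next row
            i += 1
        else:
            P.append([x])          # bumped off the bottom: new row
        if i < len(Q):             # record the creation time of the new box
            Q[i].append(step)
        else:
            Q.append([step])
    return P, Q
```

The recording tableau Q is built purely from the locations of new boxes, never moved afterwards, which is why it ends up standard.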


For example if w = 45132 we have:

P (1) = 4    P (2) = 4 5    P (3) = 1 5    P (4) = 1 3    P (5) = 1 2
                                    4               4 5            3 5
                                                                   4

Q(1) = 1    Q(2) = 1 2    Q(3) = 1 2    Q(4) = 1 2    Q(5) = 1 2
                                 3              3 4           3 4
                                                              5

Hence w = 45132 ∼ (P (5), Q(5)).

Note from the construction that Q(i) always has the same shape as P (i). Also, since Q(i+1) is obtained from Q(i) by adding i + 1 either in a new row at the bottom or at the end of a row, it is clear that Q(i) is always a standard tableau. Lastly, since w is a permutation, w = w1w2 . . . wn consists of all the integers {1, 2, . . . , n} and so P is standard. Hence, given a permutation w ∈ Symn, the Robinson-Schensted correspondence does indeed yield a pair of standard tableaux of the same shape.

Theorem 2.3.1. The Robinson-Schensted correspondence between elements w ∈ Symn and pairs (P,Q) of standard tableaux of size n and the same shape is a bijection.

Proof. Notice that, if we are given T ← x together with the location of the new box of T ← x, we can recover T and x uniquely. We start at the new box (i, j) and label its entry xi. If i = 1 then x = x1 and T is the tableau obtained by deleting the last box of the first row of T ← x. Otherwise we have i > 1 and so xi must have been bumped by an element in the row above. Now the only element which could have bumped xi is the largest entry smaller than xi in the row above. Label this entry xi−1. Continuing in this fashion we regain the bumping sequence x1 < x2 < · · · < xi. Hence we have x = x1 and T is the tableau formed by shifting each xi up one row into the location previously occupied by xi−1. This process is known as the reverse bumping algorithm.

Hence, given (P,Q) we fix the location (i, j) of n in Q. Then the remarks above show that there is only one possible P (n−1) and xn such that both P = P (n−1) ← xn and the new box of P (n−1) ← xn is (i, j). Deleting the (i, j)th element of Q we obtain Q(n−1). Repeating this argument with P (n−1) and Q(n−1) in place of P and Q we uniquely obtain P (n−2), Q(n−2) and xn−1. Each time we repeat this procedure we remove one of the elements from P (i) and obtain a new pair of tableaux P (i−1), Q(i−1) and a uniquely determined integer xi. Upon completion we are left with a string x1x2 . . . xn of entries of P in some order. Since at every iteration the xi is uniquely determined we conclude that x1x2 . . . xn is the only possible permutation corresponding to (P,Q). Hence we have demonstrated an inverse with domain all pairs of standard tableaux of size n and the same shape, and so the Robinson-Schensted correspondence is a bijection.
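The reverse bumping argument can likewise be sketched (names ours): `inverse_rs` recovers the permutation from (P, Q) by repeatedly locating the largest entry of Q and unwinding one insertion:

```python
def inverse_rs(P, Q):
    """Invert the Robinson-Schensted correspondence by reverse bumping."""
    P = [list(row) for row in P]
    Q = [list(row) for row in Q]
    word = []
    for step in range(sum(len(row) for row in Q), 0, -1):
        # the box of `step` in Q is the new box of the step-th insertion;
        # being the largest entry left in Q, it is an outside corner
        i = next(r for r, row in enumerate(Q) if step in row)
        Q[i].remove(step)
        if not Q[i]:
            Q.pop(i)
        x = P[i].pop()             # entry of P occupying that box
        if not P[i]:
            P.pop(i)
        for r in range(i - 1, -1, -1):
            # x was bumped from row r by the largest entry smaller than x
            row = P[r]
            j = max(k for k, entry in enumerate(row) if entry < x)
            row[j], x = x, row[j]
        word.append(x)
    word.reverse()
    return word
```

Applied to the pair computed for w = 45132 above, this returns the word 45132 again.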

2.4 Partitions Revisited

One of the most beautiful properties of the Robinson-Schensted correspondence is the symmetry theorem, which we will prove using the concept of the growth diagram. However, before we can introduce the growth diagram we need to develop some notation surrounding partitions.

There are a number of set-theoretic concepts associated with partitions which can be interpreted intuitively by considering the corresponding diagrams. If λ = (λ1, λ2, . . . , λn) and µ = (µ1, µ2, . . . , µm) are partitions then we define (λ ∪ µ)i = max{λi, µi} for 1 ≤ i ≤ max{m,n}. For example:

if λ = and µ = then λ ∪ µ =

We say that µ ⊆ λ if m ≤ n and µi ≤ λi for all i ≤ m. Geometrically, µ ⊆ λ means that the diagram of µ “sits inside” λ. Clearly if λ ⊆ µ and λ and µ both partition the same n then λ = µ. If µ ⊆ λ then the skew diagram, denoted λ\µ, is the diagram obtained by omitting the squares of µ from λ:

if µ = and λ = then λ\µ =

Now call a chain of partitions ∅ = λ0 ⊂ λ1 ⊂ · · · ⊂ λn saturated if each skew diagram λi+1\λi contains precisely one square. There is an obvious bijection between saturated chains of partitions ∅ = λ0 ⊂ λ1 ⊂ · · · ⊂ λn and standard tableaux of size n: we simply number the squares of λn in the order in which they appear in the chain. For example the chain ∅ ⊂ (1) ⊂ (2) ⊂ (2, 1) ⊂ (3, 1) ⊂ (3, 1, 1) ⊂ (3, 2, 1) corresponds to the tableau:

1 2 4
3 6
5

Also, since the largest element of a tableau must occupy an outside corner (see the remarks prior to Lemma 2.2.2) it is clear that the shape of T with the largest element removed is also the diagram of a partition. Hence all tableaux can be associated with a saturated chain of partitions by taking the shapes of tableaux obtained by successively deleting the largest entry.

With the correspondence between chains of partitions and standard tableaux we can give alternate descriptions of the P and Q-symbols of a permutation:

Lemma 2.4.1. Let w = w1w2 . . . wn ∈ Symn. Let P (i) = ∅ ← w1w2 . . . wi be as in Section 2.3. Also, let w(j) be the word obtained from w by omitting elements greater than j. Then:

(i) Shape(∅ ← w(1)) ⊂ Shape(∅ ← w(2)) ⊂ · · · ⊂ Shape(∅ ← w(n)) corresponds to P (w).
(ii) Shape(P (1)) ⊂ Shape(P (2)) ⊂ · · · ⊂ Shape(P (n)) corresponds to Q(w).

Proof. Let i be the location of n in w so that w = w1w2 . . . wi−1nwi+1 . . . wn. Then an immediate corollary of Lemma 2.2.2 is that if T = ∅ ← w1w2 . . . wi−1n then deleting n from T and then inserting wi+1 . . . wn produces the same result as inserting wi+1 . . . wn and then deleting n. Hence the P -symbol of w with n deleted is the same as ∅ ← w(n − 1). Hence the box containing n in P is the unique box in Shape(∅ ← w(n))\Shape(∅ ← w(n − 1)). Hence we have (i) by induction on n.

In Section 2.3 we defined Q(i+1) to be obtained from Q(i) by adding i + 1 in the new box of P (i) ← wi+1. Hence Q(1) corresponds to ∅ ⊂ Shape(P (1)), Q(2) corresponds to ∅ ⊂ Shape(P (1)) ⊂ Shape(P (2)), etc. Hence Q(w) corresponds to ∅ ⊂ Shape(P (1)) ⊂ Shape(P (2)) ⊂ · · · ⊂ Shape(P (n)).

2.5 Growth Diagrams and the Symmetry Theorem

We now describe the construction of the growth diagram. Given a permutation w = w1w2 . . . wn ∈ Symn we can associate an n × n array of boxes in which we label the box in the ith column and wi-th row with an X (with rows indexed from bottom to top and columns from left to right). For example if w = 452613 ∈ Sym6 then the associated array looks like:


. . . X . .
. X . . . .
X . . . . .
. . . . . X
. . X . . .
. . . . X .
4 5 2 6 1 3

(dots mark empty boxes; the label under each column records the row of the X in that column)

Now define the (i, j)th partial permutation, denoted w(i, j), to be the word formed by reading off, from left to right, the row number of each X below and left of (i, j). By convention, if either i or j is 0 then w(i, j) = ∅. In our example the (5, 4)th partial permutation is 452:

. . . X . .
. X . . . .
X . . . . .
. . . . . X
. . X . . .
. . . . X .
4 5 2 6 1 3

(the contributing X's are those in rows 1–5 and columns 1–4, namely those in columns 1, 2 and 3, giving 4, 5, 2)

Now define the growth diagram of w to be the n × n array with X's inserted as above, in which the (i, j)th entry contains the shape of the tableau formed by row-bumping the (i, j)th partial permutation into the empty set. It is useful to also include the base and left hand side of the array (that is, those locations with i = 0 or j = 0) in the growth diagram and label them with the empty set. Using the convention mentioned above (that w(0, j) = w(i, 0) = ∅) we have that the (i, j)th entry is Shape(∅ ← w(i, j)) for 0 ≤ i, j ≤ n.

In our example we have that ∅ ← 452 is the tableau with rows 2 5 and 4, and so the (5, 4)th entry of the growth diagram of w is the partition (2, 1).

The full growth diagram looks like:

(partitions written in sequence notation; an X marks the square in column j and row wj)

i=6:  (1)    (2)    (2,1)  X(3,1)  (3,1,1)  (3,2,1)
i=5:  (1)   X(2)    (2,1)   (2,1)  (2,1,1)  (2,2,1)
i=4: X(1)    (1)    (1,1)   (1,1)  (1,1,1)  (2,1,1)
i=3:   ∅      ∅      (1)     (1)    (1,1)  X(2,1)
i=2:   ∅      ∅     X(1)     (1)    (1,1)   (1,1)
i=1:   ∅      ∅       ∅       ∅    X(1)     (1)
i=0:   ∅      ∅       ∅       ∅      ∅       ∅
      j=1    j=2     j=3     j=4    j=5     j=6

(together with a column of ∅'s at j = 0)


Given the growth diagram of a permutation w ∈ Symn we can immediately recover the P and Q-symbols of w. If r(i, j) denotes the diagram in the ith row and jth column then by Lemma 2.4.1 we have that r(0, n) ⊂ r(1, n) ⊂ · · · ⊂ r(n, n) corresponds to P (w) and r(n, 0) ⊂ r(n, 1) ⊂ · · · ⊂ r(n, n) corresponds to Q(w).

Notice that, in our example, the partitions “grow” upwards and rightwards. That is, if λ is above and right of µ then µ ⊆ λ. This is true in general and is the subject of the following lemma:

Lemma 2.5.1. Let r(i, j) denote the partition appearing in the ith row and jth column of the growth diagram of w ∈ Symn. Then if i ≤ k and j ≤ l then r(i, j) ⊆ r(k, l).

Proof. It is enough to show that r(i, j) ⊆ r(i + 1, j) and r(i, j) ⊆ r(i, j + 1) for all 1 ≤ i, j ≤ n. If w(i, j) = w1w2 . . . wk then w(i, j + 1) = w(i, j) or w(i, j + 1) = w1w2 . . . wks for some s ≤ i, depending on whether there is an X in the same column and below (i, j + 1). In the first case r(i, j) = r(i, j + 1) and in the second case we have:

r(i, j) = Shape(∅ ← w1w2 . . . wk) ⊂ Shape(∅ ← w1w2 . . . wks) = r(i, j + 1)

Hence, r(i, j) ⊆ r(i, j + 1) for all 1 ≤ i, j ≤ n.

As above, if w(i, j) = w1w2 . . . wk then w(i + 1, j) = w(i, j) or w(i + 1, j) = w1w2 . . . wl(i + 1)wl+1 . . . wk for some l ≤ j, depending on whether there is an X in the same row and left of (i + 1, j). In the first case we have r(i + 1, j) = r(i, j). In the second case an immediate corollary to Lemma 2.2.2 gives us that ∅ ← w1w2 . . . wk is equal to ∅ ← w1w2 . . . wl(i + 1)wl+1 . . . wk with i + 1 deleted. Hence r(i, j) can be obtained from r(i + 1, j) by removing a box and so r(i, j) ⊂ r(i + 1, j).

Now consider the diagram r(i, j) which appears in the ith row and jth column. We want to develop rules which relate r(i, j) to the diagrams to the left and below r(i, j) so that the growth diagram can be constructed inductively. To simplify notation we will let ρ = r(i, j), µ = r(i, j − 1), λ = r(i − 1, j − 1) and ν = r(i − 1, j):

row i:      . . .  µ  ρ
row i − 1:  . . .  λ  ν
                      ↑
                  column j

Assume first that the (i, j)th square does not contain an X. Then there are four possibilities (throughout, “column below” refers to the set of boxes below and in the same column as ρ and “row to the left” refers to those boxes to the left and in the same row as ρ):

1) There are no X's in the column below or the row to the left. Clearly λ = ν = µ and ρ = λ.

2) There is an X in the column below but not in the row to the left. In this case clearly ρ = ν. Here the (i, j − 1)th partial permutation is the (i − 1, j)th partial permutation truncated by one and so µ ⊂ ν. Hence we can equally well write ρ = ν ∪ µ.

3) There is an X in the row to the left but not in the column below. Here the (i − 1, j)th partial permutation is the (i, j − 1)th word truncated by one and so ν ⊂ µ. As in (2) we have ρ = µ ∪ ν.


4) There is an X both in the column below and in the row to the left. First assume that µ ≠ ν. Then we know that µ ⊂ ρ and ν ⊂ ρ and hence ν ∪ µ ⊂ ρ. But we also know that ρ has only two more boxes than λ (by considering the size of the corresponding partial permutations). Since µ ≠ ν, µ ∪ ν has two more squares than λ and so we must have ρ = µ ∪ ν.

The most difficult case is when µ = ν. Let T (µ), T (λ) and T (ν) be the tableaux given by ∅ ← w(i, j − 1), ∅ ← w(i − 1, j − 1) and ∅ ← w(i − 1, j) respectively (that is, they are the tableaux used to construct the growth diagram). We know (from Lemma 2.2.2) that T (µ) with i deleted is equal to T (λ) and, in this case, the shape of T (ν) is equal to the shape of T (µ). Hence the new box of T (λ) ← wk must be in the same location as i in T (µ). Hence inserting wk into T (µ) places wk in the box previously occupied by i and bumps i into the next row. Hence if s is the unique integer such that µs = λs + 1 (the row number of i in µ) then ρt = µt if t ≠ s + 1 and ρs+1 = µs+1 + 1 (since i is bumped into the next row).

If (i, j) contains an X then clearly µ = λ = ν. If w(i − 1, j − 1) = w1w2 . . . wk then w(i, j) = w1w2 . . . wk i, and hence the shape of ∅ ← w1w2 . . . wk i is the shape of ∅ ← w1w2 . . . wk with one box added to the first row (since i is greater than all the letters wt). Hence ρt = λt when t ≠ 1 and ρ1 = λ1 + 1.

Hence we have the following “local rules” for ρ based only on λ, µ and ν:

1) If the (i, j)th square does not contain an X and λ = µ = ν then ρ = λ.
2) If the (i, j)th square does not contain an X and µ ≠ ν then ρ = µ ∪ ν.
3) If the (i, j)th square does not contain an X and λ ≠ µ = ν then let s be the unique integer such that µs = λs + 1. Then ρt = µt if t ≠ s + 1 and ρs+1 = µs+1 + 1.
4) If the (i, j)th square does contain an X then λ = µ = ν and ρt = λt if t ≠ 1, with ρ1 = λ1 + 1.

The above discussion shows that these rules are equivalent to our previous definition of the growth diagram. Hence, given a permutation we could equally well construct the array of X's, label the base and left hand side with ∅'s and use the local rules to construct the growth diagram.
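The local rules are easy to implement, and doing so gives an independent check of this equivalence. In the sketch below (names ours, and following the conventions above: values index rows, positions index columns) partitions are tuples, the square (i, j) carries an X when wj = i, and the border is labelled with empty partitions:

```python
def union(a, b):
    """Union of two partitions (componentwise maximum)."""
    m = max(len(a), len(b))
    return tuple(max((a + (0,) * m)[k], (b + (0,) * m)[k]) for k in range(m))

def growth_diagram(w):
    """Growth diagram of w computed from the four local rules alone.
    r[i][j] is the partition in row i, column j; an X sits at (w_j, j)."""
    n = len(w)
    r = [[()] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            lam, mu, nu = r[i-1][j-1], r[i][j-1], r[i-1][j]
            if w[j - 1] == i:                        # rule 4: X at (i, j)
                rho = (lam[0] + 1,) + lam[1:] if lam else (1,)
            elif mu != nu:                           # rule 2
                rho = union(mu, nu)
            elif lam != mu:                          # rule 3: lam != mu == nu
                pad = lam + (0,) * (len(mu) - len(lam))
                s = next(k for k in range(len(mu)) if mu[k] == pad[k] + 1)
                rho = (mu[:s+1] + (mu[s+1] + 1,) + mu[s+2:]
                       if s + 1 < len(mu) else mu + (1,))
            else:                                    # rule 1
                rho = lam
            r[i][j] = rho
    return r

def chain_to_tableau(chain):
    """Standard tableau of a saturated chain: number boxes by appearance."""
    rows = []
    for step in range(1, len(chain)):
        prev, cur = chain[step - 1], chain[step]
        k = next(a for a in range(len(cur))
                 if a >= len(prev) or cur[a] != prev[a])
        if k == len(rows):
            rows.append([])
        rows[k].append(step)
    return rows
```

Reading off the right edge and the top edge of the output via `chain_to_tableau` recovers the P- and Q-symbols, by Lemma 2.4.1.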

The proof of this important theorem is now straightforward:

Theorem 2.5.2. (Symmetry Theorem) If w ∼ (P,Q) under the Robinson-Schensted correspondence then w−1 ∼ (Q,P ).

Proof. The importance of the local rules is that they are perfectly symmetrical in µ and ν. So if G is the growth diagram of w, then reflecting G along the line y = x yields the growth diagram of w−1. Hence if r(i, j) corresponds to the (i, j)th partition in G then the P -symbol of w−1 corresponds to r(n, 0) ⊂ r(n, 1) ⊂ · · · ⊂ r(n, n), which is the Q-symbol of w. Similarly the Q-symbol of w−1 is equal to the P -symbol of w.

2.6 Knuth Equivalence

Using the language of tableaux it is possible to split elements of the symmetric group into equivalence classes based on their P or Q-symbol. These equivalence classes turn out to be of fundamental importance when we come to look at representations of the Hecke algebras. For this reason it is essential to be able to decide when two elements of the symmetric group share a P or Q-symbol without calculating their tableaux explicitly. The answer to this involves the notion of Knuth equivalence. Throughout this section we will only concern ourselves with the problem of deciding whether two permutations share a P -symbol. The theory that we develop below combined with the symmetry theorem answers the analogous question for Q-symbols.

Let w = w1w2 . . . wm be a word and let {x, y, z} with x < y < z be a set of three adjacent elements in some order. An elementary Knuth transformation is a reordering of w using one of the following rules:

. . . zxy . . . ↔ . . . xzy . . .

. . . yxz . . . ↔ . . . yzx . . .

(where the rest of w remains unchanged). We say that two words v and w are Knuth equivalent, and write v ≡ w, if one can be transformed into the other by a series of the elementary Knuth transformations. The following proposition shows that Knuth equivalent words share the same P -symbol.

Proposition 2.6.1. Let u and v be words with u ≡ v. Then P (u) = P (v).

Proof. It is enough to show that an elementary Knuth transformation does not alter the P -symbol. This is equivalent to showing that if T is a tableau not containing x, y or z then T ← zxy = T ← xzy and T ← yzx = T ← yxz. We show this by induction on the number of rows of T .

So assume that T has one row. We show that T ← zxy = T ← xzy. One can show T ← yxz = T ← yzx in the case when T has one row by a similar examination of cases. Now, label the entries of T as t1 < t2 < · · · < tm. There are seven possibilities which we examine case by case:

1) tm < x < y < z:

   T ← zxy  =  t1 t2 . . . tm x y  =  T ← xzy
               z

2) ti−1 < x < ti < · · · < tm < y < z:

   T ← zxy  =  t1 t2 . . . x . . . tm y  =  T ← xzy
               ti z

3) ti−1 < x < ti < · · · < tj−1 < y < tj < · · · < tm < z:

   T ← zxy  =  t1 t2 . . . x . . . y . . . tm z  =  T ← xzy
               ti tj

4) ti−1 < x < ti < · · · < tj−1 < y < tj < · · · < tk−1 < z < tk < · · · < tm:

   T ← zxy  =  t1 t2 . . . x . . . y . . . z . . . tm  =  T ← xzy
               ti tj
               tk

5) ti−1 < x < y < ti < · · · < tk−1 < z < tk < · · · < tm:

   T ← zxy  =  t1 t2 . . . x y . . . z . . . tm  =  T ← xzy   (tj := ti+1)
               ti tj
               tk

6) ti−1 < x < ti < · · · < tk−1 < y < z < tk < · · · < tm:

   T ← zxy  =  t1 t2 . . . x . . . y . . . tm  =  T ← xzy
               ti z
               tk


7) ti−1 < x < y < z < ti:

   T ← zxy  =  t1 t2 . . . x y . . . tm  =  T ← xzy   (tj := ti+1)
               z tj
               ti

Hence the result is true if T has one row.

Now assume that T has r rows. Let R be the first row and S be the tableau obtained from T by deleting the first row. Let w be the word bumped into the next row by R ← zxy and w′ the corresponding word for R ← xzy. Then, by examining the above cases, there are three possibilities. In cases 1, 2 and 3 we have w = w′ and so S ← w = S ← w′ and hence T ← zxy = T ← xzy. In cases 4, 5 and 6, w has the form z′x′y′ with x′ < y′ < z′ and w′ = x′z′y′ and hence, by induction, S ← w = S ← w′, forcing T ← zxy = T ← xzy. Finally, in case 7, w has the form y′x′z′ with x′ < y′ < z′ and w′ = y′z′x′. Hence we still have S ← w = S ← w′ by induction. That T ← yxz = T ← yzx in the general case follows by a similar induction.

Given a tableau T we form the tableau word, denoted w(T ), by reading off the entries of the tableau from left to right and bottom to top. For example if:

T =  1 3 4 5
     2 7 9
     6 8

Then w(T ) = 682791345. Given a tableau word w1w2 . . . wn we can recover the rows of the tableau by splitting the word into its increasing sequences. In the example above we split up w(T ) as 68|279|1345, from which we regain the tableau's rows in order from bottom to top. The following lemma justifies our choice of the order in which the tableau word is formed:

Lemma 2.6.2. If T is a tableau then T = ∅ ← w(T ).

Proof. This is just a matter of examining the row bumping algorithm as w(T ) is inserted. Let w(T ) = v1v2 . . . va|va+1 . . . vb| . . . | . . . vm be the tableau word of T broken up into increasing sequences. Then:

∅ ← v1v2 . . . va = v1 v2 . . . va

Now, va+1 < v1, va+2 < v2, etc. (each entry of a row is smaller than the entry below it), and so inserting the second increasing sequence bumps the whole first row down:

∅ ← v1v2 . . . va va+1 . . . vb  =  va+1 . . . vb
                                    v1 . . . va

This process continues: each successive insertion of an increasing sequence shifts all the rows down and places the new row on top, until the original tableau is regained.
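Lemma 2.6.2 is easily confirmed mechanically. In this sketch (names ours) `tableau_word` reads a tableau bottom row first and `insert_word` performs the successive insertions:

```python
def tableau_word(T):
    """w(T): entries read left to right, bottom row to top row."""
    word = []
    for row in reversed(T):
        word.extend(row)
    return word

def insert_word(word):
    """The tableau obtained by row-bumping the word into the empty tableau."""
    P = []
    for x in word:
        for row in P:
            j = next((k for k, entry in enumerate(row) if entry > x), None)
            if j is None:          # x is largest in this row: place it here
                row.append(x)
                break
            row[j], x = x, row[j]  # bump and continue into the next row
        else:
            P.append([x])          # new bottom row
    return P
```

For the tableau T of the example, `tableau_word(T)` is 682791345 and inserting it returns T, as the lemma asserts.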

Lemma 2.6.3. If T is a tableau then w(T ← v) ≡ w(T )v.

Proof. First consider the case when T has one row, so that w(T ) = w1w2 . . . wm with w1 < w2 < · · · < wm. If v is larger than all of the first row then w(T ← v) = w1w2 . . . wmv = w(T )v and so w(T ← v) ≡ w(T )v. So assume that v bumps wi, so that wj > v for all j ≥ i and wj < v if j < i. Then:

w(T )v = w1 w2 . . . wi−1 wi wi+1 . . . wm−1 wm v
       ≡ w1 w2 . . . wi−1 wi wi+1 . . . wm−1 v wm   (yzx ↔ yxz)
       ≡ w1 w2 . . . wi−1 wi v wi+1 . . . wm−1 wm   (yzx ↔ yxz)
       ≡ w1 w2 . . . wi wi−1 v wi+1 . . . wm−1 wm   (xzy ↔ zxy)
       ≡ wi w1 w2 . . . wi−1 v wi+1 . . . wm−1 wm   (xzy ↔ zxy)
       = w(T ← v)

Now if T has more than one row, label the words associated with the rows of T as r1, r2, . . . , rp so that w(T ) = rp rp−1 . . . r2 r1. Label the rows of T ← v as r′1, r′2, . . . , r′q (with q = p or q = p + 1) and let v = v1 < v2 < · · · < vr be the bumping sequence of T ← v. From above we have that ri vi ≡ vi+1 r′i for all i ≤ r. Hence:

w(T )v = rp rp−1 . . . r2 r1 v
       ≡ rp rp−1 . . . r2 v2 r′1
       ≡ rp rp−1 . . . v3 r′2 r′1
       ⋮
       ≡ r′q r′q−1 . . . r′2 r′1
       = w(T ← v)

The above Lemma allows us to prove:

Theorem 2.6.4. Let u, v ∈ Symn. Then u ≡ v if and only if their P -symbols coincide.

Proof. We have already seen (Proposition 2.6.1) that u ≡ v implies that P (u) = P (v). So assume that P (u) = P (v). Then repeated application of the above lemma shows that u ≡ w(∅ ← u). Hence u ≡ w(∅ ← u) = w(∅ ← v) ≡ v and so u and v are Knuth equivalent.
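Theorem 2.6.4 can be experimented with directly. The sketch below (names ours) generates the Knuth class of a word by breadth-first search over elementary transformations and compares P -symbols:

```python
def p_symbol(word):
    """P-symbol of a word via the row bumping algorithm."""
    P = []
    for x in word:
        for row in P:
            j = next((k for k, entry in enumerate(row) if entry > x), None)
            if j is None:
                row.append(x)
                break
            row[j], x = x, row[j]
        else:
            P.append([x])
    return P

def _transforms(a, b, c):
    """Elementary Knuth transformations applicable to the triple (a, b, c)."""
    out = []
    if b < c < a or a < c < b:     # zxy <-> xzy: swap the first two letters
        out.append((b, a, c))
    if b < a < c or c < a < b:     # yxz <-> yzx: swap the last two letters
        out.append((a, c, b))
    return out

def knuth_class(word):
    """All words reachable from `word` by elementary Knuth transformations."""
    seen, queue = {tuple(word)}, [tuple(word)]
    while queue:
        w = queue.pop()
        for k in range(len(w) - 2):
            for triple in _transforms(w[k], w[k + 1], w[k + 2]):
                v = w[:k] + triple + w[k + 3:]
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen
```

For w = 45132 the class has five members (45132, 41532, 45312, 43512, 43152), all sharing the P -symbol computed in Section 2.3, in accordance with the theorem.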

2.7 Tableau Descent Sets and Superstandard Tableaux

Recall that in Chapter 1 we defined the left and right descent sets of a permutation w = w1w2 . . . wn ∈ Symn as the sets L(w) = {r ∈ S | rw < w} and R(w) = {r ∈ S | wr < w}. We also showed in Lemma 1.4.1 that si ∈ R(w) if and only if wi > wi+1. We wish to introduce a similar concept for tableaux. If P is a tableau, let D(P ) be the set of i for which i + 1 lies strictly below and weakly left of i in P . We call this the tableau descent set. For example:

if P =
1 3 7 8
2 5 9
4 6
then D(P) = {1, 3, 5, 8}

The following proposition shows that the left and right descent sets of a permutation w ∈ Symn can be characterised entirely in terms of the descent sets of the P and Q-symbols of w:

Proposition 2.7.1. Let w ∈ Symn and suppose that w ∼ (P, Q). Then:
(i) si ∈ L(w) if and only if i ∈ D(P).
(ii) si ∈ R(w) if and only if i ∈ D(Q).


Proof. We prove (ii) first. This is a matter of reinterpreting the Row Bumping Lemma (Lemma 2.2.1). Fix i and let R = ∅ ← w1w2 . . . wi−1 (with R = ∅ if i = 1). If si ∉ R(w) then wi < wi+1 (Lemma 1.4.1) and by the Row Bumping Lemma the new box of R ← wi is strictly left and weakly below the new box of (R ← wi) ← wi+1. Thus i is strictly left and weakly below i + 1 in Q and so i ∉ D(Q). On the other hand, if si ∈ R(w) then wi > wi+1 (Lemma 1.4.1 again) and the Row Bumping Lemma gives us that the new box of (R ← wi) ← wi+1 is weakly left and strictly below the new box of R ← wi, and so i ∈ D(Q). Hence si ∈ R(w) if and only if i ∈ D(Q).

For (i), we have si ∈ L(w) if and only if si ∈ R(w⁻¹) (Lemma 1.4.2) which, by (ii), occurs if and only if i ∈ D(Q(w⁻¹)) = D(P) (since Q(w⁻¹) = P(w) by the Symmetry Theorem).
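Proposition 2.7.1 is easy to verify exhaustively for a small symmetric group. The sketch below (our own helper names) runs RSK with a recording tableau and compares permutation descents with tableau descents over all of Sym4. Note that in a standard tableau, i + 1 lies strictly below and weakly left of i exactly when it lies in a lower row, which the code exploits.

```python
from itertools import permutations

def rsk(w):
    """Row-insert w, recording when each box is created: returns (P, Q)."""
    P, Q = [], []
    for step, v in enumerate(w, start=1):
        r = 0
        while True:
            if r == len(P):                    # start a new row at the bottom
                P.append([v]); Q.append([step]); break
            bump = next((j for j, x in enumerate(P[r]) if x > v), None)
            if bump is None:
                P[r].append(v); Q[r].append(step); break
            P[r][bump], v = v, P[r][bump]      # bump into the next row
            r += 1
    return P, Q

def tableau_descents(T):
    """D(T): i+1 lies strictly below (hence weakly left of) i."""
    row_of = {x: r for r, row in enumerate(T) for x in row}
    return {i for i in range(1, len(row_of)) if row_of[i + 1] > row_of[i]}

def right_descents(w):
    """R(w) via Lemma 1.4.1: s_i is a right descent iff w_i > w_{i+1}."""
    return {i for i in range(1, len(w)) if w[i - 1] > w[i]}

# the example from the text
assert tableau_descents([[1, 3, 7, 8], [2, 5, 9], [4, 6]]) == {1, 3, 5, 8}

# Proposition 2.7.1 over Sym4 (using L(w) = R(w^{-1}))
for w in permutations(range(1, 5)):
    P, Q = rsk(w)
    winv = tuple(sorted(range(1, 5), key=lambda i: w[i - 1]))
    assert right_descents(w) == tableau_descents(Q)        # part (ii)
    assert right_descents(winv) == tableau_descents(P)     # part (i)
```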

It will be an important question in what follows as to what extent a tableau is determined by its descent set. It is easy to see that two tableaux can have the same descent set and be unequal: for example

1 2
3 4

and

1 2 4
3

both have descent set {2}. One might hope that the shape of a tableau together with its descent set determines it uniquely. This also turns out to be false as can be seen by considering the following two tableaux:

P =
1 3 4
2 7
5 8
6

Q =
1 3 4
2 5
6 7
8

Clearly P ≠ Q but both tableaux have descent set {1, 4, 5, 7}.

Because of these problems we seek a suitable ‘test’ tableau P which has the property that any tableau with the same shape and descent set must be equal to P. To this end, let λ be a partition and define the column superstandard tableau of shape λ, denoted Sλ, to be the tableau obtained from λ by filling the diagram of λ with 1, 2, . . . successively down each column. For example, if λ = (3, 3, 2, 1) then the column superstandard tableau of shape λ is:

Sλ =
1 5 8
2 6 9
3 7
4

The following lemma shows that Sλ has our required ‘test’ property:

Lemma 2.7.2. If Shape(P) = λ and D(Sλ) ⊆ D(P), then P = Sλ.

Proof. We claim that there is only one way in which to fill a diagram of λ with {1, 2, . . . , n} in order to satisfy the descent set condition. Let c1 ≥ c2 ≥ · · · ≥ cm be the column lengths of Sλ. Then {1, 2, . . . , c1 − 1} ⊂ D(Sλ) ⊂ D(P) and so, in filling the diagram of λ, 2 must lie below 1, 3 must lie below 2, etc. So the first column must consist of 1, 2, . . . , c1. We have now filled the first column of λ and so c1 + 1 must lie in the first box of the second column. We can then repeat the same argument to get that the second column of P must consist of c1 + 1, c1 + 2, . . . , c1 + c2. Proceeding in this fashion we see that there is only one way to fill a diagram of λ so that D(Sλ) is a subset of the descent set. Hence any tableau of shape λ whose descent set contains D(Sλ) must be equal to Sλ. Thus P = Sλ.
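The column-by-column construction of Sλ, and the ‘test’ property of Lemma 2.7.2 for a small shape, can be sketched as follows (function names are ours).

```python
def superstandard(shape):
    """S_lambda: fill the diagram of shape with 1, 2, ... down successive columns."""
    T = [[None] * part for part in shape]
    k = 1
    for j in range(shape[0]):                  # columns, left to right
        for i in range(len(shape)):
            if shape[i] > j:
                T[i][j] = k
                k += 1
    return T

def descents(T):
    """Tableau descent set: i such that i+1 lies in a strictly lower row."""
    row_of = {x: r for r, row in enumerate(T) for x in row}
    return {i for i in range(1, len(row_of)) if row_of[i + 1] > row_of[i]}

S = superstandard((3, 3, 2, 1))
assert S == [[1, 5, 8], [2, 6, 9], [3, 7], [4]]    # the example in the text
assert descents(S) == {1, 2, 3, 5, 6, 8}

# Lemma 2.7.2 for shape (2, 2): of the two standard tableaux of that shape,
# only S itself has a descent set containing D(S).
S22 = superstandard((2, 2))
other = [[1, 2], [3, 4]]                           # the only other SYT of shape (2,2)
assert descents(S22) == {1, 3}
assert not descents(S22) <= descents(other)
```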


2.8 The Dominance Order

If λ and µ are partitions we say that λ is dominated by µ, and write λ E µ, if λ1 + λ2 + · · · + λk ≤ µ1 + µ2 + · · · + µk for all k ∈ N. Clearly, λ E λ for all partitions λ. If λ E µ E λ then λ1 ≤ µ1 ≤ λ1 and so λ1 = µ1. Similarly, λ1 + λ2 ≤ µ1 + µ2 ≤ λ1 + λ2 and so λ2 = µ2. We can continue in this fashion to see that λi = µi for all i and so λ = µ. Lastly, if λ E µ E π then λ1 + · · · + λk ≤ µ1 + · · · + µk ≤ π1 + · · · + πk for all k and so λ E π. These calculations verify that E is a partial order. We call E the dominance order. In general the dominance order is not a total order: for example, (3, 1, 1, 1) and (2, 2, 2) are incomparable partitions of 6.
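The partial-order axioms just verified, and the incomparable pair, can be checked exhaustively for partitions of 6. The helper names below are ours.

```python
def dominated_by(lam, mu):
    """lam E mu: each partial sum of lam is at most the corresponding sum of mu."""
    s = t = 0
    for k in range(max(len(lam), len(mu))):
        s += lam[k] if k < len(lam) else 0
        t += mu[k] if k < len(mu) else 0
        if s > t:
            return False
    return True

def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

P6 = list(partitions(6))
assert len(P6) == 11
# (3,1,1,1) and (2,2,2) are incomparable in the dominance order
assert not dominated_by((3, 1, 1, 1), (2, 2, 2))
assert not dominated_by((2, 2, 2), (3, 1, 1, 1))
# E is reflexive, antisymmetric and transitive on partitions of 6
assert all(dominated_by(l, l) for l in P6)
assert all(not (dominated_by(l, m) and dominated_by(m, l))
           for l in P6 for m in P6 if l != m)
assert all(dominated_by(l, p) for l in P6 for m in P6 for p in P6
           if dominated_by(l, m) and dominated_by(m, p))
```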

We give a more intuitive interpretation of the dominance order as follows. Recall that a box in a partition or tableau is indexed by a pair (i, j) (where i is the number of rows from the top and j is the number of columns from the left) and that an outside corner is a box (i, j) such that neither (i + 1, j) nor (i, j + 1) are boxes of λ. Now define an inside corner as a location (i, j) which is not a box of λ, such that either j = 1 and (i − 1, j) is a box of λ, i = 1 and (i, j − 1) is a box of λ, or (i − 1, j) and (i, j − 1) are boxes of λ. For example, in the following diagram (of the partition (5, 4, 2, 1, 1)) the outside corners are marked with an ‘o’ and the inside corners with an ‘i’:

· · · · o i
· · · o i
· o i
· i
o
i

Given a partition λ, an inside corner (i, j) and an outside corner (k, l) strictly below (i, j), we let λ′ be obtained from λ by deleting the box at (k, l) and inserting a box at (i, j). We say that λ′ is obtained from λ by a raising operation. Intuitively, raising operations correspond to sliding outside corners upwards into inside corners. For example, we can apply two raising operations to the partition (2, 1, 1) using the outside corner (3, 1): corresponding to the inside corner (2, 2) we get (2, 2), and corresponding to the inside corner (1, 3) we get (3, 1). The following diagram illustrates that all partitions of 6 can be obtained from the partition (1, 1, 1, 1, 1, 1) = (1⁶) by applying raising operations:

(1⁶) → (2, 1⁴) → (2, 2, 1, 1) → (3, 1, 1, 1) and (2, 2, 2) → (3, 2, 1) → (4, 1, 1) and (3, 3) → (4, 2) → (5, 1) → (6)

(at the two branch points each of the two listed partitions is obtained from the common predecessor and leads to the common successor; each such pair is incomparable)

It turns out that the dominance order can be entirely characterised in terms of raising operations:

Lemma 2.8.1. Let µ and λ be partitions of the same weight. Then µ E λ if and only if λ can be obtained from µ by a sequence of raising operations.

Proof. If µ′ is obtained from µ by a raising operation then µ′i = µi + 1 and µ′j = µj − 1 for some i < j, and hence µ E µ′. This shows that if λ can be obtained from µ by raising operations then µ E λ.


For the opposite implication we induct on n = Σ|λk − µk|. If n = 0 then λ = µ and there is nothing to prove. So assume that n > 0 and fix the first i for which µi < λi. Then µi < µi−1 (otherwise µi = µi−1 = λi−1 ≥ λi) and so (i, µi + 1) is an inside corner. Since µi < λi and |µ| = |λ| we must have either µj > λj for some j or l(µ) > l(λ).

If µj > λj for some j, fix the largest j for which this holds. Then µj > µj+1 (otherwise µj = µj+1 ≤ λj+1 ≤ λj) and so (j, µj) is an outside corner. Now, let µ′ be obtained from µ by performing a raising operation from the outside corner (j, µj) to the inside corner (i, µi + 1). Then:

µ′k = µk       if k ≠ i, j
      µk + 1   if k = i
      µk − 1   if k = j

Since µi < λi and µj > λj we have Σ|λk − µ′k| < n and we can conclude, by induction, that λ can be obtained from µ′ (and hence from µ) by raising operations.

On the other hand, if l(µ) > l(λ) then (l(µ), µl(µ)) is an outside corner and so we let µ′ be obtained from µ by performing a raising operation from the outside corner (l(µ), µl(µ)) to the inside corner (i, µi + 1). Then µ′ is given by:

µ′k = µk       if k ≠ i, l(µ)
      µk + 1   if k = i
      µk − 1   if k = l(µ)

Since µi < λi and λl(µ) = 0 we have Σ|λk − µ′k| < n and so the result follows by induction as above.
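Lemma 2.8.1 can be confirmed by brute force on partitions of 6: the set of partitions reachable from µ by raising operations should be exactly the set of λ with µ E λ. A sketch (our own code):

```python
def dominated_by(lam, mu):
    """lam E mu: each partial sum of lam is at most the corresponding sum of mu."""
    s = t = 0
    for k in range(max(len(lam), len(mu))):
        s += lam[k] if k < len(lam) else 0
        t += mu[k] if k < len(mu) else 0
        if s > t:
            return False
    return True

def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def raising_moves(mu):
    """Partitions obtained from mu by one raising operation (one box moved up)."""
    out = set()
    for j in range(len(mu)):              # row losing a box (outside corner)
        for i in range(j):                # row gaining a box, strictly above
            nu = list(mu)
            nu[i] += 1
            nu[j] -= 1
            if all(nu[k] >= nu[k + 1] for k in range(len(nu) - 1)):
                out.add(tuple(x for x in nu if x > 0))
    return out

def reachable(mu):
    """Closure of {mu} under raising operations."""
    seen, todo = {mu}, [mu]
    while todo:
        for nu in raising_moves(todo.pop()):
            if nu not in seen:
                seen.add(nu)
                todo.append(nu)
    return seen

# Lemma 2.8.1: lam is reachable from mu by raising operations iff mu E lam
mu = (2, 2, 1, 1)
assert reachable(mu) == {lam for lam in partitions(6) if dominated_by(mu, lam)}
assert reachable((1,) * 6) == set(partitions(6))
```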

Recall that in the previous section we introduced the column superstandard tableau Sλ as a useful ‘test’ tableau. The following proposition shows that Sλ also has useful properties with respect to the dominance order:

Proposition 2.8.2. Let λ be a partition of n and P a standard tableau of size n satisfying D(P) ⊇ D(Sλ). Then Shape(P) E λ, with equality of shapes if and only if P = Sλ.

Proof. The statement that Shape(P) = λ if and only if P = Sλ is Lemma 2.7.2. It remains to show that Shape(P) E λ. The proof is by induction on n, with the case n = 1 being obvious. Now, fix n and let r and s be the row number of n in Sλ and P respectively. Also, let P′ be the tableau obtained from P by deleting the box containing n. Define λ′ by:

λ′i = λi − 1   if i = r
      λi       if i ≠ r

Thus, Sλ′ is obtained from Sλ by deleting the box containing n. Lastly, define µ′ = Shape(P′) and µ = Shape(P). We have D(Sλ′) = D(Sλ)\{n − 1} and D(P′) = D(P)\{n − 1} and so D(Sλ′) ⊂ D(P′). Thus we can apply induction to conclude that Shape(P′) E λ′. There are two possibilities:

Case 1: n − 1 ∉ D(Sλ). In this case n does not lie below n − 1 in Sλ and so must belong to the first row. Hence λ1 > λ′1 and so:

µ′1 + µ′2 + · · · + µ′k < λ1 + λ2 + · · · + λk   for all k   (2.8.1)

Now, for some s, µs = µ′s + 1 and µi = µ′i if i ≠ s. Hence:

µ1 + µ2 + · · · + µk ≤ λ1 + λ2 + · · · + λk   for all k

Hence µ E λ.

Case 2: n − 1 ∈ D(Sλ). By definition of Sλ we know that {n − 1, n − 2, . . . , n − r + 1} ⊆ D(Sλ) ⊆ D(P). Hence, in P, n must occur below n − 1, n − 1 must occur below n − 2, . . . , and n − r + 2 must occur below n − r + 1. Hence s ≥ r. But λi = λ′i and µi = µ′i except when i = r and i = s respectively, in which case λi = λ′i + 1 and µi = µ′i + 1. Given that µ′ E λ′, the fact that s ≥ r immediately implies µ E λ.
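Proposition 2.8.2 can also be verified exhaustively for n = 4, since every standard tableau of size 4 arises as a P-symbol under RSK. The check below re-uses the small helpers sketched earlier (all names are ours).

```python
from itertools import permutations

def insert(word):
    """Row-insert a word into the empty tableau (the P-symbol)."""
    T = []
    for v in word:
        r = 0
        while True:
            if r == len(T):
                T.append([v]); break
            bump = next((j for j, x in enumerate(T[r]) if x > v), None)
            if bump is None:
                T[r].append(v); break
            T[r][bump], v = v, T[r][bump]
            r += 1
    return T

def descents(T):
    row_of = {x: r for r, row in enumerate(T) for x in row}
    return {i for i in range(1, len(row_of)) if row_of[i + 1] > row_of[i]}

def superstandard(shape):
    T = [[None] * part for part in shape]
    k = 1
    for j in range(shape[0]):
        for i in range(len(shape)):
            if shape[i] > j:
                T[i][j] = k
                k += 1
    return T

def dominated_by(lam, mu):
    s = t = 0
    for k in range(max(len(lam), len(mu))):
        s += lam[k] if k < len(lam) else 0
        t += mu[k] if k < len(mu) else 0
        if s > t:
            return False
    return True

tableaux = []
for w in permutations(range(1, 5)):
    P = insert(w)
    if P not in tableaux:
        tableaux.append(P)            # all 10 standard tableaux of size 4

for lam in [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]:
    S = superstandard(lam)
    for P in tableaux:
        if descents(S) <= descents(P):
            shape = tuple(len(row) for row in P)
            assert dominated_by(shape, lam)          # Shape(P) E lambda
            assert (shape == lam) == (P == S)        # equality iff P = S_lambda
```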

2.9 Notes

1. The Robinson-Schensted correspondence (as well as the more general Robinson-Schensted-Knuth correspondence) is well known. A good general reference is Fulton [10].

2. Schutzenberger has developed an entirely different framework for viewing the combinatorics of tableaux known as jeu de taquin (which is French for ‘teasing game’). In this framework we define a skew-tableau as a set of tiles in N × N which can be manipulated by performing certain ‘slides’. We allow four slides (viewing empty squares as having the value ∞): in each, an empty square is exchanged with the entry x immediately to its right or the entry y immediately below it, always taking the smaller of the two (here x < y), so that rows and columns remain increasing.

Schutzenberger describes the construction of the P-symbol of a permutation as follows. Let λn = (n, n − 1, . . . , 2, 1) be the ‘staircase diagram’. Now, given a permutation w = w1w2 . . . wm, insert the permutation into λn\λn−1 from bottom to top and perform slides until a non-skew tableau is obtained. For example, if w = 4312 ∈ Sym4 this process begins with the skew-tableau below (writing only the occupied boxes) and slides it into shape:

· · · 2
· · 1
· 3
4

∼ · · · ∼

1 2
3
4

Note that this is the same tableau as that obtained by row inserting 4312 into ∅. Similarly define the Q-symbol of the permutation as the tableau obtained by sliding into shape the ‘staircase tableau’ of w⁻¹. To see that these two alternative definitions are equivalent one defines the word of a skew-tableau and shows that it is Knuth equivalent to any other word obtained from the skew-tableau by jeu de taquin slides. See Schutzenberger [27].

3. Most authors refer to a very different proof of the symmetry theorem due to Knuth [21]. However this proof gives little insight into why the theorem is true. The method of proof via growth diagrams was discovered by Fomin [9]. Most authors refer to Stanley [30] for an understanding of growth diagrams. Stanley first introduces the “local rules” and then shows inductively that the entries of the growth diagram can be given an alternate description in terms of partial permutations. Our approach in deriving the local rules from the intuitive definition seems more motivated.

4. Knuth equivalence is treated in most books on tableaux. Our proof of the main theorem (that two permutations share the same P-symbol if and only if they are Knuth equivalent) is unique in that it uses only two basic lemmas (that w(T ← x) ≡ w(T)x and that u ≡ v implies P(u) = P(v)) to derive the result.


5. The term ‘raising operation’ is due to Macdonald [23]. He treats them in the more general setting of vectors in Zn. Here partitions of a fixed length emerge as a fundamental domain for the natural action of Symn.

6. Our treatment of the descent set of a tableau and superstandard tableaux is based on Garsia-McLarnan [11]. Their proof of Proposition 2.8.2 uses an elegant combinatorial argument. We could not reproduce it here because it relies on the concept of semi-standard tableaux (tableaux in which it is possible to have repeated entries), which we have not treated.


3 The Hecke Algebra and Kazhdan-Lusztig Basis

In this chapter we introduce the Hecke algebra of the symmetric group. As outlined in the introduction, the Hecke algebra provides a useful structure to discuss the representation theory of both the symmetric group and the general linear group over a finite field. However, the representation theory of the Hecke algebra is difficult. The goal of this chapter is to introduce the Kazhdan-Lusztig basis (and the associated Kazhdan-Lusztig polynomials) as a means of making the representation theory of the Hecke algebra more tractable.

3.1 The Hecke Algebra

Let A = Z[q, q⁻¹] be the ring of Laurent polynomials in the indeterminate q. The Hecke algebra of the symmetric group, denoted Hn(q), is the algebra over A with identity element Tid generated by elements Ti for 1 ≤ i < n, subject to the relations:

TiTj = TjTi   if |i − j| ≥ 2   (3.1.1a)
TiTi+1Ti = Ti+1TiTi+1   for 1 ≤ i < n − 1   (3.1.1b)
Ti² = (q − 1)Ti + qTid   (3.1.1c)

The reader may notice the similarity between the above relations and those of the symmetric group in Theorem 1.2.11. In fact, if we set q = 1, then Hn(q) is isomorphic to the group algebra of Symn. It is for this reason that Hn(q) is often referred to as a deformation of the symmetric group algebra; as in the case of the symmetric group, (3.1.1a) and (3.1.1b) are known as the braid relations. The last relation (3.1.1c) is known as the quadratic relation.

It will be useful to fix a set of elements which span Hn(q). If si1si2 . . . sim and sj1sj2 . . . sjm are reduced expressions for w ∈ Symn then, by Corollary 1.2.3, it is possible to obtain sj1sj2 . . . sjm from si1si2 . . . sim using only the braid relations. Hence, we must have Ti1Ti2 . . . Tim = Tj1Tj2 . . . Tjm since the braid relations also hold in Hn(q). So if we define Tw = Ti1Ti2 . . . Tim we get a well defined element of Hn(q) for each w ∈ Symn. In particular, Ti = Tsi for all si ∈ S.

Now, fix w ∈ Symn and let sj ∈ S be arbitrary. If wsj > w and si1si2 . . . sim is a reduced expression for w then si1si2 . . . simsj is a reduced expression for wsj. So, by the way that we have defined Tw, we have TwTsj = Twsj. On the other hand, if wsj < w then, by Corollary 1.1.7, w has a reduced expression si1si2 . . . sim ending in sj (so that wsj = si1si2 . . . sim−1). Hence:

TwTsj = Ti1Ti2 . . . TimTj
      = Ti1Ti2 . . . Tim−1Tim²   (since im = j)
      = Ti1Ti2 . . . Tim−1((q − 1)Tim + qTid)
      = (q − 1)Tw + qTwsj

Therefore we have:

TwTsj = Twsj                  if wsj > w
        (q − 1)Tw + qTwsj     if wsj < w      (3.1.2)

An identical argument yields a similar identity for TsjTw (see (3.4.1)). These identities show that left and right multiplication by Tj (= Tsj) map the A-span of the Tw for w ∈ Symn into itself. Since Hn(q) is generated by the Ti we can conclude that the Tw span Hn(q).
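The multiplication rule (3.1.2) is easy to model on a computer. The sketch below (our own naming, not from the essay) represents an element of Hn(q) as a dictionary from permutations in one-line notation to coefficients, and uses Lemma 1.4.1 (ℓ(wsj) > ℓ(w) iff wj < wj+1). To stay dependency-free we evaluate coefficients at a generic rational value of q; the identities checked are polynomial, so this is a faithful spot-check.

```python
from fractions import Fraction

q = Fraction(7, 3)               # a generic rational value of the indeterminate q
ID = (1, 2, 3)

def times_gen(h, j):
    """Right-multiply h = {w: coeff} by T_{s_j} using (3.1.2)."""
    out = {}
    def add(w, c):
        out[w] = out.get(w, 0) + c
    for w, c in h.items():
        ws = w[:j - 1] + (w[j], w[j - 1]) + w[j + 1:]
        if w[j - 1] < w[j]:      # l(w s_j) > l(w):  T_w T_{s_j} = T_{w s_j}
            add(ws, c)
        else:                    # l(w s_j) < l(w):  the quadratic relation applies
            add(w, c * (q - 1))
            add(ws, c * q)
    return out

def T(word):
    """T_w for a reduced word, built up generator by generator."""
    h = {ID: 1}
    for j in word:
        h = times_gen(h, j)
    return h

# T_{s_1} T_{s_1} = (q - 1) T_{s_1} + q T_id   (the quadratic relation)
assert times_gen(T([1]), 1) == {(2, 1, 3): q - 1, ID: q}
# the braid relation: T_{s_1}T_{s_2}T_{s_1} = T_{s_2}T_{s_1}T_{s_2} = T_{s_1 s_2 s_1}
assert T([1, 2, 1]) == T([2, 1, 2]) == {(3, 2, 1): 1}
```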


We will not prove the following theorem. For the proof see, for example, Humphreys [15] or Mathas [24].

Theorem 3.1.1. Hn(q) is free as an A–module with basis Tw for w ∈ Symn.

We will refer to {Tw | w ∈ Symn} as the standard basis.

3.2 The Linear Representations of Hn(q)

Recall that a representation of Hn(q) is a ring homomorphism ρ : Hn(q) → EndA(M) where M is some A-module. We consider the representations afforded by Hn(q) when the module M is as simple as possible: the ring A itself. Such representations are called linear representations. We have that EndA A ≅ A (since every element ϕ ∈ EndA A is uniquely determined by ϕ(1)) and so representations φ : Hn(q) → EndA A are equivalent to homomorphisms ρ : Hn(q) → A. Now, recalling our convention that homomorphisms must preserve the identity, we have that ρ(Tid) = 1. Using the relation (3.1.1c) we get:

(ρ(Ti))² = ρ(Ti²) = ρ((q − 1)Ti + qTid) = (q − 1)ρ(Ti) + q

Hence (ρ(Ti) + 1)(ρ(Ti) − q) = 0. Thus either ρ(Ti) = q or ρ(Ti) = −1. Now, if ρ(Ti) = q then (3.1.1b) forces ρ(Tj) = q for all j. Similarly, if ρ(Ti) = −1 then we have ρ(Tj) = −1 for all j. If w ∈ Symn is arbitrary we can choose a reduced expression w = si1si2 . . . sim and since ρ is a homomorphism we have:

ρ(Tw) = ρ(Ti1Ti2 . . . Tim) = ρ(Ti1)ρ(Ti2) . . . ρ(Tim)

Hence we have either ρ(Tw) = q^ℓ(w) or ρ(Tw) = (−1)^ℓ(w).

It is straightforward to verify that both possibilities preserve the relations in (3.1.1) and thereforedefine representations of Hn(q). These two representations occur throughout Kazhdan-Lusztigtheory and are given special notation: we write qw = q`(w) and εw = (−1)`(w). These are theq-analogues of the trivial and sign representations of the symmetric group.
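The two scalar solutions can be spot-checked numerically (again at a generic rational value of q, which suffices because the relation is polynomial):

```python
from fractions import Fraction

q = Fraction(7, 3)                        # a generic rational value of q

# the two linear representations send T_i to q and to -1 respectively;
# both satisfy the quadratic relation (3.1.1c): x^2 = (q - 1)x + q
for x in (q, Fraction(-1)):
    assert x * x == (q - 1) * x + q

# any other scalar fails it; for instance x = 1 (unless q = 1):
assert Fraction(1) != (q - 1) * 1 + q
```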

3.3 Inversion in Hn(q)

The involution ι which we will introduce in the next section is central to the definition of the Kazhdan-Lusztig basis for Hn(q). However, before we can introduce ι, we need to better understand how inversion works in Hn(q). We start with a technical lemma:

Lemma 3.3.1. Let r1r2 . . . rm be a (possibly unreduced) subexpression of a reduced expression for w. Then for some ax ∈ A we have:

Tr1Tr2 . . . Trm = Σ_{x≤w} ax Tx

Proof. If r1r2 . . . rm is reduced then Tr1Tr2 . . . Trm = Tr1r2...rm and the result follows since r1r2 . . . rm ≤ w by Proposition 1.3.2. So assume that r1r2 . . . rm is unreduced and fix the first i for which ℓ(r1r2 . . . ri) > ℓ(r1r2 . . . ri+1), so that r1r2 . . . ri is reduced. Also, ℓ(r1r2 . . . ri+1) = i − 1 (since ℓ(r1r2 . . . ri) = i) and so, by the deletion condition, there exist p and q such that r1r2 . . . r̂p . . . r̂q . . . ri+1 (a hat denoting an omitted term) is a reduced expression for r1r2 . . . ri+1. Hence:

Tr1Tr2 . . . Trm = Tr1r2...ri Tri+1 Tri+2 . . . Trm   (since r1r2 . . . ri is reduced)
= ((q − 1)Tr1r2...ri + qTr1...ri+1) Tri+2 . . . Trm   (since r1r2 . . . ri+1 < r1r2 . . . ri)
= (q − 1)Tr1Tr2 . . . T̂ri+1 . . . Trm + qTr1 . . . T̂rp . . . T̂rq . . . Trm   (since r1 . . . r̂p . . . r̂q . . . ri+1 is reduced)

Now r1r2 . . . r̂i+1 . . . rm and r1r2 . . . r̂p . . . r̂q . . . rm are both subexpressions of a reduced expression for w (since r1r2 . . . rm is) and both have fewer than m terms. Hence the result follows by induction on m.

This allows us to prove:

Proposition 3.3.2. For all w ∈ Symn the element Tw is invertible. Moreover, for all w ∈ Symn there exist Rx,w ∈ A with Rw,w = q^{−ℓ(w)} such that:

T_{w⁻¹}⁻¹ = Σ_{x≤w} Rx,w Tx

Proof. A simple calculation shows that, for all r ∈ S, Tr is invertible with inverse

Tr⁻¹ = q⁻¹Tr + (q⁻¹ − 1)Tid   (3.3.1)

If w ∈ Symn is arbitrary, fix a reduced expression r1r2 . . . rm for w. Then:

T_{w⁻¹}⁻¹ = T_{(r1r2...rm)⁻¹}⁻¹
= (TrmTrm−1 . . . Tr1)⁻¹
= (Tr1)⁻¹(Tr2)⁻¹ . . . (Trm)⁻¹
= (q⁻¹Tr1 + (q⁻¹ − 1)Tid)(q⁻¹Tr2 + (q⁻¹ − 1)Tid) . . . (q⁻¹Trm + (q⁻¹ − 1)Tid)

Now, expanding the right hand side we see that every term is of the form Tri1Tri2 . . . Trik where ri1ri2 . . . rik is a subexpression of r1r2 . . . rm. By the above lemma, for all such subexpressions we can write Tri1Tri2 . . . Trik as a linear combination of Tx with x ≤ w. Hence we can write the right hand side above as a linear combination of Tx with x ≤ w. Since r1r2 . . . rm is reduced the only way that Tw can emerge is (q⁻¹Tr1)(q⁻¹Tr2) . . . (q⁻¹Trm) = q^{−ℓ(w)}Tw and hence Rw,w = q^{−ℓ(w)}.
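Using the same dictionary model of Hn(q) as before (our own code, evaluated at a generic rational value of q), we can trace this proof for w = s1s2 in Sym3: build T_{w⁻¹}⁻¹ as a product of the inverses (3.3.1), read off R_{w,w} = q^{−ℓ(w)}, and confirm that the product with T_{w⁻¹} is T_id.

```python
from fractions import Fraction

q = Fraction(7, 3)
ID = (1, 2, 3)

def times_gen(h, j):
    """Right multiplication by T_{s_j}, as in (3.1.2)."""
    out = {}
    def add(w, c):
        out[w] = out.get(w, 0) + c
    for w, c in h.items():
        ws = w[:j - 1] + (w[j], w[j - 1]) + w[j + 1:]
        if w[j - 1] < w[j]:
            add(ws, c)
        else:
            add(w, c * (q - 1))
            add(ws, c * q)
    return out

def times_invgen(h, j):
    """Right multiplication by T_{s_j}^{-1} = q^{-1} T_{s_j} + (q^{-1} - 1) T_id."""
    out = {w: c / q for w, c in times_gen(h, j).items()}
    for w, c in h.items():
        out[w] = out.get(w, 0) + c * (1 / q - 1)
    return out

# w = s1 s2 (one-line (2,3,1)), so w^{-1} = s2 s1 (one-line (3,1,2)).
# Following the proof, T_{w^{-1}}^{-1} = T_{s1}^{-1} T_{s2}^{-1}:
X = times_invgen(times_invgen({ID: 1}, 1), 2)
assert X[(2, 3, 1)] == q ** -2              # R_{w,w} = q^{-l(w)} with l(w) = 2

# sanity check: T_{w^{-1}} X = T_id
Y = times_gen(times_gen({ID: 1}, 2), 1)     # T_{s2} T_{s1} = T_{w^{-1}}
Z = times_invgen(times_invgen(Y, 1), 2)
assert all(c == (1 if w == ID else 0) for w, c in Z.items())
```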

3.4 An Involution and an anti-Involution

Let R be a ring and ϕ : R → R a function. We say that ϕ is an involution if it is a homomorphism and has order two (that is, ϕ² is the identity on R). We say that ϕ is an anti-involution if it has order two and satisfies ϕ(a + b) = ϕ(a) + ϕ(b) and ϕ(ab) = ϕ(b)ϕ(a) for all a, b ∈ R (note the reversal of order). In this chapter we introduce an involution ι and an anti-involution ∗ which are fundamental tools in what follows: the involution ι is crucial to the definition of the Kazhdan-Lusztig basis, and ∗ will be useful in relating the many left and right identities as well as proving crucial when we come to discuss cellular algebras in Chapter 5.


We begin with the anti-involution ∗. Define Tsi∗ = Tsi and extend so that ∗ is A-linear and (TxTy)∗ = Ty∗Tx∗ for all x, y ∈ Symn. Note that this implies that (Ti1Ti2 . . . Tim)∗ = TimTim−1 . . . Ti1. To verify that ∗ is well defined it is enough to verify that ∗ preserves the relations in (3.1.1). However this is straightforward:

(TiTj)∗ = TjTi = TiTj = (TjTi)∗   if |i − j| ≥ 2
(TiTi+1Ti)∗ = TiTi+1Ti = Ti+1TiTi+1 = (Ti+1TiTi+1)∗
(Ti²)∗ = Ti² = (q − 1)Ti + qTid = ((q − 1)Ti + qTid)∗

If w ∈ Symn fix a reduced expression si1si2 . . . sim. Then simsim−1 . . . si1 is a reduced expression for w⁻¹ by Lemma 1.1.3 and Tw∗ = (Ti1 . . . Tim)∗ = TimTim−1 . . . Ti1 = T_{w⁻¹}. Thus we could have defined ∗ by:

( Σ_{y∈Symn} ay Ty )∗ = Σ_{y∈Symn} ay T_{y⁻¹}

This makes it clear that ∗ has order 2. Since, by definition, ∗ satisfies (TxTy)∗ = Ty∗Tx∗ and ∗ is A-linear, we have (ab)∗ = b∗a∗ for all a, b ∈ Hn(q). Thus ∗ is an anti-involution.

To illustrate the usefulness of ∗ we derive a left multiplication formula using (3.1.2). Applying ∗ yields:

(TwTr)∗ = TrT_{w⁻¹} = T_{(wr)⁻¹} = T_{rw⁻¹}          if wr > w
                      (q − 1)T_{w⁻¹} + qT_{rw⁻¹}     if wr < w

Now Lemma 1.4.2 shows that wr > w if and only if (wr)⁻¹ = rw⁻¹ > w⁻¹ and similarly for wr < w. Substituting w for w⁻¹ yields:

TrTw = Trw                   if rw > w
       (q − 1)Tw + qTrw      if rw < w      (3.4.1)

This is our desired left-handed relation. An argument similar to the one used above is often used to gain identities when only a left-hand or right-hand identity is known.

We now introduce the involution ι. Recall that the Hecke algebra is defined over the ring A = Z[q, q⁻¹] of Laurent polynomials in q. Define a function ¯ : A → A by F̄(q) = F(q⁻¹) for all F(q) ∈ A. It is straightforward to verify that this defines a homomorphism of A of order 2 and hence is an involution. In fact, this involution extends to Hn(q):

Proposition 3.4.1. The involution ¯ : A → A extends to an involution ι of the whole of Hn(q) by defining:

ι( Σ_{w∈Symn} Fw(q) Tw ) = Σ_{w∈Symn} F̄w(q) T_{w⁻¹}⁻¹

Proof. Define ι(Ti) = Ti⁻¹ and extend ι multiplicatively by letting ι(Ti1Ti2 . . . Tim) = ι(Ti1)ι(Ti2) . . . ι(Tim). In particular ι(Tw) = ι(Ti1)ι(Ti2) . . . ι(Tik) if si1si2 . . . sik is a reduced expression for w. Lastly, extend ι additively by defining:

ι( Σ_{w∈Symn} Fw(q) Tw ) = Σ_{w∈Symn} F̄w(q) ι(Tw)


To show that ι is a homomorphism it is enough to verify that ι preserves the relations of (3.1.1). That is, that:

ι(Ti)ι(Tj) = ι(Tj)ι(Ti)   if |i − j| ≥ 2   (3.4.2a)
ι(Ti)ι(Ti+1)ι(Ti) = ι(Ti+1)ι(Ti)ι(Ti+1)   (3.4.2b)
ι(Ti)² = (q⁻¹ − 1)ι(Ti) + q⁻¹Tid   (3.4.2c)

By taking inverses in (3.1.1a) and (3.1.1b) we get Ti⁻¹Tj⁻¹ = Tj⁻¹Ti⁻¹ if |i − j| ≥ 2 and Ti⁻¹Ti+1⁻¹Ti⁻¹ = Ti+1⁻¹Ti⁻¹Ti+1⁻¹ for 1 ≤ i < n − 1, which are (3.4.2a) and (3.4.2b). We get (3.4.2c) by substituting the expression for Ti⁻¹ in (3.3.1). Hence ι is a homomorphism. To verify that ι is an involution it is enough to verify that ι²(Ti) = Ti for all 1 ≤ i < n since ι is a homomorphism. Again, this is a straightforward calculation using (3.3.1) and the definition of ι. Lastly, note that if si1si2 . . . sim is a reduced expression for w then

ι(Tw) = ι(Ti1)ι(Ti2) . . . ι(Tim) = Ti1⁻¹Ti2⁻¹ . . . Tim⁻¹ = (TimTim−1 . . . Ti1)⁻¹ = T_{w⁻¹}⁻¹

since simsim−1 . . . si1 is a reduced expression for w⁻¹ (see Lemma 1.1.3).

Our last result of this section relates the action of ι and ∗ on Hn(q):

Proposition 3.4.2. The functions ι and ∗ on Hn(q) commute.

Proof. Let y ∈ Symn be arbitrary and let si1si2 . . . sim be a reduced expression for y (so that simsim−1 . . . si1 is a reduced expression for y⁻¹ by Lemma 1.1.3). Then:

ι(Ty)∗ = (ι(Ti1)ι(Ti2) . . . ι(Tim))∗ = ι(Tim)ι(Tim−1) . . . ι(Ti1) = ι(T_{y⁻¹}) = ι(Ty∗)

Hence ι and ∗ commute on the standard basis. Now let Σ_{y∈Symn} ay Ty ∈ Hn(q) be arbitrary. Then:

( ι( Σ_{y∈Symn} ay Ty ) )∗ = Σ_{y∈Symn} āy ι(Ty)∗ = Σ_{y∈Symn} āy ι(Ty∗) = ι( ( Σ_{y∈Symn} ay Ty )∗ )

3.5 The Kazhdan-Lusztig Basis

In the previous section the involution ι of Hn(q) was introduced. One of the major breakthroughs of Kazhdan and Lusztig [17] was to notice that a special set of elements fixed by ι forms a basis for Hn(q). It turns out that this basis is parametrised by w ∈ Symn and that each element is a linear combination of Tx where x is less than or equal to w in the Bruhat order. Some investigation shows that we can achieve much simpler coefficients of Tx for x ≤ w if we allow our coefficients to lie in the larger ring Z[q^{1/2}, q^{−1/2}]. For example, if r ∈ S and we want A(q)Tr + B(q)Tid to be fixed by ι, the simplest solution with A(q), B(q) ∈ Z[q, q⁻¹] is A(q) = q⁻¹ + 1 and B(q) = −q. On the other hand, if we allow polynomials in the larger ring Z[q^{1/2}, q^{−1/2}] we can have A(q^{1/2}) = q^{−1/2} and B(q^{1/2}) = −q^{1/2}:

ι(q^{−1/2}Tr − q^{1/2}Tid) = q^{1/2}(q⁻¹Tr + (q⁻¹ − 1)Tid) − q^{−1/2}Tid   (by (3.3.1))
                           = q^{−1/2}Tr + q^{−1/2}Tid − q^{1/2}Tid − q^{−1/2}Tid
                           = q^{−1/2}Tr − q^{1/2}Tid


For this reason (and others mentioned in the notes to this chapter) we redefine A to be the larger ring Z[q^{1/2}, q^{−1/2}]. Also, for the sake of clarity, we will often omit the parenthesised q^{1/2} when referring to a polynomial in A. Thus we will write F rather than F(q^{1/2}).

For the rest of this section we will be concerned with offering a proof of the following fundamental theorem of Kazhdan and Lusztig [17]:

Theorem 3.5.1. For all w ∈ Symn there exists a unique element Cw such that ι(Cw) = Cw and

Cw = Σ_{y≤w} εyεw qw^{1/2} qy⁻¹ Py,w Ty

where Py,w ∈ Z[q] ⊂ A is of degree at most ½(ℓ(w) − ℓ(y) − 1) for y < w and Pw,w = 1.

Before we prove the theorem notice that, assuming the validity of the theorem, the Cw certainly form a basis. For, if we fix a total ordering of Symn compatible with the Bruhat order, then the matrix of the linear map sending Cw to Tw is upper triangular with powers of q^{−1/2} down the diagonal (since Pw,w = 1). Hence the determinant of the map is a power of q^{−1/2}, which is a unit in A, and so the map is invertible. The Cw basis is referred to as the Kazhdan-Lusztig basis. The polynomials Px,w are the Kazhdan-Lusztig polynomials.

We begin our proof by showing that, for each w ∈ Symn, there is at most one possible Cw which satisfies the conditions of the theorem.

Proof of Uniqueness. For fixed w ∈ Symn we induct on ℓ(w) − ℓ(x) for x ≤ w and argue that there is at most one possible choice for Px,w. This immediately implies the uniqueness of Cw. If ℓ(w) − ℓ(x) = 0 then x = w and so Px,w = 1 by assumption. Now for x < w assume that Py,w is known for all x < y ≤ w. Using that ι(Cw) = Cw we obtain:

Cw = Σ_{z≤w} εzεw qw^{1/2} qz⁻¹ Pz,w Tz = ι( Σ_{y≤w} εyεw qw^{1/2} qy⁻¹ Py,w Ty ) = Σ_{y≤w} εyεw qw^{−1/2} qy P̄y,w T_{y⁻¹}⁻¹

Substituting our expression for T_{y⁻¹}⁻¹ from Section 3.3 yields:

Σ_{z≤w} εzεw qw^{1/2} qz⁻¹ Pz,w Tz = Σ_{y≤w} εyεw qw^{−1/2} qy P̄y,w Σ_{z≤y} Rz,y Tz

Now {Tw} forms a basis for Hn(q) and so we can fix x ≤ w and equate coefficients of Tx to obtain:

εxεw qw^{1/2} qx⁻¹ Px,w = Σ_{x≤y≤w} εyεw qw^{−1/2} qy P̄y,w Rx,y
                        = εxεw qw^{−1/2} qx P̄x,w Rx,x + Σ_{x<y≤w} εyεw qw^{−1/2} qy P̄y,w Rx,y

Rearranging and using the fact that Rx,x = qx⁻¹ (Proposition 3.3.2) yields:

εxεw ( qw^{1/2} qx⁻¹ Px,w − qw^{−1/2} P̄x,w ) = Σ_{x<y≤w} εyεw qw^{−1/2} qy P̄y,w Rx,y

Finally, multiplying both sides by εxεw qx^{1/2} yields:

qw^{1/2} qx^{−1/2} Px,w − qw^{−1/2} qx^{1/2} P̄x,w = Σ_{x<y≤w} εyεx qw^{−1/2} qy qx^{1/2} P̄y,w Rx,y   (3.5.1)

By induction the right hand side is known. Now, by assumption Px,w has degree at most ½(ℓ(w) − ℓ(x) − 1) and so qw^{1/2} qx^{−1/2} Px,w is a polynomial in q^{1/2} in which all terms have degree at least ½. Similarly all terms in qw^{−1/2} qx^{1/2} P̄x,w have degree at most −½. Hence no cancellation occurs between qw^{1/2} qx^{−1/2} Px,w and qw^{−1/2} qx^{1/2} P̄x,w and so (3.5.1) has at most one solution for Px,w.

We now turn to the existence of the Cw. However, before we begin, some further notation is needed. If Cw exists write x ≺ w if Px,w has degree ½(ℓ(w) − ℓ(x) − 1) (the largest allowed by the theorem). Since ½(ℓ(w) − ℓ(x) − 1) is only an integer if ℓ(w) − ℓ(x) ≡ 1 (mod 2) we have εw = −εx if x ≺ w. Define µ(x, w) to be the coefficient of q^{½(ℓ(w)−ℓ(x)−1)} in Px,w. Thus µ(x, w) is only defined if x ≤ w and µ(x, w) ≠ 0 if and only if x ≺ w. If x ≰ w it is conventional (and sensible) to define Px,w = 0. It will be seen that the relation ≺ and the function µ are of fundamental importance.

Proof of Existence. Setting Cid = Tid clearly satisfies the conditions of the theorem. Now consider the case when r ∈ S. We have seen in the discussion prior to the statement of the theorem that q^{−1/2}Tr − q^{1/2}Tid is an element fixed by ι. It is also routine to verify that it satisfies the conditions of the theorem. Hence:

Cr = q^{−1/2}Tr − q^{1/2}Tid   (3.5.2)

Thus the theorem is verified in the case when w lies in S.

We proceed by induction on ℓ(w). Assume that Cz is known for all z < w (so that x ≺ v and µ(x, v) make sense for x, v ∈ Symn such that x ≤ v < w). Since ℓ(w) > 0 there exists r ∈ S such that rw < w. Set v = rw so that Cv is known and rv = w. Now define:

Cw = CrCv − Σ_{z≺v, rz<z} µ(z, v) Cz   (3.5.3)

Since ι is a homomorphism and ι(Cz) = Cz for all z < w (by induction) we have ι(Cw) = Cw. Also, since Cr = q^{−1/2}Tr − q^{1/2}Tid it is clear that Cw is an A-linear combination of elements Ty satisfying y ≤ w. It remains to show that the polynomials Px,w for x ≤ w lie in Z[q] and have the required degree. This requires a careful examination of the terms which arise on the right hand side of (3.5.3). We can rewrite (3.5.3) using our inductive information and (3.5.2) as:

Cw = (q^{−1/2}Tr − q^{1/2}Tid) Σ_{y≤v} εyεv qv^{1/2} qy⁻¹ Py,v Ty − Σ_{x≤z≺v, rz<z} µ(z, v) εxεz qz^{1/2} qx⁻¹ Px,z Tx   (3.5.4)

For fixed x we want to obtain an expression for Px,w by considering the right hand side of (3.5.4).

First assume that rx > x. Then Tx emerges on the right hand side in three ways:

1) In the first sum as TrTrx = (q − 1)Trx + qTx. In this case the coefficient is q^{1/2} εrxεv qv^{1/2} qrx⁻¹ Prx,v.

2) In the first sum as TidTx. Here the coefficient is −q^{1/2} εxεv qv^{1/2} qx⁻¹ Px,v.

3) In the second sum. Here the coefficient is −Σ µ(z, v) εxεz qz^{1/2} qx⁻¹ Px,z with the sum over those z satisfying x ≤ z ≺ v and rz < z.

Since rx > x, ℓ(rx) = ℓ(x) + 1 and so qrx⁻¹ = q⁻¹qx⁻¹. Also, εrxεv = εxεw and so the coefficient of Tx can be written as:

q^{−1/2} εxεw qv^{1/2} qx⁻¹ Prx,v − q^{1/2} εxεv qv^{1/2} qx⁻¹ Px,v − Σ_{x≤z≺v, rz<z} µ(z, v) εxεz qz^{1/2} qx⁻¹ Px,z

Using that qw^{1/2} = q^{1/2} qv^{1/2}, εxεv = −εxεw and εz = εw (since z ≺ v) the coefficient of Tx becomes:

εwεx qw^{1/2} qx⁻¹ ( Px,v + q⁻¹ Prx,v − Σ_{x≤z≺v, rz<z} µ(z, v) qz^{1/2} qw^{−1/2} Px,z )

Equating with εwεx qw^{1/2} qx⁻¹ Px,w, cancelling εwεx qw^{1/2} qx⁻¹ and applying ι we obtain:

Px,w = Px,v + q Prx,v − Σ_{x≤z≺v, rz<z} µ(z, v) qz^{−1/2} qw^{1/2} Px,z   if rx > x   (3.5.5)

Now assume that rx < x. This time Tx emerges on the right hand side of (3.5.4) in four ways:

1) In the first sum as TidTx. In this case the coefficient is −q^{1/2} εxεv qv^{1/2} qx⁻¹ Px,v.

2) In the first sum as TrTx = (q − 1)Tx + qTrx. Here the coefficient is q^{−1/2}(q − 1) εxεv qv^{1/2} qx⁻¹ Px,v.

3) In the first sum as TrTrx = Tx. In this case the coefficient is q^{−1/2} εrxεv qv^{1/2} qrx⁻¹ Prx,v.

4) In the second sum. As above the coefficient is −Σ µ(z, v) εxεz qz^{1/2} qx⁻¹ Px,z with the sum over those z satisfying x ≤ z ≺ v and rz < z.

Since rx < x we have ℓ(rx) = ℓ(x) − 1 and so qrx⁻¹ = q qx⁻¹ and εrx = −εx. Adding the above four coefficients and substituting these identities yields the following coefficient of Tx:

εvqv^{1/2} εxqx⁻¹ ( −q^{1/2} Px,v + (q^{1/2} − q^{−1/2}) Px,v − q^{1/2} Prx,v ) − Σ_{x≤z≺v, rz<z} µ(z, v) εxεz qz^{1/2} qx⁻¹ Px,z

By a similar process to that used in the first case (simplifying and then equating coefficients of Tx in (3.5.3)) we get:

Px,w = q Px,v + Prx,v − Σ_{x≤z≺v, rz<z} µ(z, v) qz^{−1/2} qw^{1/2} Px,z   if rx < x   (3.5.6)

If we define c to be 1 if rx > x and 0 if rx < x we can combine (3.5.5) and (3.5.6) into the following expression for Px,w:

Px,w = q^{1−c} Px,v + q^{c} Prx,v − Σ_{x≤z≺v, rz<z} µ(z, v) qz^{−1/2} qw^{1/2} Px,z   (3.5.7)


Since z ≺ v = rw we have that ℓ(w) − ℓ(z) ≡ 0 (mod 2) and so q_z^{-1/2} q_w^{1/2} ∈ Z[q]. Hence (3.5.7) shows that P_{x,w} is indeed a polynomial in q. It remains to show that P_{x,w} has the required bounded degree.

We first examine the terms in the sum ∑ µ(z,v) q_z^{-1/2} q_w^{1/2} P_{x,z}. If x < z then, by induction, deg P_{x,z} ≤ ½(ℓ(z) − ℓ(x) − 1) and so:

deg µ(z,v) q_z^{-1/2} q_w^{1/2} P_{x,z} ≤ ½(ℓ(w) − ℓ(z)) + ½(ℓ(z) − ℓ(x) − 1) = ½(ℓ(w) − ℓ(x) − 1)

On the other hand, if x = z then P_{x,z} = P_{x,x} = 1 and so:

deg µ(z,v) q_z^{-1/2} q_w^{1/2} P_{x,z} = ½(ℓ(w) − ℓ(x))

Hence, if x ≺ v and rx < x, a term of the form −µ(x,v) q^{½(ℓ(w)−ℓ(x))} occurs in the sum. Moreover, this is the only occurrence of a term of degree greater than ½(ℓ(w) − ℓ(x) − 1).

We now consider the term q^c P_{rx,v}. If c = 1 then rx > x and so ℓ(rx) = ℓ(x) + 1. On the other hand if c = 0 then rx < x and so ℓ(rx) = ℓ(x) − 1. Hence we may write ℓ(rx) = ℓ(x) − 1 + 2c. By induction:

deg q^c P_{rx,v} ≤ ½(ℓ(v) − ℓ(rx) − 1) + c
             = ½(ℓ(w) − 1 − ℓ(x) + 1 − 2c − 1) + c   (since rw = v)
             = ½(ℓ(w) − ℓ(x) − 1)

Thus q^c P_{rx,v} never contributes a term of degree greater than ½(ℓ(w) − ℓ(x) − 1).

We now consider the term q^{1−c} P_{x,v}. In this case we have:

deg q^{1−c} P_{x,v} ≤ ½(ℓ(v) − ℓ(x) − 1) + (1 − c) = ½(ℓ(w) − ℓ(x)) − 1 + (1 − c)

Hence we could have deg q^{1−c} P_{x,v} = ½(ℓ(w) − ℓ(x)) if P_{x,v} has maximal degree and c = 0. This is slightly larger than that permitted in the statement of the theorem. However, in this case x ≺ v (since P_{x,v} has the maximal possible degree) and rx < x (since c = 0). So, from above, we have a term of the form −µ(x,v) q^{½(ℓ(w)−ℓ(x))} occurring in the sum. This cancels the offending term in q^{1−c} P_{x,v}, since µ(x,v) is the coefficient of q^{½(ℓ(w)−ℓ(x))} in q^{1−c} P_{x,v}.
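The recurrence (3.5.7) is completely effective and can be implemented directly for small symmetric groups. The following Python sketch (an illustration added here, not part of Kazhdan and Lusztig's treatment) encodes polynomials as exponent-to-coefficient dicts, picks for each w a simple transposition r with rw < w, and reads off µ(z,v) as the coefficient of degree ½(ℓ(v) − ℓ(z) − 1) in P_{z,v}; for Sym3 it confirms that every P_{x,w} with x ≤ w equals 1.

```python
from itertools import permutations
from functools import lru_cache

N = 3                                    # work in Sym_N

def length(w):                           # Coxeter length = number of inversions
    return sum(1 for i in range(N) for j in range(i + 1, N) if w[i] > w[j])

def mult(u, v):                          # composition of permutations
    return tuple(u[v[i]] for i in range(N))

def transposition(i, j):
    t = list(range(N)); t[i], t[j] = t[j], t[i]
    return tuple(t)

ELTS = list(permutations(range(N)))
REFLECTIONS = [transposition(i, j) for i in range(N) for j in range(i + 1, N)]
SIMPLE = [transposition(i, i + 1) for i in range(N - 1)]

def leq(x, w):
    # Bruhat order as the transitive closure of the covers x < tx,
    # t a reflection with length(tx) == length(x) + 1.
    if x == w:
        return True
    if length(x) >= length(w):
        return False
    return any(leq(mult(t, x), w) for t in REFLECTIONS
               if length(mult(t, x)) == length(x) + 1)

def padd(p, pairs, coeff, shift):        # p += coeff * q**shift * (polynomial)
    for e, a in pairs:
        p[e + shift] = p.get(e + shift, 0) + coeff * a
        if p[e + shift] == 0:
            del p[e + shift]

@lru_cache(maxsize=None)
def P(x, w):
    """Kazhdan-Lusztig polynomial P_{x,w} as sorted (exponent, coefficient) pairs."""
    if x == w:
        return ((0, 1),)
    if not leq(x, w):
        return ()
    r = next(s for s in SIMPLE if length(mult(s, w)) < length(w))
    v = mult(r, w)                       # v = rw < w
    c = 1 if length(mult(r, x)) > length(x) else 0
    p = {}
    padd(p, P(x, v), 1, 1 - c)           # q^{1-c} P_{x,v}
    padd(p, P(mult(r, x), v), 1, c)      # q^{c}   P_{rx,v}
    for z in ELTS:                       # subtract mu(z,v) q_z^{-1/2} q_w^{1/2} P_{x,z}
        if length(mult(r, z)) < length(z) and leq(x, z):
            m = length(v) - length(z) - 1
            mu = dict(P(z, v)).get(m // 2, 0) if m >= 0 and m % 2 == 0 else 0
            if mu:
                padd(p, P(x, z), -mu, (length(w) - length(z)) // 2)
    return tuple(sorted(p.items()))

# Every Kazhdan-Lusztig polynomial in Sym_3 is 1.
assert all(dict(P(x, w)) == {0: 1} for w in ELTS for x in ELTS if leq(x, w))
```

The Bruhat comparisons are recomputed naively, so this is only meant for very small n.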

3.6 Multiplication Formulae

As mentioned in the introduction, the main motivation of Kazhdan and Lusztig in [17] was to better understand the representation theory of Hn(q). As such, it is vital to know how Hn(q) acts on the Kazhdan-Lusztig basis via left and right multiplication. The following theorem gives the first indication of the unique properties of the Kazhdan-Lusztig basis:


Theorem 3.6.1. Let C_w be the Kazhdan-Lusztig basis element corresponding to w ∈ Symn and let r ∈ S be a simple transposition. Then:

T_r C_w = −C_w   if rw < w
T_r C_w = q^{1/2} C_{rw} + q C_w + q^{1/2} ∑_{z≺w, rz<z} µ(z,w) C_z   if rw > w   (3.6.1)

Proof. If rw > w then, by (3.5.3), we have:

C_{rw} = C_r C_w − ∑_{z≺w, rz<z} µ(z,w) C_z

But we know (see (3.5.2)) that C_r = q^{-1/2} T_r − q^{1/2} T_id and so we have:

C_{rw} = q^{-1/2} T_r C_w − q^{1/2} C_w − ∑_{z≺w, rz<z} µ(z,w) C_z

Rearranging and multiplying by q^{1/2} yields the case rw > w.

The first identity (the case rw < w) is not so straightforward. We have (by (3.5.2)):

T_r C_r = q^{-1/2}((q−1)T_r + qT_id) − q^{1/2}T_r = q^{1/2}T_r − q^{-1/2}T_r + q^{1/2}T_id − q^{1/2}T_r = −C_r

And so we may assume, for induction, that the first identity in (3.6.1) is known for all y < w satisfying ry < y. Now:

T_r C_w = T_r ( (q^{-1/2}T_r − q^{1/2}T_id) C_{rw} − ∑_{z≺rw, rz<z} µ(z,rw) C_z )
        = (q^{-1/2}(q−1)T_r + q^{1/2}T_id − q^{1/2}T_r) C_{rw} + ∑_{z≺rw, rz<z} µ(z,rw) C_z   (by induction)
        = (q^{1/2}T_r − q^{-1/2}T_r + q^{1/2}T_id − q^{1/2}T_r) C_{rw} + ∑_{z≺rw, rz<z} µ(z,rw) C_z
        = −(q^{-1/2}T_r − q^{1/2}T_id) C_{rw} + ∑_{z≺rw, rz<z} µ(z,rw) C_z
        = −C_r C_{rw} + ∑_{z≺rw, rz<z} µ(z,rw) C_z
        = −C_w
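Both cases of (3.6.1) can be checked by machine in H_2(q). The following standalone Python sketch (the dict-of-dicts encoding of Hecke algebra elements is an ad-hoc choice for illustration only) tracks coefficients as Laurent polynomials in v = q^{1/2} and verifies T_s C_id = q^{1/2}C_s + qC_id and T_s C_s = −C_s:

```python
# An element of H_2(q) is a dict {basis word: Laurent polynomial}, where a
# Laurent polynomial in v = q^(1/2) is a dict {exponent of v: integer coeff}.

def pmul(a, b):                           # product of Laurent polynomials
    out = {}
    for e1, c1 in a.items():
        for e2, c2 in b.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def smul(poly, elt):                      # scalar multiple of an element
    return {w: pmul(poly, c) for w, c in elt.items()}

def eadd(x, y):                           # sum of two elements
    out = dict(x)
    for w, c in y.items():
        s = dict(out.get(w, {}))
        for e, a in c.items():
            s[e] = s.get(e, 0) + a
        out[w] = {e: a for e, a in s.items() if a}
    return {w: c for w, c in out.items() if c}

def Ts_times(elt):                        # left multiplication by T_s
    out = {}
    for w, c in elt.items():
        if w == 'id':                     # T_s T_id = T_s
            out = eadd(out, {'s': c})
        else:                             # T_s T_s = (q-1) T_s + q T_id
            out = eadd(out, {'s': pmul({2: 1, 0: -1}, c),
                             'id': pmul({2: 1}, c)})
    return out

C_id = {'id': {0: 1}}
C_s = {'s': {-1: 1}, 'id': {1: -1}}       # C_s = q^(-1/2) T_s - q^(1/2) T_id

# rw > w: T_s C_id = q^(1/2) C_s + q C_id  (the sum over z < id is empty)
assert Ts_times(C_id) == eadd(smul({1: 1}, C_s), smul({2: 1}, C_id))
# rw < w: T_s C_s = -C_s
assert Ts_times(C_s) == smul({0: -1}, C_s)
```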

We would like to develop a right-hand version of Theorem 3.6.1. First we need to know how ∗ acts on the Kazhdan-Lusztig basis. This provides some extra information about the Kazhdan-Lusztig polynomials.

Proposition 3.6.2. Let w ∈ Symn. Then C*_w = C_{w^{-1}}. Hence if y ≤ w then P_{y,w} = P_{y^{-1},w^{-1}} and µ(y,w) = µ(y^{-1},w^{-1}).


Proof. We know that ∗ and ι commute and so ι(C*_w) = ι(C_w)* = C*_w. Hence C*_w is ι-invariant. Now C_w has the form:

C_w = ∑_{y≤w} ε_y ε_w q_w^{1/2} q_y^{-1} P_{y,w} T_y

Hence, by the definition of ∗, C*_w has the form:

C*_w = ∑_{y≤w} ε_y ε_w q_w^{1/2} q_y^{-1} P_{y,w} T_{y^{-1}}

Now noting that y ≤ w if and only if y^{-1} ≤ w^{-1}, ℓ(y^{-1}) = ℓ(y) and ℓ(w^{-1}) = ℓ(w), we can rewrite this as:

C*_w = ∑_{y^{-1}≤w^{-1}} ε_{y^{-1}} ε_{w^{-1}} q_{w^{-1}}^{1/2} q_{y^{-1}}^{-1} P_{y,w} T_{y^{-1}}

Now deg P_{y,w} ≤ ½(ℓ(w) − ℓ(y) − 1) = ½(ℓ(w^{-1}) − ℓ(y^{-1}) − 1), and hence C*_w satisfies all the properties of C_{w^{-1}}. By uniqueness we must have C*_w = C_{w^{-1}}. Now we can write C_{w^{-1}} as:

C_{w^{-1}} = ∑_{y^{-1}≤w^{-1}} ε_{y^{-1}} ε_{w^{-1}} q_{w^{-1}}^{1/2} q_{y^{-1}}^{-1} P_{y^{-1},w^{-1}} T_{y^{-1}}

Comparing coefficients yields P_{y,w} = P_{y^{-1},w^{-1}} and hence µ(y,w) = µ(y^{-1},w^{-1}).

It is now straightforward to derive the corresponding right-hand multiplication formula:

Corollary 3.6.3. Let r ∈ S and w ∈ Symn. Then:

C_w T_r = −C_w   if wr < w
C_w T_r = q^{1/2} C_{wr} + q C_w + q^{1/2} ∑_{z≺w, zr<z} µ(z,w) C_z   if wr > w   (3.6.2)

Proof. Applying ∗ to (3.6.1) yields:

(T_r C_w)* = C_{w^{-1}} T_r = −C*_w   if rw < w
(T_r C_w)* = C_{w^{-1}} T_r = q^{1/2} C*_{rw} + q C*_w + q^{1/2} ∑ µ(z,w) C*_z   if rw > w

which, by the previous Proposition, becomes:

C_{w^{-1}} T_r = −C_{w^{-1}}   if rw < w
C_{w^{-1}} T_r = q^{1/2} C_{w^{-1}r} + q C_{w^{-1}} + q^{1/2} ∑ µ(z,w) C_{z^{-1}}   if rw > w

Now if z ≺ w then µ(z^{-1}, w^{-1}) = µ(z,w) by the previous Proposition and so z^{-1} ≺ w^{-1}. Also, if rz < z then z^{-1}r < z^{-1} (by Corollary 1.3.3), and similarly for w. Hence, replacing z^{-1} by z and w^{-1} by w, we get the result.

3.7 Notes

1. There are two very different proofs that the standard basis is indeed a basis for Hn(q). Most authors define Hn(q) as the associative algebra generated by either the T_i or the T_w subject to the relations (3.1.1) or (3.1.2) respectively. They then demonstrate a large endomorphism algebra of Hn(q), which implies that the T_w are indeed a basis. For this approach see, for example, Mathas [24] or Humphreys [15].


An outline of the second proof is as follows. Let G denote the general linear group of n × n matrices over the finite field F_q. Let B denote the subgroup of upper triangular matrices. Now, inside the group algebra CG of G we consider the 'double coset algebra' [B]CG[B] (where [B] denotes the sum of all elements of B). If we write N for the subgroup of permutation matrices it is not hard to show that G has a 'Bruhat decomposition' as the disjoint union of double cosets of the form BwB with w ∈ N. Hence, [B]CG[B] has a basis consisting of elements of the form [BwB] where w ∈ Symn (in which we regard a permutation as a permutation matrix in the natural way). Now, if we normalise by defining e_id = (1/|B|)[B] and e_w = (1/|B|)[BwB], then it can be shown that the multiplication of these basis elements is given by:

e_r e_w = e_{rw}   if ℓ(rw) > ℓ(w)
e_r e_w = (q − 1)e_w + q e_{rw}   if ℓ(rw) < ℓ(w)

Thus, for every prime power q we have a homomorphism from Hn(q) onto [B]CG[B], sending T_w to e_w and evaluating each f ∈ Z[q, q^{-1}] at q. Hence if a non-trivial linear relation ∑ a_w T_w = 0 held in Hn(q) then ∑ a_w(q) e_w = 0 would also hold between the corresponding elements of [B]CG[B] (which we know form a basis), and so every a_w would vanish at q. But this holds for infinitely many prime powers q. Hence all the coefficients are 0 and so the {T_w} form a basis. A more detailed outline of the derivation of generators and relations for [B]CG[B] can be found in Exercise 24 of Bourbaki [3].

2. The above realisation of the Hecke algebra as a 'double coset algebra' is the origin of the term 'Hecke algebra'. It is easy to see that, with G and [B] as above, End_G(CG[B]) ≅ [B]CG[B]. This is how the Hecke algebra first arose: during attempts to decompose the representation obtained by inducing the trivial representation from B to G. See Iwahori [16].

3. In Section 3.1 it was mentioned that when q = 1 the Hecke algebra is isomorphic to the group algebra of the symmetric group. The formal term for 'assigning a value to q' is specialisation. If ϕ : Z[q^{1/2}, q^{-1/2}] → R is a homomorphism (most often an evaluation homomorphism onto a subring of C) then Hn(q) ⊗_ϕ R is the algebra obtained by specialising at ϕ (or at ϕ(q)). Hence, our statement is that Hn(q) ⊗_{q↦1} C ≅ CSymn, where CSymn denotes the symmetric group algebra over C. Tits has shown that a much stronger statement is true: if a specialisation q ↦ z is semi-simple (over C this means that the specialisation is isomorphic to a direct sum of matrix algebras) then, in fact, the specialisation remains isomorphic to CSymn. This is known as the Tits deformation theorem. Tits' proof is given in Steinberg [31].

4. Although Tits proved that semi-simple specialisations of Hn(q) are isomorphic to CSymn, no explicit isomorphism is constructed in the proof. In [22], Lusztig uses the Kazhdan-Lusztig basis to show that a certain left action of Hn(q) on CSymn commutes with the natural right action of CSymn. This establishes a canonical isomorphism between Hn(q) ⊗ Q(q^{1/2}) and Q(q^{1/2})Symn known as Lusztig's isomorphism. The need to introduce a square root of q to realise the isomorphism is used by many authors as further justification for enlarging A to the ring Z[q^{-1/2}, q^{1/2}].

5. The question of which values of z ∈ C yield semi-simple specialisations q ↦ z is answered by Green [13]. If we define e(z) to be the first n such that z^n = 1 (with e(z) = ∞ if no such n exists) then Hn(q) ⊗_{q↦z} C is semi-simple if and only if e(z) > n, excluding the case n ≥ 3 and z = 0 (which is not semi-simple). To give


some suggestion as to why this result should be true, it can be shown that:

( ∑_{y∈Symn} T_y )² = ( ∏_{i=1}^{n} (q^i − 1)/(q − 1) ) ( ∑_{y∈Symn} T_y )

Hence ∑ T_y is a central nilpotent element if 1 < e(z) ≤ n. The implication the other way is more difficult. Gyoja and Uno [14] offer an elegant proof using characters.
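For n = 2 the displayed identity reduces to (T_id + T_s)² = (1 + q)(T_id + T_s), which the following throwaway Python check confirms symbolically (polynomials in q are encoded as exponent-to-coefficient dicts; this is purely illustrative):

```python
def poly_add(a, b):
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

def poly_mul(a, b):
    out = {}
    for e1, c1 in a.items():
        for e2, c2 in b.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def basis_mul(u, w):                 # T_u T_w in H_2(q); words are '' (id) or 's'
    if u == '' or w == '':
        return {u + w: {0: 1}}
    return {'s': {1: 1, 0: -1}, '': {1: 1}}   # T_s^2 = (q-1)T_s + qT_id

def square(elt):
    out = {}
    for u, cu in elt.items():
        for w, cw in elt.items():
            for y, p in basis_mul(u, w).items():
                out[y] = poly_add(out.get(y, {}), poly_mul(poly_mul(cu, cw), p))
    return {y: p for y, p in out.items() if p}

sigma = {'': {0: 1}, 's': {0: 1}}    # sum of all T_y over Sym_2
poincare = {0: 1, 1: 1}              # (q-1)/(q-1) * (q^2-1)/(q-1) = 1 + q

assert square(sigma) == {y: poly_mul(poincare, p) for y, p in sigma.items()}
```

At a specialisation q ↦ z where the Poincaré factor vanishes, the square of ∑ T_y is zero, which is exactly the nilpotence noted above.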

6. The systematic use of the anti-involution ∗ is motivated by its importance in the cellular algebra approach to the Kazhdan-Lusztig basis. See Chapter 5.

7. The first proof of the existence and uniqueness of the Kazhdan-Lusztig basis follows Kazhdan and Lusztig [17] very closely. However, Kazhdan and Lusztig are very terse in their treatment—their proof takes only one journal page! Our approach (which is similar to Dyer [4] and Humphreys [15]) is to try to elaborate on their argument.

8. The alternative proof, given in the appendix, of the existence and uniqueness of the Kazhdan-Lusztig basis is due to Lusztig [19]. There are other, more abstract proofs which use only basic properties of the involution to deduce the existence and uniqueness of a special basis. See for example Chapter 8 of Du, Parshall and Wang [7].


4 Cells

In this chapter we introduce the cells associated to a basis of an algebra. We first give the abstract definition in terms of a 'cell preorder' and explain how each cell corresponds to a representation of the algebra. The rest of the chapter is devoted to deriving a more concrete (and, as it happens, the original) definition of the cells in the Hecke algebra. We also derive some technical properties essential to the further study of cells.

4.1 Cell Orders and Cells

Let H be an R-algebra which is free as a module over R and let {C_i | i ∈ I} be a basis for H. For arbitrary a ∈ H and j ∈ I we can express aC_j uniquely as a linear combination of the basis elements. Define a relation ←_L on I by declaring that i ←_L j if there exists an a ∈ H such that C_i appears with non-zero coefficient in aC_j. We have 1C_i = C_i and so i ←_L i for all i ∈ I. In other words, the relation ←_L is reflexive. We similarly define i ←_R j if there exists an a ∈ H such that C_i appears with non-zero coefficient in C_j a. We further define i ←_LR j if either i ←_L j or i ←_R j.

We might hope that the relations ←_L, ←_R and ←_LR are transitive. However, in general this is not the case. Instead we take the transitive closure of ←_L by declaring that i ≤_L j if there exists a sequence i = i_1 ←_L i_2 ←_L … ←_L i_m = j. The resulting preorder is called the left cell preorder. We similarly define the right cell preorder ≤_R and the two-sided cell preorder ≤_LR.

We can use the preorders ≤_L, ≤_R and ≤_LR to define equivalence classes on I. We define i ∼_L j if both i ≤_L j and j ≤_L i. Similarly, we write i ∼_R j if i ≤_R j ≤_R i, and i ∼_LR j if i ≤_LR j ≤_LR i. The equivalence classes of ∼_L are called the left cells of I, those of ∼_R are the right cells and those of ∼_LR are the two-sided cells. Since ≤_L, ≤_R and ≤_LR are preorders they induce partial orders on the corresponding cells. These partial orders constitute the left cell poset, the right cell poset and the two-sided cell poset respectively.

It is often difficult to decide whether i ≤_L j. The following proposition shows that if we fix a set G which generates H as an algebra, and consider gC_k for each k ∈ I and g ∈ G, then we obtain a set of relations which generate ≤_L:

Proposition 4.1.1. Assume that G is a subset which generates H as an algebra. Then i ≤_L j if and only if there exists a chain i = i_1, i_2, …, i_m = j such that C_{i_k} appears with non-zero coefficient in gC_{i_{k+1}} for some g ∈ G.

Proof. It is enough to show the existence of such a chain when i ←_L j. Since i ←_L j there exists an a ∈ H such that C_i appears with non-zero coefficient in aC_j. Since G generates H we can write a = ∑ λ_k G_k where each G_k is a finite product of elements of G. Now since C_i appears in aC_j we must have that C_i appears in G_k C_j for some k. Now, G_k = g_1 g_2 … g_m for some g_i ∈ G. Hence there exists a chain i = i_1, i_2, …, i_m = j with C_{i_1} appearing in g_1 C_{i_2}, C_{i_2} appearing in g_2 C_{i_3}, etc. This shows the existence of such a chain. The other implication holds by the definition of ≤_L.

Of course we get similar conditions for i ≤_R j and i ≤_LR j by allowing only right multiplication by g ∈ G for ≤_R and multiplication on either side for ≤_LR.

We now give some examples of cells. First consider the cells of the standard basis of H_2(q). Multiplication of the standard basis elements is given by:

         | T_id    | T_{s1}
T_id     | T_id    | T_{s1}
T_{s1}   | T_{s1}  | (q − 1)T_{s1} + qT_id

By considering T_{s1}T_id we have s1 ≤_L id, and by considering T_{s1}² we have that id ≤_L s1. Hence we have only one left cell, consisting of all of Sym2. It is not too hard to see that we only ever get one left cell when considering Hn(q) with respect to the standard basis. For if we fix w ∈ Symn we have T_w T_id = T_w and so w ≤_L id. On the other hand, by Proposition 3.3.2, each standard basis element is invertible and so we have T_w^{-1} T_w = T_id, which yields id ≤_L w. Hence w ≤_L id ≤_L w for all w ∈ Symn. This implies that all of Symn lies in the same left cell.

However, when we consider Hn(q) with respect to the Kazhdan-Lusztig basis we obtain a rich cell structure. First consider the Kazhdan-Lusztig basis of H_2(q). We have C_id = T_id and C_{s1} = q^{-1/2}T_{s1} − q^{1/2}T_id. A simple calculation (or use of the multiplication formulae in Section 3.6) yields the following multiplication table:

         | C_id    | C_{s1}
C_id     | C_id    | C_{s1}
C_{s1}   | C_{s1}  | (−q^{1/2} − q^{-1/2})C_{s1}

Since {C_id, C_{s1}} is a basis it generates H_2(q) as an algebra and so, by Proposition 4.1.1, the relation s1 ≤_L id generates ≤_L. Hence there are two left cells: {id} and {s1}. The left cell poset looks like:

{id}
  |
{s1}

We now consider the cells of H3(q) with respect to the Kazhdan-Lusztig basis. Using the recurrence (3.5.7) for the Kazhdan-Lusztig polynomials it is routine to verify that P_{x,w} = 1 for all x ≤ w in Sym3. Hence x ≺ w if and only if x ≤ w and ℓ(w) − ℓ(x) = 1, and in this case µ(x,w) = 1. We now have all the information we need in order to use the multiplication formulae in Section 3.6. The following two tables (calculated using the multiplication formulae) show the effect of left multiplying the Kazhdan-Lusztig basis elements by T_{s1} and T_{s2}:

         | C_id                      | C_{s1}                        | C_{s2}
T_{s1}   | q^{1/2}C_{s1} + qC_id     | −C_{s1}                       | q^{1/2}C_{s1s2} + qC_{s2}
T_{s2}   | q^{1/2}C_{s2} + qC_id     | q^{1/2}C_{s2s1} + qC_{s1}     | −C_{s2}

         | C_{s1s2}                                        | C_{s2s1}                                        | C_{s1s2s1}
T_{s1}   | −C_{s1s2}                                       | q^{1/2}C_{s1s2s1} + qC_{s2s1} + q^{1/2}C_{s1}   | −C_{s1s2s1}
T_{s2}   | q^{1/2}C_{s1s2s1} + qC_{s1s2} + q^{1/2}C_{s2}   | −C_{s2s1}                                       | −C_{s1s2s1}


Hence ≤_L is generated by s1 ≤_L id, s2 ≤_L id, s1s2 ≤_L s2, s2s1 ≤_L s1, s1s2s1 ≤_L s2s1, s1s2s1 ≤_L s1s2, s2 ≤_L s1s2 and s1 ≤_L s2s1. Hence the left cells are {id}, {s1, s2s1}, {s2, s1s2} and {s1s2s1}. The left cell poset looks like:

              {id}
             /    \
   {s1, s2s1}      {s2, s1s2}
             \    /
           {s1s2s1}
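The passage from the generating relations to the cells is just a reachability computation: take the reflexive-transitive closure of the relations and collect mutually comparable elements. The short Python sketch below (labels and relation list transcribed from the multiplication tables above; the encoding is illustrative only) recovers the four left cells:

```python
# Generating relations x <=_L y read off from the multiplication tables above.
GEN = [('s1', 'id'), ('s2', 'id'),
       ('s1s2', 's2'), ('s2s1', 's1'),
       ('s1s2s1', 's2s1'), ('s1s2s1', 's1s2'),
       ('s2', 's1s2'), ('s1', 's2s1')]
W = ['id', 's1', 's2', 's1s2', 's2s1', 's1s2s1']

leq = {(x, x) for x in W} | set(GEN)
changed = True
while changed:                           # reflexive-transitive closure
    changed = False
    for (a, b) in list(leq):
        for (c, d) in list(leq):
            if b == c and (a, d) not in leq:
                leq.add((a, d))
                changed = True

# A left cell is a class of elements lying below one another in the preorder.
cells = {frozenset(y for y in W if (x, y) in leq and (y, x) in leq) for x in W}
assert cells == {frozenset({'id'}),
                 frozenset({'s1', 's2s1'}),
                 frozenset({'s2', 's1s2'}),
                 frozenset({'s1s2s1'})}
```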

4.2 Representations Associated to Cells

Let H be an R-algebra with fixed basis {C_w | w ∈ I} and let ≤_L and ∼_L be the relations introduced in the previous section. We will show that to every left cell we can associate a left H-module and hence a representation of H. Although we will not make it explicit, a similar process attaches right H-modules to right cells and (H,H)-bimodules to two-sided cells.

Fix x ∈ I and consider aC_x for arbitrary a ∈ H. If C_y appears with non-zero coefficient then, by definition, we must have y ≤_L x. So we can always write:

aC_x = ∑_{y ≤_L x} r_a(y, x) C_y   (4.2.1)

for some r_a(y,x) ∈ R. Hence, if we define H(≤_L w) to be the linear span of those C_y satisfying y ≤_L w then (4.2.1) shows that H(≤_L w) is a left ideal of H.

If w ∈ I, write x <_L w if x ≤_L w and w ≰_L x. Then {y ∈ I | y <_L w} is the set of elements less than w in the left cell preorder which are not in the same left cell. Now define H(<_L w) to be the linear span of those C_y satisfying y <_L w. Now if x <_L w and a ∈ H is arbitrary then, by (4.2.1), we can write aC_x = ∑_{y ≤_L x} r_a(y,x) C_y. Now, if y ≤_L x then y <_L w (otherwise we would have w ≤_L y ≤_L x, contradicting x <_L w). Hence, left multiplication by H maps H(<_L w) into itself and so H(<_L w) is also a left ideal of H.

Clearly H(<_L w) is contained inside H(≤_L w) and so we can consider the quotient H(≤_L w)/H(<_L w). Now, H(≤_L w) has a basis consisting of the C_x with x ≤_L w. Similarly H(<_L w) has a basis consisting of those C_y with y <_L w. Hence H(≤_L w)/H(<_L w) has a basis consisting of the images of those C_x with x ≤_L w but x ≮_L w. But if x ≤_L w and x ≮_L w then w ≤_L x and so x ∼_L w. Hence H(≤_L w)/H(<_L w) has a natural basis consisting of the images of those C_x with x ∼_L w. In the quotient module H(≤_L w)/H(<_L w) the multiplication in (4.2.1) becomes:

aC_x ≡ ∑_{y ∼_L w} r_a(y, x) C_y   (mod H(<_L w))


This is the cell module associated to the cell {x | x ∼_L w}. Clearly the cell module has rank equal to the number of elements in the cell. The representation afforded by the cell module is the cell representation.

For example, we consider some of the representations afforded by the cells of H3(q) with respect to the Kazhdan-Lusztig basis. To simplify notation in this example we will write H(≤_L w) in place of H3(q)(≤_L w). In the last section we saw that the left cells of H3(q) with respect to the Kazhdan-Lusztig basis were {id}, {s1, s2s1}, {s2, s1s2} and {s1s2s1}. Let us calculate the cell representation associated to {id}. Reducing the calculations of the previous section modulo H(<_L id) we have:

T_{s1} C_id ≡ q C_id   (mod H(<_L id))
T_{s2} C_id ≡ q C_id   (mod H(<_L id))

Thus the cell {id} affords the representation T_w ↦ q_w. It is not too hard to see from the multiplication formulae in Section 3.6 that this is a general fact: id always lies in a cell by itself and affords the 'q-trivial' representation T_w ↦ q_w.

For a slightly more complicated example consider the cell {s1, s2s1}. Our calculations of the previous section yield:

T_{s1} C_{s1} = −C_{s1}
T_{s2} C_{s1} = q C_{s1} + q^{1/2} C_{s2s1}
T_{s1} C_{s2s1} = q^{1/2} C_{s1} + q C_{s2s1} + q^{1/2} C_{s1s2s1}
T_{s2} C_{s2s1} = −C_{s2s1}

When we pass to the quotient H(≤_L s1)/H(<_L s1) the only basis element which lies in H(<_L s1) is C_{s1s2s1}. Thus the representing matrices of T_{s1} and T_{s2} are:

T_{s1} ↦ ( −1   q^{1/2} )        T_{s2} ↦ ( q         0  )
         (  0   q       )                 ( q^{1/2}  −1  )

We multiply these to get the matrices of the other standard basis elements of H3(q):

T_{s1s2} ↦ ( −q        q^{3/2} )    T_{s2s1} ↦ ( 0        −q^{1/2} )    T_{s1s2s1} ↦ ( 0         −q^{3/2} )
           ( −q^{1/2}  0       )               ( q^{3/2}  −q       )                 ( −q^{3/2}  0        )
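Since the cell module is a module over H3(q), the representing matrices of T_{s1} and T_{s2} must satisfy the quadratic relation T_{s1}² = (q−1)T_{s1} + q and the braid relation. The following self-contained Python sketch checks both for the 2×2 matrices above, with entries encoded as Laurent polynomials in v = q^{1/2} (an illustrative encoding only):

```python
# A 2x2 matrix is a tuple of rows; each entry is a Laurent polynomial in
# v = q^(1/2), stored as a dict {exponent of v: integer coefficient}.

def pmul(a, b):
    out = {}
    for e1, c1 in a.items():
        for e2, c2 in b.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def padd(a, b):
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

def mmul(A, B):
    return tuple(tuple(padd(pmul(A[i][0], B[0][j]), pmul(A[i][1], B[1][j]))
                       for j in range(2)) for i in range(2))

A = (({0: -1}, {1: 1}), ({}, {2: 1}))    # matrix of T_{s1} on the cell module
B = (({2: 1}, {}), ({1: 1}, {0: -1}))    # matrix of T_{s2}

I = (({0: 1}, {}), ({}, {0: 1}))
qm1 = {2: 1, 0: -1}                      # q - 1 in terms of v
quad = tuple(tuple(padd(pmul(qm1, A[i][j]), pmul({2: 1}, I[i][j]))
                   for j in range(2)) for i in range(2))

assert mmul(A, A) == quad                            # quadratic relation
assert mmul(mmul(A, B), A) == mmul(mmul(B, A), B)    # braid relation
```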

4.3 Some Properties of the Kazhdan-Lusztig Polynomials

We now explore the preorders ≤_L, ≤_R and ≤_LR introduced in the last two sections in the context of the Hecke algebra. The goal of this chapter is to derive a more concrete formulation of the relations ≤_L, ≤_R and ≤_LR. To get started, however, we need to develop some technical properties of the Kazhdan-Lusztig polynomials.

The following is a formalisation of the inductive formula for the Kazhdan-Lusztig polynomials developed during the proof of Theorem 3.5.1:

Proposition 4.3.1. If x ≤ w and w ≠ id we can find r ∈ S such that rw < w. If we then define c_x by:

c_x = 1 if rx > x, and c_x = 0 if rx < x,

then we have an inductive identity for P_{x,w}:

P_{x,w} = q^{1−c_x} P_{x,rw} + q^{c_x} P_{rx,rw} − ∑_{x≤z≺rw, rz<z} q_z^{-1/2} q_w^{1/2} µ(z,rw) P_{x,z}   (4.3.1)

We use this inductive formula to prove the following lemma:

Lemma 4.3.2. Let x, w ∈ Symn and r ∈ S.
(i) If rw < w then P_{rw,w} = 1, and so rw ≺ w and µ(rw,w) = 1.
(ii) If rx < x and x ≰ rw then P_{x,w} = P_{rx,rw}.
(iii) If x < w and rw < w then P_{x,w} = P_{rx,w}.

Proof. For (i) note that if rw < w then r(rw) = w > rw and so c_{rw} = 1. The inductive formula (4.3.1) gives:

P_{rw,w} = q^0 P_{rw,rw} + q P_{w,rw} − ∑_{rw≤z≺rw, rz<z} q_z^{-1/2} q_w^{1/2} µ(z,rw) P_{rw,z}

Now z ≺ rw implies z < rw and so the sum is empty. Also, we know P_{rw,rw} = 1 and P_{w,rw} = 0 since w > rw. Hence P_{rw,w} = q^0 P_{rw,rw} = 1.

For (ii) note that rx < x and so c_x = 0. Since x ≰ rw we have P_{x,rw} = 0 and there are no z satisfying x ≤ z ≺ rw. Hence the inductive formula gives P_{x,w} = P_{rx,rw}.

For (iii) we use induction on ℓ(w). If ℓ(w) = 0 then w = id and so w does not satisfy the conditions of the theorem. If ℓ(w) = 1 then w = t for some t ∈ S, and rt < t forces r = t. Hence, in this case the statement is that P_{x,w} = P_{id,t} = P_{t,t} = 1, but this is a special case of (i). So assume, for induction, that P_{x,z} = P_{rx,z} for all z satisfying ℓ(z) < ℓ(w) and rz < z. We want to show that P_{x,w} = P_{rx,w} under the assumption that rw < w. First note that if z satisfies rz < z and x ≤ z ≺ rw then, by Lemma 1.3.1, either rx ≤ z or rx ≤ rz. Hence in either case rx ≤ z ≺ rw (since rz < z). Conversely, if z satisfies rz < z and rx ≤ z ≺ rw then x ≤ z ≺ rw by an identical argument. We have therefore shown:

{z ∈ Symn | rz < z, x ≤ z ≺ rw} = {z ∈ Symn | rz < z, rx ≤ z ≺ rw}   (4.3.2)

Now, by the inductive identity:

P_{x,w} = q^{1−c_x} P_{x,rw} + q^{c_x} P_{rx,rw} − ∑_{x≤z≺rw, rz<z} q_z^{-1/2} q_w^{1/2} µ(z,rw) P_{x,z}
       = q^{c_x} P_{rx,rw} + q^{1−c_x} P_{x,rw} − ∑_{rx≤z≺rw, rz<z} q_z^{-1/2} q_w^{1/2} µ(z,rw) P_{rx,z}   (by induction and (4.3.2))
       = q^{1−c_{rx}} P_{rx,rw} + q^{c_{rx}} P_{x,rw} − ∑_{rx≤z≺rw, rz<z} q_z^{-1/2} q_w^{1/2} µ(z,rw) P_{rx,z}   (since c_{rx} = 1 − c_x)
       = P_{rx,w}

This lemma allows us to prove:


Proposition 4.3.3. Let w ∈ Symn and r ∈ S such that rw > w. Then the only element x ∈ Symn satisfying w ≺ x and rx < x is rw, and in this case µ(w,x) = 1.

Proof. Assume that x ≠ rw. By assumption rx < x and so, by the lemma above, we have P_{w,x} = P_{rw,x}. Now:

deg P_{w,x} = deg P_{rw,x} ≤ ½(ℓ(x) − ℓ(rw) − 1) = ½(ℓ(x) − ℓ(w) − 2) < ½(ℓ(x) − ℓ(w) − 1)

Hence w ⊀ x. On the other hand if x = rw then the above lemma applies to yield w ≺ x and µ(w,x) = µ(w,rw) = 1.

4.4 New Multiplication Formulae

Recall that in Chapter 1 we defined the left and right descent sets of a permutation w ∈ Symn as the sets L(w) = {r ∈ S | rw < w} and R(w) = {r ∈ S | wr < w}. Using this notation we can rewrite the multiplication formula of Theorem 3.6.1 as:

T_r C_w = −C_w   if r ∈ L(w)
T_r C_w = q^{1/2} C_{rw} + q C_w + q^{1/2} ∑_{z≺w, r∈L(z)} µ(z,w) C_z   if r ∉ L(w)   (4.4.1)

We want to simplify this further. Recall that µ(x,y) is only defined if x ≤ y. We extend the definition of µ by defining µ(x,y) = µ(y,x) if x ≥ y and µ(x,y) = 0 if x and y are incomparable. Thus µ(x,y) is defined, and symmetric in x and y, for all x ≠ y. We say that x is joined to y, and write x—y, if x ≠ y and µ(x,y) ≠ 0.

Now, fix r ∈ S and w ∈ Symn such that rw > w and consider the sum:

∑_{z—w, r∈L(z)} µ(z,w) C_z

Now z—w if and only if µ(z,w) ≠ 0, and so z ≺ w or w ≺ z. Hence, by the symmetry of µ, we have:

∑_{z—w, r∈L(z)} µ(z,w) C_z = ∑_{w≺z, r∈L(z)} µ(w,z) C_z + ∑_{z≺w, r∈L(z)} µ(z,w) C_z

Now consider which terms emerge in the first sum on the right hand side. If C_z appears we have that rw > w (by assumption), rz < z (since r ∈ L(z)) and w ≺ z. Hence we can apply Proposition 4.3.3 of the previous section to conclude that the only term in the first sum is C_{rw}, and that µ(w,rw) = 1. Hence:

∑_{z—w, r∈L(z)} µ(z,w) C_z = C_{rw} + ∑_{z≺w, r∈L(z)} µ(z,w) C_z   (4.4.2)

Substituting (4.4.2) into (4.4.1) we have:

T_r C_w = −C_w   if r ∈ L(w)
T_r C_w = q C_w + q^{1/2} ∑_{z—w, r∈L(z)} µ(z,w) C_z   if r ∉ L(w)   (4.4.3)

By applying ∗ (noting that µ(z^{-1},w^{-1}) = µ(z,w) and L(w^{-1}) = R(w)) we obtain the right-handed identity:

C_w T_r = −C_w   if r ∈ R(w)
C_w T_r = q C_w + q^{1/2} ∑_{z—w, r∈R(z)} µ(z,w) C_z   if r ∉ R(w)   (4.4.4)

4.5 New Definitions of the Cell Preorders

Equipped with the new multiplication formulae of the previous section we can give explicit conditions in terms of descent sets and the join relation for the preorders ≤_L, ≤_R and ≤_LR. We start by asking: under what conditions does a non-zero coefficient of C_z emerge in T_r C_w for some r ∈ S? If r ∈ L(w) then T_r C_w = −C_w and we only get a non-zero coefficient of C_w. If r ∉ L(w) then (4.4.3) shows that we get a non-zero coefficient of C_z if and only if z—w and r ∈ L(z). Hence, we can get a non-zero coefficient of C_z by multiplying C_w on the left by some T_r if and only if z—w and L(z) ⊈ L(w).

Similarly, (4.4.4) shows that we can get a non-zero coefficient of C_z by multiplying C_w on the right by some T_r if and only if z—w and R(z) ⊈ R(w). Since {T_r | r ∈ S} generates Hn(q) we can apply Proposition 4.1.1 to conclude that the relations {x ≤_L y | x—y and L(x) ⊈ L(y)} and {x ≤_R y | x—y and R(x) ⊈ R(y)} generate ≤_L and ≤_R respectively. Lastly, we have x ≤_LR y if and only if there exists a chain x = x_1, x_2, …, x_m = y such that either x_i ≤_L x_{i+1} or x_i ≤_R x_{i+1} for all 1 ≤ i < m. We have therefore shown:

Proposition 4.5.1. Let x, y ∈ Symn and consider the preorders ≤_L, ≤_R and ≤_LR with respect to the Kazhdan-Lusztig basis. Then:

1. We have x ≤_L y if and only if there exists a chain x = x_1—x_2—…—x_m = y such that L(x_i) ⊈ L(x_{i+1}) for all 1 ≤ i < m.

2. We have x ≤_R y if and only if there exists a chain x = x_1—x_2—…—x_m = y such that R(x_i) ⊈ R(x_{i+1}) for all 1 ≤ i < m.

3. We have x ≤_LR y if and only if there exists a chain x = x_1—x_2—…—x_m = y such that either L(x_i) ⊈ L(x_{i+1}) or R(x_i) ⊈ R(x_{i+1}) for all 1 ≤ i < m.
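Proposition 4.5.1 makes the left cells computable from descent sets and the join relation alone. The sketch below does this for Sym3, where every P_{x,w} = 1, so x—y exactly when y = tx for a transposition t with a length difference of one; it recovers the four left cells found earlier (a standalone illustration, not notation from the text):

```python
from itertools import permutations

N = 3
def length(w):
    return sum(1 for i in range(N) for j in range(i + 1, N) if w[i] > w[j])
def mult(u, v):
    return tuple(u[v[i]] for i in range(N))
def transp(i, j):
    t = list(range(N)); t[i], t[j] = t[j], t[i]
    return tuple(t)

W3 = list(permutations(range(N)))
SIMPLE = [transp(i, i + 1) for i in range(N - 1)]
REFL = [transp(i, j) for i in range(N) for j in range(i + 1, N)]

# Left descent sets L(w) = {r in S : rw < w}, recorded by simple-root index.
L = {w: frozenset(i for i, r in enumerate(SIMPLE)
                  if length(mult(r, w)) < length(w)) for w in W3}

# In Sym_3 all P_{x,w} = 1, so x -- y iff y = t x for a reflection t with
# |length(x) - length(y)| = 1.
joined = {(x, y) for x in W3 for y in W3 for t in REFL
          if mult(t, x) == y and abs(length(x) - length(y)) == 1}

gen = {(x, y) for (x, y) in joined if not L[x] <= L[y]}   # x <=_L y generators
leq = {(x, x) for x in W3} | gen
changed = True
while changed:                            # transitive closure
    changed = False
    for (a, b) in list(leq):
        for (c, d) in list(leq):
            if b == c and (a, d) not in leq:
                leq.add((a, d))
                changed = True

cells = {frozenset(y for y in W3 if (x, y) in leq and (y, x) in leq) for x in W3}

s1, s2 = SIMPLE
assert cells == {frozenset({(0, 1, 2)}),
                 frozenset({s1, mult(s2, s1)}),
                 frozenset({s2, mult(s1, s2)}),
                 frozenset({mult(s1, mult(s2, s1))})}
```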

If x ≤_L y then the above Proposition shows there exists a chain x = x_1—x_2—…—x_m = y such that L(x_i) ⊈ L(x_{i+1}) for all 1 ≤ i < m. Now µ(x,y) = µ(x^{-1},y^{-1}) and so x—y if and only if x^{-1}—y^{-1}. Hence we also have a chain x^{-1} = x_1^{-1}—x_2^{-1}—…—x_m^{-1} = y^{-1} such that R(x_i^{-1}) = L(x_i) ⊈ L(x_{i+1}) = R(x_{i+1}^{-1}) for all 1 ≤ i < m. In other words, x^{-1} ≤_R y^{-1}. An identical argument shows that if x ≤_R y then x^{-1} ≤_L y^{-1}. Hence:

Corollary 4.5.2. Let x, y ∈ Symn. Then x ≤_L y if and only if x^{-1} ≤_R y^{-1}. Hence x ∼_L y if and only if x^{-1} ∼_R y^{-1}.

The last result of this section (which uses the above characterisation of ≤_L and ≤_R) will be important in the next chapter:


Proposition 4.5.3. Let x, y ∈ Symn.
(i) If x ≤_L y then R(x) ⊇ R(y). Hence if x ∼_L y then R(x) = R(y).
(ii) If x ≤_R y then L(x) ⊇ L(y). Hence if x ∼_R y then L(x) = L(y).

Proof. If x ≤_L y then there exists a chain x = x_1—x_2—…—x_m = y such that L(x_i) ⊈ L(x_{i+1}) for all 1 ≤ i < m. Hence, if we can show that x_i—x_{i+1} and L(x_i) ⊈ L(x_{i+1}) implies R(x_i) ⊇ R(x_{i+1}), we will have (i). So assume x—y and L(x) ⊈ L(y). Then there are two possibilities:

Case 1: x ≺ y. Suppose, for contradiction, that r ∈ R(y)\R(x). Then r ∈ L(y^{-1})\L(x^{-1}), and so x^{-1} ≺ y^{-1}, rx^{-1} > x^{-1} but ry^{-1} < y^{-1}. By Proposition 4.3.3 this forces x^{-1} = ry^{-1} and so xr = y. But then, by Lemma 1.4.3, we have L(x) ⊆ L(xr) = L(y). This contradicts L(x) ⊈ L(y). Hence R(y)\R(x) = ∅; in other words R(x) ⊇ R(y).

Case 2: y ≺ x. Choose r ∈ L(x)\L(y). Then ry > y and rx < x and so, by Proposition 4.3.3, we must have x = ry. Hence R(x) = R(ry) ⊇ R(y) by Lemma 1.4.3.

Hence (i) is proven. For (ii) note that if x ≤_R y then x^{-1} ≤_L y^{-1} and so L(x) = R(x^{-1}) ⊇ R(y^{-1}) = L(y).

4.6 Notes

1. Our approach to the definition of the cells in the Hecke algebra is not the standard one. Most authors use the conditions of Proposition 4.5.1 to define the cell preorders and then remark that the multiplication formulae show that left multiplication by arbitrary a ∈ Hn(q) maps C_w into the A-span of the C_x satisfying x ≤_L w. Our approach (which was suggested by Fishel-Grojnowski [8]) takes longer to develop but is more motivated.

2. In the first two sections we have tried to use as much of the language of cellular algebras as possible; hence the terms 'cell module' and 'cell representation' and the notation H(≤ w). This is intended to motivate their introduction in the next chapter.

3. The elegant proof of part (iii) of Lemma 4.3.2 (that P_{x,w} = P_{rx,w} if rw < w) is due to Shi [28]. Most authors use a cumbersome expansion of the identity T_r C_w = −C_w if rw < w to derive the result.

4. The multiplication formulae developed in Section 4.4 suggest that only a basic set of information is needed to determine the action of T_r on C_w. We only need to know the appropriate descent sets of each element, which elements are joined (by —), and what integer value corresponds to each joined pair (the µ function). This suggests that a graph might be constructed to carry all the necessary information. This is the approach taken by Kazhdan-Lusztig [17]. They define a W-graph as a graph with vertex set X and edge set Y such that each vertex x ∈ X is labelled with a subset I_x of the simple transpositions and each edge {x,y} is labelled with an integer µ(x,y). This graph is subject to the requirement that, if M is the free A-module on the vertices of the graph, then defining:

τ_r(x) = −x   if r ∈ I_x
τ_r(x) = qx + q^{1/2} ∑_{{x,y}∈Y, r∈I_y} µ(y,x) y   if r ∉ I_x


yields a representation of Hn(q) on M via T_r ↦ τ_r. The results of this chapter show that if we let X = Symn, Y = {{x,y} | x—y}, I_x = L(x) (or R(x)) and label each edge {x,y} with µ(x,y), then we obtain a W-graph. It is customary to place the integers corresponding to the elements of the descent set in circles, so that if L(w) = {s1} this is drawn as a circled 1. For example, in the case n = 3 our W-graph has vertices id (labelled ∅), s1 ({1}), s2 ({2}), s1s2 ({1}), s2s1 ({2}) and s1s2s1 ({1,2}), and edges

id—s1, id—s2, s1—s1s2, s1—s2s1, s2—s1s2, s2—s2s1, s1s2—s1s2s1, s2s1—s1s2s1,

each labelled with µ = 1. (Once one becomes accustomed to W-graphs much unnecessary information can be omitted.) If we consider the full subgraph consisting of vertices belonging to a particular left cell we get a W-graph. The representation which it affords is the cell representation.


5 The Kazhdan-Lusztig Basis as a Cellular Basis

In this chapter we start by defining a cellular algebra and then spend the rest of the chapter completing our study of the cells in Hn(q), with the goal of showing that Hn(q) is a cellular algebra. We cannot complete this entirely via elementary means: in showing that no non-trivial relation can hold between two distinct left cells within a two-sided cell we must appeal to a theorem of Kazhdan and Lusztig which we are unable to prove.

5.1 Cellular Algebras

In the previous chapter we introduced the cell preorders and showed how we can use them to definecells which, in turn, lead to representations of the algebra. However, given an arbitrary algebrawith basis the concept of cells is far too general to be any use. We have seen, for example, thatdifferent bases of the same algebra can yield very different cell structures: with respect to thestandard basis of Hn(q) there is only one left cell; whereas with respect to the Kazhdan-Lusztigbasis the left cell structure gives a partitioning at least as fine as that given by considering rightdescent sets (Proposition 4.5.3). Also, even if a fixed basis yields an interesting cell structure, wecannot be certain that we fully understand the representation theory of the algebra. We would liketo know, for example, whether the representations afforded by the cells are irreducible and whichcells afford isomorphic representations.

It is for this reason that we seek a class of algebras with fixed basis such that the resulting cell structure is interesting and regular enough that questions of irreducibility and equivalence can be addressed. Such a class of algebras is provided by Graham and Lehrer's [12] notion of a 'cellular algebra', which we now define. Let H be an R-algebra that is free as an R-module. Suppose that Λ is a finite poset such that, for each λ ∈ Λ, we are given a finite set M(λ). Suppose further that, for each λ ∈ Λ and P,Q ∈ M(λ), there is an element C^λ_{P,Q} ∈ H such that {C^λ_{P,Q} | λ ∈ Λ and P,Q ∈ M(λ)} forms an R-basis for H. We call {C^λ_{P,Q}} a cellular basis if:

(C1) The R-linear map ∗ defined by (C^λ_{P,Q})∗ = C^λ_{Q,P} is an anti-involution of H.

(C2) Let H(< λ) be the R-span of those elements C^µ_{U,V} with µ < λ in Λ and U, V ∈ M(µ). Then, for all a ∈ H, we have:

    a C^λ_{P,Q} ≡ ∑_{P′ ∈ M(λ)} r_a(P′, P) C^λ_{P′,Q}   (mod H(< λ))

where r_a(P′, P) ∈ R is independent of Q.

A cellular algebra is an algebra which has a cellular basis.

The simplest example of a cellular algebra is provided by the algebra R of n × n matrices over a field k. If we let Λ = {n}, M(n) = {1, 2, . . . , n} and define C^n_{i,j} = e_{ij}, where e_{ij} is the (i, j)th matrix unit, then a simple calculation shows that if a = ∑ λ_{ij} e_{ij} ∈ R is arbitrary then (C2) is satisfied:

    a C^n_{k,l} = ∑_{i,j} λ_{ij} e_{ij} e_{kl} = ∑_i λ_{ik} e_{il} = ∑_i λ_{ik} C^n_{i,l}   (5.1.1)

The map (C^n_{i,j})∗ = C^n_{j,i} sends e_{ij} to e_{ji} and hence is the transpose map. The identity (ab)^T = b^T a^T shows that this is an anti-involution, and hence (C1) is satisfied.
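Identity (5.1.1) is easy to check by machine. The following is a minimal sketch (plain Python, with matrices as nested lists and 0-indexed matrix units; the helper names are ours, not Graham-Lehrer's) verifying that the structure constants r_a(i, k) = λ_{ik} do not depend on the second label l:

```python
def matrix_unit(n, i, j):
    """The matrix unit e_ij: 1 in position (i, j) (0-indexed), zeros elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def mat_mul(a, b):
    """Product of two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[r][m] * b[m][c] for m in range(n)) for c in range(n)] for r in range(n)]

n = 3
a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # an arbitrary element of R

# Check (5.1.1): a * e_kl = sum_i a[i][k] * e_il, so the coefficient of e_il
# depends only on i and k -- never on the column label l, as (C2) requires.
for k in range(n):
    for l in range(n):
        lhs = mat_mul(a, matrix_unit(n, k, l))
        rhs = [[a[r][k] if c == l else 0 for c in range(n)] for r in range(n)]
        assert lhs == rhs
print("(C2) holds for the matrix-unit basis")
```

The anti-involution ∗ realising (C1) is, as noted above, the ordinary matrix transpose.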

Let us consider what the axioms mean for the left, right and two-sided cells of H. If we define H(≤ λ) to be the R-submodule generated by the C^µ_{U,V} with µ ≤ λ, then (C2) shows that H(≤ λ) is a left ideal of H. Applying ∗ to (C2) yields:

    C^λ_{Q,P} a∗ ≡ ∑_{P′ ∈ M(λ)} r_a(P′, P) C^λ_{Q,P′}   (mod H(< λ))

Hence H(≤ λ) is also a right ideal, and so H(≤ λ) is a two-sided ideal of H. Thus, if C^µ_{U,V} and C^λ_{P,Q} lie in the same two-sided cell then C^µ_{U,V} ∈ H(≤ λ) and C^λ_{P,Q} ∈ H(≤ µ). Hence λ ≤ µ ≤ λ and so λ = µ. Hence, we can think of Λ as indexing the two-sided cells.¹

Now fix λ ∈ Λ and P,Q ∈ M(λ). Then (C2) shows that if we left multiply C^λ_{P,Q} by a ∈ H we get a linear combination of C^λ_{P′,Q} for P′ ∈ M(λ), as well as some terms in H(< λ). Hence, if C^λ_{P,Q} and C^λ_{U,V} lie in the same left cell then we must have Q = V. Similarly, if C^λ_{P,Q} and C^λ_{U,V} lie in the same right cell we must have P = U. Hence we can think of pairs λ ∈ Λ and Q ∈ M(λ) as indexing the left cells (and similarly for right cells).

This is made a little clearer by again considering the case when R is the algebra of n × n matrices over a field k and C^n_{i,j} is the cellular basis introduced above. It is easily seen that C^n_{i,j} ∼_L C^n_{k,l} if and only if j = l, and that C^n_{i,j} ∼_R C^n_{k,l} if and only if i = k. Thus, for 1 ≤ i, j ≤ n, the left cell indexed by i consists of the ith column of matrix units, C^n_{1,i}, C^n_{2,i}, . . . , C^n_{n,i}, and the right cell indexed by j consists of the jth row, C^n_{j,1}, C^n_{j,2}, . . . , C^n_{j,n}.

This situation is typical in a cellular algebra. We can imagine each quotient module H(≤ λ)/H(< λ) as a 'deformed matrix algebra' which looks something like the above.

Now fix λ ∈ Λ and Q ∈ M(λ). If W is the free module on {C_{P,Q} | P ∈ M(λ)} then (C2) shows that we get a representation of H on W by defining:

    a C_{P,Q} = ∑_{P′ ∈ M(λ)} r_a(P′, P) C_{P′,Q}

Now, axiom (C2) states that r_a(P′, P) is independent of Q. Thus we get isomorphic representations for all choices of Q ∈ M(λ). Hence it makes sense to define the cell representation of H corresponding to λ as the left module W(λ) with free R-basis {C_P | P ∈ M(λ)} and H-action given by:

    a C_P = ∑_{P′ ∈ M(λ)} r_a(P′, P) C_{P′}

¹ Note that it is not a consequence of the axioms that C^λ_{P,Q} and C^λ_{U,V} lie in the same two-sided cell.


Due to limitations of space we cannot discuss cellular algebras in any more depth. However, it can be shown that, over a field, certain quotients of the W(λ) for λ ∈ Λ constitute a full set of pairwise inequivalent irreducible representations of H. Thus, the notion of a cellular algebra does indeed address (and answer) the questions raised during the introduction to this section. The reader is referred to Graham-Lehrer [12] for a more detailed account of the properties of cellular algebras as well as a number of interesting examples.

5.2 Elementary Knuth Transformations

Our goal for the rest of this chapter is to show that the Kazhdan-Lusztig basis is a cellular basis for Hn(q). It is perhaps surprising that standard tableaux provide a useful combinatorial framework in which to discuss the cellular structure. The first indication of the usefulness of tableaux in this context is a result that we will prove in this section: if x and y have the same Q-symbol then x and y lie in the same left cell.

Recall from Chapter 2 that if x and y have the same Q-symbol then, by the Symmetry Theorem, x⁻¹ and y⁻¹ have the same P-symbol. Furthermore, if x⁻¹ and y⁻¹ have the same P-symbol then they are Knuth equivalent and hence can be related by a sequence of elementary Knuth transformations (Theorem 2.6.4). It is these elementary Knuth transformations that provide the key to the further study of the cells in terms of tableaux.

In this section we develop functions between subsets of Symn which realise, algebraically, the elementary Knuth transformations. However, before we introduce these functions we need a technical lemma which gives us information about certain cosets that arise repeatedly. If si, si+1 ∈ S are simple transpositions, let 〈si, si+1〉 denote the subgroup of Symn generated by si and si+1.

Lemma 5.2.1. For all w ∈ Symn there exists a unique element w0 of minimal length in the coset w〈si, si+1〉. Moreover, w0 satisfies w0(i) < w0(i+1) < w0(i+2), and the coset, with the induced order, may be depicted as follows: w0 lies at the bottom, covered by w0si and w0si+1; each of w0sisi+1 and w0si+1si lies above both w0si and w0si+1; and w0sisi+1si lies at the top, above both w0sisi+1 and w0si+1si.

Proof. Let w0 be an element of minimal length in w〈si, si+1〉. If w0(i) < w0(i+1) < w0(i+2) does not hold then we can right multiply by si or si+1 to reduce the length (by Lemma 1.1.1), a contradiction. Hence w0(i) < w0(i+1) < w0(i+2). Since w0(k) = w(k) if k ∉ {i, i+1, i+2}, w0 is uniquely determined by the condition w0(i) < w0(i+1) < w0(i+2).

Since w0 is of minimal length we have w0 < w0si < w0sisi+1 and w0 < w0si+1 < w0si+1si (otherwise we would have another element of length equal to that of w0). Also w0si+1 < w0sisi+1 and w0si < w0si+1si by considering reduced expressions (Proposition 1.3.2). Now w0si+1 < w0sisi+1 implies either w0si+1si ≤ w0sisi+1 or w0si+1si ≤ w0sisi+1si (Lemma 1.3.1). But w0si+1si ≰ w0sisi+1 because they have the same length but are not equal. Hence w0si+1si < w0sisi+1si. Finally, w0sisi+1 < w0sisi+1si (again by considering reduced expressions).


We will denote by w0 the unique element of minimal length in w〈si, si+1〉 and call it the distinguished coset representative.

Recall from Chapter 2 that if w = w1w2 . . . wn ∈ Symn, an elementary Knuth transformation of w is a reordering of w according to one of the following patterns (where x < y < z):

    . . . zxy . . . ↔ . . . xzy . . .
    . . . yxz . . . ↔ . . . yzx . . .

We showed (in Theorem 2.6.4) that the P-symbols of x and y are equal if and only if x and y can be linked by a sequence of elementary Knuth transformations.

Let us make some observations about the elementary Knuth transformations. Notice that, if wiwi+1wi+2 are three consecutive letters of w ∈ Symn, then it is not always possible to perform an elementary Knuth transformation: if wi < wi+1 < wi+2 or wi > wi+1 > wi+2 then no elementary Knuth transformation applies. Also note that, if it is possible to perform an elementary Knuth transformation on wiwi+1wi+2, then there is only one possibility. Hence, given an element w ∈ Symn and a sequence wiwi+1wi+2 upon which an elementary Knuth transformation can be applied, the result of performing the elementary Knuth transformation is well-defined.
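The preceding observations translate directly into code. A minimal sketch (permutations as tuples, with i a 0-indexed position; the function name is ours):

```python
def knuth_move(w, i):
    """Apply the elementary Knuth transformation acting on the letters in
    positions i, i+1, i+2 of w (0-indexed), if one applies; else return None.

    With x < y < z the two patterns are
        ...zxy... <-> ...xzy...   (the last letter of the triple is the middle value)
        ...yxz... <-> ...yzx...   (the first letter of the triple is the middle value)
    """
    a, b, c = w[i], w[i + 1], w[i + 2]
    w = list(w)
    if min(a, b) < c < max(a, b):        # zxy <-> xzy: swap positions i, i+1
        w[i], w[i + 1] = w[i + 1], w[i]
    elif min(b, c) < a < max(b, c):      # yxz <-> yzx: swap positions i+1, i+2
        w[i + 1], w[i + 2] = w[i + 2], w[i + 1]
    else:                                # monotone triple: no move possible
        return None
    return tuple(w)

print(knuth_move((3, 1, 2), 0))  # the zxy <-> xzy pattern: gives (1, 3, 2)
print(knuth_move((1, 2, 3), 0))  # increasing triple: None
```

The two branches test whether the last or the first letter of the triple is the middle value, which is exactly when the respective pattern applies; this also makes the well-definedness (at most one move per position) visible.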

Suppose now that w = w1w2 . . . wn ∈ Symn and that we wish to perform a Knuth transformation on the subsequence wiwi+1wi+2 for some 1 ≤ i < n − 1. We cannot have wi > wi+1 > wi+2 or wi < wi+1 < wi+2, and hence we must have either wi < wi+1 > wi+2 or wi > wi+1 < wi+2. Now if wi < wi+1 > wi+2 then wsi > w and wsi+1 < w (Lemma 1.1.1). On the other hand, if wi > wi+1 < wi+2 then wsi < w and wsi+1 > w. In other words, if w ∈ Symn is such that we can perform an elementary Knuth transformation on the i, i+1 and i+2 positions then R(w) ∩ {si, si+1} contains exactly one element. Conversely, if R(w) ∩ {si, si+1} contains one element then we must have wi < wi+1 > wi+2 or wi > wi+1 < wi+2 (by Lemma 1.1.1 again) and hence an elementary Knuth transformation is applicable. Hence, for 1 ≤ i < n − 1 we define:

    Di = {w ∈ Symn | R(w) ∩ {si, si+1} contains exactly one element}

Then w ∈ Di if and only if it is possible to perform an elementary Knuth transformation on the i, i+1 and i+2 positions of w.

Now, if w ∈ Di, consider the coset w〈si, si+1〉 and let w0 be the distinguished coset representative. As above we can depict the coset as in Lemma 5.2.1, with w0 at the bottom and w0sisi+1si at the top.

Now consider the elements w0si, w0si+1, w0sisi+1 and w0si+1si. Right multiplication by si lifts w0si+1 and w0sisi+1 and lowers w0si and w0si+1si. On the other hand, right multiplication by si+1 lifts w0si and w0si+1si and lowers w0si+1 and w0sisi+1. Hence all of these elements are in Di. Right multiplication by either si or si+1 lifts w0 and lowers w0sisi+1si, and so neither w0 nor w0sisi+1si is in Di. Hence w must be one of the 'middle' elements w0si, w0si+1, w0sisi+1 or w0si+1si. The above comments also make it clear that either wsi ∈ Di or wsi+1 ∈ Di, but not both. In other words, Di ∩ {wsi, wsi+1} contains precisely one element, and so we can define a map Ki : Di → Di by:

    Ki(w) = the unique element of Di ∩ {wsi, wsi+1}

Note that if w ∈ Di then Ki(w) = wr for some r ∈ {si, si+1}. Then Ki(w)r = wr² = w ∈ Di, and so Ki(Ki(w)) = w. We have therefore shown:

Lemma 5.2.2. Ki is an involution on Di.

The following proposition shows that the functions Ki for 1 ≤ i < n − 1 realise the elementary Knuth transformations:

Proposition 5.2.3. Suppose that w ∈ Symn and that it is possible to perform an elementary Knuth transformation on the i, i+1 and i+2 positions of w. Then w ∈ Di, and Ki(w) is the permutation obtained from w by performing the only possible elementary Knuth transformation on the subsequence wiwi+1wi+2 of w.

Proof. We have already seen that w ∈ Di if and only if it is possible to perform an elementary Knuth transformation on the i, i+1 and i+2 positions, and that, in this case, only one elementary Knuth transformation is possible. Now consider the coset w〈si, si+1〉 and let w0 be the distinguished coset representative. Let x = w0(i), y = w0(i+1) and z = w0(i+2), so that w0 has the form . . . xyz . . . with x < y < z. Now, if wsi < w and wsi+1 > w then either w = w0si or w = w0si+1si. In the first case Ki(w) = w0sisi+1, and so Ki affords the elementary Knuth transformation . . . yxz . . . ↦ . . . yzx . . . . In the second case Ki(w) = w0si+1, and so Ki affords the transformation . . . zxy . . . ↦ . . . xzy . . . . Now Ki is an involution, and so Ki also maps . . . yzx . . . to . . . yxz . . . and . . . xzy . . . to . . . zxy . . . . Hence the action of Ki agrees with the elementary Knuth transformations for all arrangements of x, y, z in the i, i+1 and i+2 positions of w for which elementary Knuth transformations are possible.
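Proposition 5.2.3 can be illustrated concretely. In the sketch below (our names; permutations as tuples, positions 0-indexed, with si acting on the right by swapping positions i and i+1), K is computed straight from the definition of Di, and on the example w = 2413 it performs the Knuth move . . . yzx . . . ↦ . . . yxz . . . on the first three letters:

```python
def right_mult(w, i):
    """w * s_i: swap the entries in positions i and i+1 (0-indexed)."""
    w = list(w)
    w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def right_descent(w, i):
    """s_i lies in R(w) iff w has a descent at position i."""
    return w[i] > w[i + 1]

def in_D(w, i):
    """w lies in D_i iff exactly one of s_i, s_{i+1} is a right descent of w."""
    return right_descent(w, i) != right_descent(w, i + 1)

def K(w, i):
    """K_i(w): the unique element of D_i among w*s_i and w*s_{i+1}."""
    assert in_D(w, i)
    candidates = [v for v in (right_mult(w, i), right_mult(w, i + 1)) if in_D(v, i)]
    assert len(candidates) == 1  # exactly one of the two lands in D_i
    return candidates[0]

w = (2, 4, 1, 3)           # positions 0,1,2 carry 2 4 1: the pattern yzx
print(in_D(w, 0))           # True
print(K(w, 0))              # (2, 1, 4, 3): the Knuth move ...yzx... -> ...yxz...
print(K(K(w, 0), 0) == w)   # True: K_i is an involution (Lemma 5.2.2)
```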

The following is immediate:

Corollary 5.2.4. Suppose that x, y ∈ Symn. Then the P-symbols of x and y are equal if and only if there exists a sequence i1, i2, . . . , im such that K_{i_{k−1}} K_{i_{k−2}} . . . K_{i_1}(x) ∈ D_{i_k} for all k and y = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x).

Proof. We have seen (in Theorem 2.6.4) that the P-symbols of x and y are equal if and only if there exists a chain x = x1, x2, . . . , xm = y in which xk+1 is obtained from xk by performing an elementary Knuth transformation in the ik, ik+1 and ik+2 positions of xk, for some ik. Now, from above, xk ∈ D_{i_k} and xk+1 = K_{i_k}(xk) for all k. Hence K_{i_{k−1}} K_{i_{k−2}} . . . K_{i_1}(x) ∈ D_{i_k} for all k and y = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x).
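Corollary 5.2.4 is easy to experiment with once the P-symbol is computable. Below is a minimal sketch of Schensted row insertion (rows as sorted lists; the function name is ours): since 312 and 132 differ by the single Knuth move zxy ↔ xzy, they receive the same P-symbol.

```python
from bisect import bisect_right

def p_symbol(w):
    """The P-symbol of the permutation w via Schensted row insertion."""
    rows = []
    for letter in w:
        for row in rows:
            pos = bisect_right(row, letter)      # first entry strictly greater
            if pos == len(row):                  # letter is largest: sits at the end
                row.append(letter)
                break
            row[pos], letter = letter, row[pos]  # bump displaced entry to next row
        else:
            rows.append([letter])                # fell off the bottom: new row
    return rows

print(p_symbol((3, 1, 2)))  # [[1, 2], [3]]
print(p_symbol((1, 3, 2)))  # [[1, 2], [3]]: Knuth equivalent, same P-symbol
```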

We can now begin to apply the function Ki to the cells in the Hecke algebra:

Lemma 5.2.5. If x ∈ Di then x ∼_R Ki(x).


Proof. Since Ki(x) = xsi or Ki(x) = xsi+1, we have µ(x, Ki(x)) ≠ 0 by Lemma 4.3.2(i)², and so x—Ki(x). If Ki(x) = xsi then si ∈ R(x) if and only if si ∉ R(Ki(x)). Similarly, if Ki(x) = xsi+1 then si+1 ∈ R(x) if and only if si+1 ∉ R(Ki(x)). But, since x and Ki(x) are in Di, only one of si and si+1 is an element of R(x), and similarly for R(Ki(x)). Hence R(x) ⊈ R(Ki(x)) and R(Ki(x)) ⊈ R(x). Hence x ∼_R Ki(x).

This allows us to prove:

Proposition 5.2.6. Suppose x, y ∈ Symn have the same Q-symbol. Then x ∼_L y.

Proof. If x and y have the same Q-symbol then, by the Symmetry Theorem (Theorem 2.5.2), x⁻¹ and y⁻¹ have the same P-symbol. Hence, by Corollary 5.2.4, there exists a sequence i1, i2, . . . , im such that K_{i_{k−1}} . . . K_{i_1}(x⁻¹) ∈ D_{i_k} for all k and y⁻¹ = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x⁻¹). Now, from above, x⁻¹ ∼_R K_{i_1}(x⁻¹) ∼_R K_{i_2} K_{i_1}(x⁻¹) ∼_R . . . ∼_R y⁻¹. Hence x⁻¹ ∼_R y⁻¹, and so x ∼_L y (Corollary 4.5.2).

5.3 The Change of Label Map

In the previous section we saw that if x and y have the same Q-symbol then they lie in the same left cell. In the next section we will prove the remarkable converse, thus establishing that x and y lie in the same left cell if and only if their Q-symbols are equal. It then follows easily that x and y are in the same two-sided cell if and only if P(x) and P(y) have the same shape.

Assuming this result, we can label the left cells within any two-sided cell by standard tableaux of shape λ. The next step in showing that the Kazhdan-Lusztig basis is cellular is to show that the representations afforded by the left cells within a two-sided cell are isomorphic. To do this we need, for fixed Q1 and Q2 of shape λ, a map between the sets {(P,Q1) | P standard of shape λ} and {(P,Q2) | P standard of shape λ}. The obvious map (which sends (P,Q1) to (P,Q2)) will be shown to yield an isomorphism of representations. We call this the change of label map.

To examine the effect of the change of label map on the left cells we need a way of realising the map algebraically. However, at this point it is not at all obvious how this might be achieved. It turns out that if x, y ∈ Di, x ∼ (P,Q) and y ∼ (P′,Q), then Ki(x) ∼ (P,R) and Ki(y) ∼ (P′,R) for some standard tableau R. Hence, we can use chains of elementary Knuth transformations to realise the change of label map. For the moment, however, we will be content to investigate the effects of elementary Knuth transformations on the function µ and the relation ∼_L. The fact that the elementary Knuth transformations realise the change of label map will emerge as a corollary in the next section.

If Ki does indeed realise the change of label map then it would send x ∼ (P,Q) and y ∼ (P′,Q) to Ki(x) ∼ (P,R) and Ki(y) ∼ (P′,R) for some standard tableau R. Hence, by the results of the previous section, we would have Ki(x) ∼_L Ki(y). The aim of this section is to prove that this holds in general. To show that, for x, y ∈ Di, x ∼_L y implies Ki(x) ∼_L Ki(y), we need two facts: that L(x) = L(Ki(x)) and that µ(x, y) = µ(Ki(x), Ki(y)). The first is easy: we have seen that x ∼_R Ki(x) and hence L(x) = L(Ki(x)) by Proposition 4.5.3. However, the second requires a long and intricate proof. In the proof, we follow the original argument of Kazhdan-Lusztig [17] closely.

² Up until this chapter most identities involving Kazhdan-Lusztig polynomials have been developed with simple transpositions etc. acting on the left. This reflects the conventional focus in the literature on left cells. However, due to the nature of the Knuth transformations, it will be more convenient to work on the right during this chapter. We will refer without comment to left-hand results from previous chapters. The conversion is always straightforward using the anti-involution ∗ and the fact that Px,w = Px⁻¹,w⁻¹ (Proposition 3.6.2).

Proposition 5.3.1. Suppose that x, y ∈ Di.

(i) If x⁻¹y ∈ 〈si, si+1〉 then x ≺ y if and only if Ki(y) ≺ Ki(x). In this case µ(x, y) = µ(Ki(y), Ki(x)) = 1.

(ii) If x⁻¹y ∉ 〈si, si+1〉 then x ≺ y if and only if Ki(x) ≺ Ki(y). In this case µ(x, y) = µ(Ki(x), Ki(y)).

Proof. Let us fix some notation which we will use throughout the proof. If x ≺ y then Px,y has degree (ℓ(y) − ℓ(x) − 1)/2. If P′ ∈ Z[q] is another polynomial, write Px,y ∼ P′ if Px,y − P′ has degree strictly less than (ℓ(y) − ℓ(x) − 1)/2. Thus Px,y ∼ P′ if and only if P′ has degree (ℓ(y) − ℓ(x) − 1)/2 and leading coefficient µ(x, y).

Since x ∈ Di we have either xsi < x and xsi+1 > x, or xsi+1 < x and xsi > x. In order to deal with these two configurations simultaneously we will use r and t to denote either si and si+1 or si+1 and si. In each case we will define r and then assume that t is defined such that t ∈ {si, si+1} with t ≠ r.

For (i) we have x ≺ y, and x and y are in the same left coset of 〈si, si+1〉. Now, let w0 be the distinguished coset representative of x〈si, si+1〉. Since there are only four elements of x〈si, si+1〉 in Di and x < y (since x ≺ y), we must have x = w0r for some r ∈ {si, si+1}, and hence either y = w0rt or y = w0tr. Thus µ(x, y) = 1 by Lemma 4.3.2(i). Now, since x = w0r we have Ki(x) = w0rt and Ki(y) = w0r or Ki(y) = w0t. In either case Lemma 4.3.2(i) applies again to yield Ki(y) ≺ Ki(x) and µ(Ki(y), Ki(x)) = 1. The converse follows since Ki is an involution.

The proof of (ii) is more difficult, and proceeds by cases:

Case 1: x−1Ki(x) = y−1Ki(y).

Hence Ki(x) = xr and Ki(y) = yr for some r ∈ {si, si+1}. Now, if xr > x and yr < y then Proposition 4.3.3 applies to give that x = yr, and so x and y lie in the same left coset of 〈si, si+1〉. This is a contradiction. On the other hand, if xr < x and yr > y then xt > x and yt < y (since x, y ∈ Di), and so Proposition 4.3.3 applies again to give the same contradiction. Hence either xr < x and yr < y, or xr > x and yr > y. Throughout, we will argue the equivalence of x ≺ y and Ki(x) ≺ Ki(y). Hence we can assume without loss of generality that xr < x and yr < y. (If xr > x and yr > y then we can replace x by xr and y by yr to get that xr = Ki(x) ≺ yr = Ki(y) if and only if Ki(xr) = x ≺ Ki(yr) = y.) So Ki(x) = xr lies immediately below x, and Ki(y) = yr lies immediately below y, in their respective cosets.


If x ≰ yr then Lemma 4.3.2(ii) gives Px,y = Pxr,yr, hence µ(x, y) = µ(xr, yr), and the result follows. So assume that x ≤ yr. Since xr < x we have cx = 0 in the inductive formula (4.3.1), and we have:

    Px,y ∼ qPx,yr + Pxr,yr − ∑_{x ≤ z ≺ yr, zr < z} q_z^{−1/2} q_y^{1/2} µ(z, yr) Px,z

If either x ≺ y or xr ≺ yr then x ⊀ yr (since both x ≺ y and xr ≺ yr imply εx = −εy, whereas x ≺ yr forces εx = εy), and hence we can rewrite the sum over all those z satisfying x < z ≺ yr and zr < z. Also, if either x ≺ y or xr ≺ yr then there is a term on one side of the equation with degree at least (ℓ(y) − ℓ(x) − 1)/2. Hence any terms of degree less than (ℓ(y) − ℓ(x) − 1)/2 can be ignored. Now, if x ⊀ z then deg Px,z < (ℓ(z) − ℓ(x) − 1)/2, and so deg q_z^{−1/2} q_y^{1/2} Px,z < (ℓ(y) − ℓ(x) − 1)/2. Hence we can ignore any z in the sum which do not satisfy x ≺ z. Also, if x ≺ z then the only term of degree (ℓ(y) − ℓ(x) − 1)/2 in q_z^{−1/2} q_y^{1/2} Px,z is q_z^{−1/2} q_y^{1/2} multiplied by the leading term of Px,z (which has coefficient µ(x, z)). Hence we can replace Px,z with µ(x, z) q^{(ℓ(z)−ℓ(x)−1)/2}. Thus we have:

    Px,y ∼ qPx,yr + Pxr,yr − ∑_{x ≺ z ≺ yr, zr < z} q_z^{−1/2} q_y^{1/2} µ(z, yr) µ(x, z) q^{(ℓ(z)−ℓ(x)−1)/2}

         ∼ qPx,yr + Pxr,yr − ∑_{x ≺ z ≺ yr, zr < z} q^{(ℓ(y)−ℓ(x)−1)/2} µ(z, yr) µ(x, z)

Now assume that z appears in the sum. If t ∈ R(z) then we have zt < z, xt > x and x ≺ z; then Proposition 4.3.3 applies to yield that z = xt. On the other hand, if t ∉ R(z) then zt > z, yrt < yr and z ≺ yr; again Proposition 4.3.3 applies, forcing z = yrt. Hence z = xt or z = yrt. But we require r ∈ R(z). Now r ∈ R(xt) but r ∉ R(yrt). Hence the only possibility is z = xt. Now µ(x, xt) = 1 by Lemma 4.3.2(i), and so our expression becomes:

    Px,y ∼ Pxr,yr + qPx,yr − q^{(ℓ(y)−ℓ(x)−1)/2} µ(xt, yr)

We have yrt < yr and so, by Lemma 4.3.2(iii), we have Px,yr = Pxt,yr. Hence:

    qPx,yr = qPxt,yr ∼ q^{(ℓ(yr)−ℓ(xt)−1)/2 + 1} µ(xt, yr) = q^{(ℓ(y)−ℓ(x)−1)/2} µ(xt, yr)

Thus qPx,yr − q^{(ℓ(y)−ℓ(x)−1)/2} µ(xt, yr) is a polynomial of degree less than (ℓ(y) − ℓ(x) − 1)/2. Hence Px,y ∼ Pxr,yr, and so µ(x, y) = µ(Ki(x), Ki(y)).

Case 2: x⁻¹Ki(x) ≠ y⁻¹Ki(y).

Hence Ki(x) = xr and Ki(y) = yt for r, t ∈ {si, si+1} with r ≠ t. Now, if xr < x and yt < y then xt > x (since x ∈ Di), and so Proposition 4.3.3 applies to yield x = yt. This contradicts the fact that x and y lie in different left cosets of 〈si, si+1〉. Interchanging r and t yields a similar contradiction if xr > x and yt > y. Hence either xr < x and yt > y, or xr > x and yt < y. As in the previous case, we can assume without loss of generality that xr < x and yt > y, since we argue that x ≺ y if and only if Ki(x) ≺ Ki(y). So Ki lowers x (Ki(x) = xr < x) and lifts y (Ki(y) = yt > y).

If xr ≮ yt then x ≮ y (otherwise xr < x < y < yt), and so µ(Ki(x), Ki(y)) = µ(x, y) = 0. So we may assume that xr < yt. Now t ∈ R(xr) ∩ R(yt), and so xrt < y (Lemma 1.3.4). Similarly xr < yt implies x < yt or x < ytr (Lemma 1.3.1), and so x < ytr.

Now assume that xr ≰ y. Then xr ≰ (yt)t = y and so, by Lemma 4.3.2(ii), Pxr,yt = Pxrt,y. If xrt ≺ y then (xrt)r > xrt but yr < y, forcing xrt = yr (Proposition 4.3.3). This contradicts the fact that x and y do not lie in the same left coset of 〈si, si+1〉, and so we must have xrt ⊀ y. Hence deg Pxrt,y < (ℓ(y) − ℓ(xrt) − 1)/2 = (ℓ(yt) − ℓ(xr) − 1)/2, and so xr ⊀ yt (since Pxr,yt = Pxrt,y). On the other hand, xr ≰ y implies x ≰ y (otherwise xr < x ≤ y), and so x ⊀ y. Thus if xr ≰ y only one case can occur: µ(x, y) = µ(Ki(x), Ki(y)) = 0.

Now assume that xr ≤ y. Then (xr)t < xr and so cxr = 0 in the inductive formula (4.3.1). This yields:

    Pxr,yt ∼ qPxr,y + Pxrt,y − ∑_{xr ≤ z ≺ y, zt < z} µ(z, y) q_z^{−1/2} q_yt^{1/2} Pxr,z

As in Case 1, Pxr,z does not have a large enough degree to contribute if xr ⊀ z, and if xr ≺ z we can replace q_z^{−1/2} q_yt^{1/2} Pxr,z with q^{(ℓ(yt)−ℓ(xr)−1)/2} µ(xr, z). We have also seen that xrt ⊀ y, and so Pxrt,y has degree less than (ℓ(yt) − ℓ(xr) − 1)/2. Thus, we can rewrite our expression as:

    Pxr,yt ∼ qPxr,y − ∑_{xr ≺ z ≺ y, zt < z} µ(z, y) µ(xr, z) q^{(ℓ(yt)−ℓ(xr)−1)/2}

Let us consider which z appear in the sum. If r ∈ R(z) then (xr)r > xr, zr < z and xr ≺ z, forcing z = x. If r ∉ R(z) then zr > z, yr < y and z ≺ y, forcing z = yr (both by Proposition 4.3.3). But neither x nor yr has t in its right descent set. Hence the sum is empty, and we can conclude that Pxr,yt ∼ qPxr,y. Now yr < y, and hence Pxr,y = Px,y by Lemma 4.3.2(iii). Hence Pxr,yt ∼ qPx,y. Hence xr ≺ yt if and only if x ≺ y, and µ(Ki(x), Ki(y)) = µ(x, y).

As promised, we have:


Corollary 5.3.2. Suppose that x, y ∈ Di. Then x ≤_L y if and only if Ki(x) ≤_L Ki(y). Hence, if x ∼_L y then Ki(x) ∼_L Ki(y).

Proof. Suppose first that x—y with L(x) ⊈ L(y). Then the above Proposition shows that Ki(x)—Ki(y). Also L(x) = L(Ki(x)) and L(y) = L(Ki(y)), since x ∼_R Ki(x) and y ∼_R Ki(y) (Lemma 5.2.5 and Proposition 4.5.3). Hence Ki(x)—Ki(y) and L(Ki(x)) ⊈ L(Ki(y)). Now, if x ≤_L y then there exists a chain x = x1—x2— . . . —xm = y with L(xj) ⊈ L(xj+1) for all j < m. By assumption, x and y lie in Di, and hence R(x) ∩ {si, si+1} and R(y) ∩ {si, si+1} each contain precisely one element. But since R(x) ⊇ R(y) we must have R(x) ∩ {si, si+1} = R(y) ∩ {si, si+1}. Furthermore, since R(x) ⊇ R(xj) ⊇ R(y) for all j (since x ≤_L xj ≤_L y), we must have R(x) ∩ {si, si+1} = R(xj) ∩ {si, si+1}, and hence xj ∈ Di for all j. Hence, by the above arguments, Ki(x) = Ki(x1)—Ki(x2)— . . . —Ki(xm) = Ki(y) with L(Ki(xj)) ⊈ L(Ki(xj+1)) for all j < m, and so Ki(x) ≤_L Ki(y). If x ∼_L y then x ≤_L y and y ≤_L x; thus Ki(x) ≤_L Ki(y) and Ki(y) ≤_L Ki(x), and so Ki(x) ∼_L Ki(y).

5.4 Left Cells and Q-Symbols

Equipped with the results of the previous section we can give a complete description of the left cells in terms of the Q-symbol of the permutation.

Theorem 5.4.1. Let x, y ∈ Symn. Then x ∼_L y if and only if Q(x) = Q(y).

Before we commence the proof we recall some definitions and results from Sections 2.7 and 2.8 which are central to the proof. If λ is a partition of n, we defined the column superstandard tableau, Sλ, as the tableau obtained from a diagram of λ by filling it with 1, 2, . . . , n successively down columns. We defined the descent set of a tableau P, denoted D(P), as the set of i for which i+1 occurs strictly below and weakly left of i in P, and showed that, if w ∈ Symn, then si ∈ R(w) if and only if i ∈ D(Q(w)) (Proposition 2.7.1). Lastly, we showed that if the tableau descent set of P contains the tableau descent set of Sλ then the shape of P is dominated by λ, and that Shape(P) = λ if and only if P = Sλ (Proposition 2.8.2). We will use these results without reference during the proof. The argument is based on Ariki [1].
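For concreteness, both constructions recalled above admit short implementations. A sketch with our own conventions (shapes as partitions, tableaux as lists of rows; entries are the usual 1, 2, . . . , n):

```python
def column_superstandard(shape):
    """The column superstandard tableau S_lambda: fill the diagram of the
    partition `shape` (e.g. (3, 2)) with 1, 2, ..., n down successive columns."""
    rows = [[None] * part for part in shape]
    entry = 1
    for col in range(shape[0]):          # shape[0] is the largest part
        for row in rows:
            if col < len(row):           # this row reaches the current column
                row[col] = entry
                entry += 1
    return rows

def tableau_descents(t):
    """D(t): the set of i such that i+1 lies strictly below and weakly left of i."""
    pos = {t[r][c]: (r, c) for r in range(len(t)) for c in range(len(t[r]))}
    n = len(pos)
    return {i for i in range(1, n)
            if pos[i + 1][0] > pos[i][0] and pos[i + 1][1] <= pos[i][1]}

print(column_superstandard((3, 2)))                     # [[1, 3, 5], [2, 4]]
print(tableau_descents(column_superstandard((3, 2))))   # {1, 3}
```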

Proof. We have already seen in Proposition 5.2.6 that if x and y have the same Q-symbol then they lie in the same left cell. It remains to show the converse.

Assume first that x ∼_L y with x ∼ (Sλ, Q) and y ∼ (Sµ, Q′), where Sλ and Sµ are column superstandard tableaux. Now define x′ by x′ ∼ (Sλ, Sλ). Then x and x′ have the same P-symbol and hence are Knuth equivalent. Hence, by Corollary 5.2.4, there exists a sequence i1, i2, . . . , im such that:

    K_{i_{k−1}} K_{i_{k−2}} . . . K_{i_1}(x) ∈ D_{i_k} for all 1 ≤ k ≤ m
    x′ = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x)     (5.4.1)

Now, x ∼_L y and hence R(x) = R(y). Hence y ∈ D_{i_1}, since x ∈ D_{i_1}. Hence K_{i_1}(y) is a well-defined element, and K_{i_1}(x) ∼_L K_{i_1}(y) by Corollary 5.3.2. Repeating this argument we see that K_{i_2} K_{i_1}(y) is a well-defined element satisfying K_{i_2} K_{i_1}(x) ∼_L K_{i_2} K_{i_1}(y). Thus we may define y′ by:

    y′ = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(y)     (5.4.2)


We have y′ ∼_L x′ and hence R(y′) = R(x′). Hence D(Q(y′)) = D(Q(x′)) = D(Sλ), and hence µ ⊴ λ. Now, the above argument is perfectly symmetrical in x and y, so we can repeat it with x and y interchanged to get λ ⊴ µ. Hence λ = µ. Now D(Q(y′)) = D(Sλ) and Q(y′) has shape λ, forcing Q(y′) = Sλ. Hence y′ ∼ (Sλ, Sλ), and so y′ = x′. Applying K_{i_1} K_{i_2} . . . K_{i_m} to (5.4.1) and (5.4.2) we get x = y, since each Ki is an involution.

Now let x ∼_L y with x ∼ (P(x), Q(x)) and y ∼ (P(y), Q(y)) be arbitrary. Let λ and µ be the shapes of P(x) and P(y) respectively. Define x̄ and ȳ by x̄ ∼ (Sλ, Q(x)) and ȳ ∼ (Sµ, Q(y)). Now x ∼_L x̄, since x and x̄ have the same Q-symbol, and similarly y ∼_L ȳ. Hence x̄ ∼_L ȳ, and the above argument applies to force x̄ = ȳ. Hence Q(x) = Q(x̄) = Q(ȳ) = Q(y).

Using the Symmetry Theorem it is straightforward to extend this to the right and two-sided cells:

Corollary 5.4.2. Let x, y ∈ Symn.
(i) x ∼_R y if and only if P(x) = P(y).
(ii) x ∼_LR y if and only if P(x) and P(y) have the same shape.

Proof. For (i), we have x ∼_R y if and only if x⁻¹ ∼_L y⁻¹ (Corollary 4.5.2) which, from above, occurs if and only if Q(x⁻¹) = Q(y⁻¹). By the Symmetry Theorem (Theorem 2.5.2) we have Q(x⁻¹) = P(x) and Q(y⁻¹) = P(y).

For (ii), note that (i) combined with the above theorem yields that if x ∼_L y or x ∼_R y then Shape(P(x)) = Shape(P(y)). If x ∼_LR y then there exists a chain x = x1, x2, . . . , xm = y such that xi ∼_L xi+1 or xi ∼_R xi+1 for 1 ≤ i < m. Hence Shape(P(x)) = Shape(P(x1)) = · · · = Shape(P(xm)) = Shape(P(y)). On the other hand, if x ∼ (P,Q) and y ∼ (P′,Q′) where P and P′ have the same shape, then x ∼_L (P′,Q) ∼_R y, and so x ∼_LR y.
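Theorem 5.4.1 and Corollary 5.4.2 can be checked exhaustively in small rank. The sketch below (our names; a minimal RSK implementation) partitions Sym3 into left cells as the fibres of the Q-symbol, giving four left cells: two singletons and two of size two.

```python
from bisect import bisect_right
from itertools import permutations

def rsk(w):
    """RSK correspondence: return the pair (P, Q) of standard tableaux of w."""
    p, q = [], []
    for step, letter in enumerate(w, start=1):
        r = 0
        while True:
            if r == len(p):                      # fell off the bottom: new row
                p.append([letter])
                q.append([step])
                break
            row = p[r]
            pos = bisect_right(row, letter)      # first entry strictly greater
            if pos == len(row):                  # letter is largest: sits at the end
                row.append(letter)
                q[r].append(step)                # record where the shape grew
                break
            row[pos], letter = letter, row[pos]  # bump displaced entry to next row
            r += 1
    return p, q

# Left cells of Sym_3 = fibres of the Q-symbol (Theorem 5.4.1):
left_cells = {}
for w in permutations((1, 2, 3)):
    _, q = rsk(w)
    left_cells.setdefault(tuple(map(tuple, q)), []).append(w)
for q_symbol, cell in sorted(left_cells.items()):
    print(q_symbol, "->", cell)   # four left cells: two singletons, two pairs
```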

The following result shows that the elementary Knuth transformations realise the change of label map:

Corollary 5.4.3. Assume that x ∼ (P,Q) and y ∼ (P′,Q), and that R is an arbitrary standard tableau of the same shape as Q. Then there exists a sequence i1, i2, . . . , im and well-defined elements K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x) and K_{i_m} K_{i_{m−1}} . . . K_{i_1}(y) satisfying K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x) ∼ (P,R) and K_{i_m} K_{i_{m−1}} . . . K_{i_1}(y) ∼ (P′,R).

Proof. Define x̄ by x̄ ∼ (P,R). Then x and x̄ have the same P-symbol and hence are Knuth equivalent. Hence, by Corollary 5.2.4, there exists a sequence i1, i2, . . . , im such that K_{i_{k−1}} K_{i_{k−2}} . . . K_{i_1}(x) ∈ D_{i_k} for all k and x̄ = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x). Now, x and y have the same Q-symbol and hence are in the same left cell. Hence R(x) = R(y), and so y ∈ D_{i_1} and, by Corollary 5.3.2, K_{i_1}(x) ∼_L K_{i_1}(y). Now, since K_{i_1}(x) and K_{i_1}(y) lie in the same left cell we have, by Proposition 4.5.3, that R(K_{i_1}(x)) = R(K_{i_1}(y)). Hence K_{i_1}(y) ∈ D_{i_2} and we have a well-defined K_{i_2} K_{i_1}(y) satisfying K_{i_2} K_{i_1}(x) ∼_L K_{i_2} K_{i_1}(y). Repeating this argument we see that we have a well-defined ȳ = K_{i_m} K_{i_{m−1}} . . . K_{i_1}(y) with x̄ ∼_L ȳ. Now P(ȳ) = P′, since Knuth equivalent permutations have the same P-symbol. Also, x̄ ∼_L ȳ implies (from the theorem above) that R = Q(x̄) = Q(ȳ). Hence x̄ ∼ (P,R) and ȳ ∼ (P′,R).


The following can be used to show that the representations afforded by left cells corresponding to Q-symbols of the same shape are isomorphic:

Corollary 5.4.4. Suppose that P, P′, Q and R are standard tableaux of the same shape and that x ∼ (P,Q), y ∼ (P′,Q), x̄ ∼ (P,R) and ȳ ∼ (P′,R). Then µ(x, y) = µ(x̄, ȳ).

Proof. By the above Corollary we can find i1, i2, . . . , im such that K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x) = x̄ and K_{i_m} K_{i_{m−1}} . . . K_{i_1}(y) = ȳ. By repeated application of Proposition 5.3.1 we have:

    µ(x̄, ȳ) = µ(K_{i_m} K_{i_{m−1}} . . . K_{i_1}(x), K_{i_m} K_{i_{m−1}} . . . K_{i_1}(y))
            = µ(K_{i_{m−1}} K_{i_{m−2}} . . . K_{i_1}(x), K_{i_{m−1}} K_{i_{m−2}} . . . K_{i_1}(y))
            ⋮
            = µ(x, y)

5.5 Property A

In the previous section we obtained a complete description, in terms of P- and Q-symbols, of the left, right and two-sided cells. Given this characterisation we might suspect that our cellular basis has the form Cw = C^λ_{P,Q}, where w ∼ (P,Q) and λ is the shape of P. For this basis to be cellular, axiom (C2) states that, in left multiplying C^λ_{P,Q} by arbitrary a ∈ Hn(q), the only elements in the same two-sided cell which appear with non-zero coefficient are of the form C^λ_{P′,Q} with P′ another standard tableau of shape λ. In other words, in left multiplying Cw by a ∈ Hn(q), the only elements in the same two-sided cell as w which appear are in the same left cell as w.

If Cx appears with non-zero coefficient in aCw for some a ∈ H then (recalling our original definition of the cell preorders given in Section 4.1) we have x ≤_L w. Hence we must show that if x and w are in the same two-sided cell and x ≤_L w, then x and w are in the same left cell. Unfortunately, this statement is equivalent to a deep result of Kazhdan-Lusztig theory known as "Property A". This is proved by Kazhdan and Lusztig using intersection cohomology in [18].³ We will be content to offer a proof in a special case and refer the courageous reader to Kazhdan-Lusztig.

Property A. Suppose that x, y ∈ Symn satisfy x—y, L(x) ⊈ L(y) and R(x) ⊈ R(y). Then x and y do not lie in the same two-sided cell.

Proof in a special case: Assume that y ∼ (P, Sλ), where λ is the shape of P and Sλ is the column superstandard tableau of shape λ. Then, since x—y and L(x) ⊈ L(y), we have x ≤_L y. Hence R(x) ⊇ R(y) (Proposition 4.5.3). If P(x) and P(y) have the same shape then Lemma 2.7.2 forces Q(x) = Sλ. This contradicts the fact that R(x) ⊈ R(y) (since si ∈ R(x) if and only if i ∈ D(Q(x)), by Proposition 2.7.1). Hence P(x) and P(y) do not have the same shape, and so x and y lie in different two-sided cells.

We use this to prove:

Lemma 5.5.1. Suppose x ≤_L y and P(x) and P(y) have the same shape. Then R(x) = R(y).

³ Actually, Kazhdan and Lusztig prove that, in the Hecke algebra of a Weyl group (of which the symmetric group is an example), the coefficients of the Kazhdan-Lusztig polynomials are all positive. See the notes to this chapter.


Proof. Since x ≤_L y there exists a chain x = x1—x2— . . . —xm = y such that L(xi) ⊈ L(xi+1) for all 1 ≤ i < m. In particular x ≤_L xi ≤_L y for all i. By assumption x and y have the same shape, and so x ∼_LR y. Hence for all i we have x ≤_LR xi ≤_LR y ≤_LR x, and so all of the xi lie in the same two-sided cell.

Now, assume for contradiction that R(xi) ≠ R(xi+1) for some i. Since xi ≤_L xi+1 we have R(xi) ⊇ R(xi+1). Hence, if R(xi) ≠ R(xi+1) then R(xi) ⊈ R(xi+1). But then we have xi—xi+1, L(xi) ⊈ L(xi+1) and R(xi) ⊈ R(xi+1), and we can apply Property A above to conclude that xi and xi+1 do not lie in the same two-sided cell. This is a contradiction. Hence R(xi) = R(xi+1) for all 1 ≤ i < m, and so R(x) = R(y).

We can now prove that the situation described at the start of this section cannot occur:

Proposition 5.5.2. Let x, y ∈ Sym_n.
(i) If x ≤_L y and x ∼_LR y then x ∼_L y.
(ii) If x ≤_R y and x ∼_LR y then x ∼_R y.

Proof. For (i) assume that x ∼ (P, Q) and y ∼ (P′, Q′). Let λ be the shape of y and define ȳ by ȳ ∼ (P′, S_λ). Since y and ȳ have the same P-symbol, there exists a sequence i_1, i_2, …, i_m such that K_{i_{k−1}} K_{i_{k−2}} ··· K_{i_1}(y) ∈ D_{i_k} for all k and ȳ = K_{i_m} K_{i_{m−1}} ··· K_{i_1}(y), by Corollary 5.2.4. We have x ≤_L y and x ∼_LR y, and so R(x) = R(y) by Lemma 5.5.1 above. Hence x ∈ D_{i_1} if and only if y ∈ D_{i_1}. Thus we can apply Corollary 5.3.2 to get that K_{i_1}(x) ≤_L K_{i_1}(y). Now, from above, R(K_{i_1}(x)) = R(K_{i_1}(y)) and so K_{i_1}(x) ∈ D_{i_2}. Hence K_{i_2}K_{i_1}(x) ≤_L K_{i_2}K_{i_1}(y). Continuing in this fashion we see that we get a well-defined x̄ by defining x̄ = K_{i_m}K_{i_{m−1}} ··· K_{i_1}(x), and we have x̄ ≤_L ȳ.

Thus R(x̄) ⊇ R(ȳ) and so D(Q(x̄)) ⊇ D(Q(ȳ)) = D(S_λ) (Proposition 2.7.1). But by assumption x and y have the same shape; hence x̄ and ȳ have the same shape, and Lemma 2.7.2 forces Q(x̄) = S_λ. Hence, by Corollary 5.4.3, we see that K_{i_1}K_{i_2} ··· K_{i_m}(x̄) and K_{i_1}K_{i_2} ··· K_{i_m}(ȳ) have the same Q-symbol. But each K_i is an involution, and hence K_{i_1}K_{i_2} ··· K_{i_m}(x̄) = x and K_{i_1}K_{i_2} ··· K_{i_m}(ȳ) = y. Thus x and y have the same Q-symbol and so x ∼_L y (Proposition 5.2.6).

For (ii), if x ≤_R y and x ∼_LR y then x⁻¹ ≤_L y⁻¹ and x⁻¹ ∼_LR y⁻¹ by Corollary 4.5.2. By (i) we have x⁻¹ ∼_L y⁻¹, implying x ∼_R y (again by Corollary 4.5.2).

5.6 The Main Theorem

Let Λ denote the set of partitions of n. If λ ∈ Λ write M(λ) for the set of standard tableaux of shape λ. If x ∈ Sym_n and x ∼ (P, Q) under the Robinson-Schensted correspondence, write C^λ_{P,Q} = C_x where λ is the shape of P. If λ, µ ∈ Λ write λ ≤ µ if there exist x, y ∈ Sym_n such that x ≤_LR y with Shape(P(x)) = λ and Shape(P(y)) = µ. Since ≤_LR is a preorder, ≤ is a preorder. Now if λ ≤ µ ≤ λ then there exist x and y such that Shape(P(x)) = λ and Shape(P(y)) = µ with x ≤_LR y ≤_LR x. Hence x ∼_LR y, and so λ = µ by Corollary 5.4.2. Hence ≤ is a partial order on partitions. Using this new notation we can show that the Kazhdan-Lusztig basis is cellular.


As in the definition of a cellular algebra, define H(< λ) and H(≤ λ) to be the A-span of those basis elements C^µ_{U,V} with U, V ∈ M(µ) satisfying µ < λ and µ ≤ λ respectively. We start by reformulating the multiplication formulae of Section 4.4 in the quotient module H(≤ λ)/H(< λ):

Lemma 5.6.1. The action of T_i on C^λ_{P,Q} in H(≤ λ)/H(< λ) is given by:

T_i C^λ_{P,Q} ≡ { −C^λ_{P,Q}                                                          if i ∈ D(P)
                { q C^λ_{P,Q} + q^{1/2} ∑_{P′ ∈ M(λ), i ∈ D(P′)} µ(P′, P) C^λ_{P′,Q}   if i ∉ D(P)
(mod H(< λ))

where µ(P′, P) ∈ A is independent of Q.

Proof. First note that H(≤ λ) and H(< λ) are certainly ideals of H_n(q), because if w ∈ Sym_n is such that P(w) has shape λ then H(≤ λ) = H(≤_LR w) and H(< λ) = H(<_LR w), by the way that we have defined ≤. Now, recall the multiplication formula given in Section 4.4:

T_{s_i} C_w = { −C_w                                             if s_i ∈ L(w)
             { q C_w + q^{1/2} ∑_{z—w, s_i ∈ L(z)} µ(z, w) C_z    if s_i ∉ L(w)        (5.6.1)

We want to reduce (5.6.1) modulo H(<_LR w). If C_z appears with non-zero coefficient we have z ≤_L w (recalling the original definition of the cell preorders given in Section 4.1). If C_z ∉ H(<_LR w) then z ∼_LR w. Hence, by Proposition 5.5.2, the only C_z not in H(<_LR w) which emerge with non-zero coefficient in (5.6.1) satisfy z ∼_L w. Also, since we have defined µ(z, w) = 0 if the relation z—w does not hold, we can omit the requirement that z—w:

T_{s_i} C_w ≡ { −C_w                                                  if s_i ∈ L(w)
             { q C_w + q^{1/2} ∑_{s_i ∈ L(z), z ∼_L w} µ(z, w) C_z     if s_i ∉ L(w)
(mod H(<_LR w))        (5.6.2)

We want to reinterpret (5.6.2) in terms of our new notation for the basis. Fix w ∼ (P, Q). Then summing over µ(z, w) C_z with z ∼_L w is equivalent to summing over µ(z, w) C^λ_{P′,Q} with P′ ∈ M(λ), by Proposition 5.4.1. Also, if s_i ∈ L(w) then i ∈ D(P) by Lemma 2.7.1. Lastly, by the remarks at the start of the proof, H(<_LR w) = H(< λ). Hence (5.6.2) becomes:

T_i C^λ_{P,Q} ≡ { −C^λ_{P,Q}                                                                     if i ∈ D(P)
                { q C^λ_{P,Q} + q^{1/2} ∑_{P′ ∈ M(λ), z ∼ (P′,Q), i ∈ D(P′)} µ(z, w) C^λ_{P′,Q}   if i ∉ D(P)
(mod H(< λ))        (5.6.3)

Now, by Corollary 5.4.4, if R is another standard tableau of shape λ and if w′ ∼ (P, R) and z′ ∼ (P′, R), then µ(z′, w′) = µ(z, w). Hence, if we define µ(P′, P) = µ(z, w), we obtain a well-defined integer independent of Q. Thus (5.6.3) becomes:

T_i C^λ_{P,Q} ≡ { −C^λ_{P,Q}                                                          if i ∈ D(P)
                { q C^λ_{P,Q} + q^{1/2} ∑_{P′ ∈ M(λ), i ∈ D(P′)} µ(P′, P) C^λ_{P′,Q}   if i ∉ D(P)
(mod H(< λ))

We can now prove:


Theorem 5.6.2. The Kazhdan-Lusztig basis {C^λ_{P,Q} | λ ∈ Λ; P, Q ∈ M(λ)} is a cellular basis for H_n(q).

Proof. The A-linear map given by C^λ_{P,Q} ↦ C^λ_{Q,P} sends C_w to C_{w⁻¹} by the Symmetry Theorem (Theorem 2.5.2), and hence is the map ∗. We have seen in Section 3.4 that ∗ is an anti-involution, and hence (C1) is satisfied.

Define r_i(P′, P) by:

r_i(P′, P) = { −1                  if i ∈ D(P) and P′ = P
             { 0                   if i ∈ D(P) and P′ ≠ P
             { q                   if i ∉ D(P) and P′ = P
             { q^{1/2} µ(P′, P)     if i ∉ D(P) and P′ ≠ P

Lemma 5.6.1 shows that, for all i, we have:

T_i C^λ_{P,Q} ≡ ∑_{P′ ∈ M(λ)} r_i(P′, P) C^λ_{P′,Q}   (mod H(< λ))        (5.6.4)

Now H_n(q) is generated by {T_i | 1 ≤ i < n}, and hence (5.6.4) shows that, for all a ∈ H_n(q), we can find r_a(P′, P), independent of Q, such that:

a C^λ_{P,Q} ≡ ∑_{P′ ∈ M(λ)} r_a(P′, P) C^λ_{P′,Q}   (mod H(< λ))

Hence (C2) is satisfied, and so {C^λ_{P,Q} | λ ∈ Λ; P, Q ∈ M(λ)} is a cellular basis.
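The only combinatorial input to axiom (C1) above is the Symmetry Theorem: P(w⁻¹) = Q(w) and Q(w⁻¹) = P(w). This too is easy to confirm by machine; the following Python check (illustrative, not part of the thesis, using the row-insertion conventions of Chapter 2) runs over all of Sym_4:

```python
from bisect import bisect_left
from itertools import permutations

def rsk(w):
    """Robinson-Schensted row insertion: returns (P, Q) as lists of rows."""
    P, Q = [], []
    for pos, val in enumerate(w, start=1):
        for row, qrow in zip(P, Q):
            j = bisect_left(row, val)
            if j == len(row):
                row.append(val)
                qrow.append(pos)
                break
            row[j], val = val, row[j]
        else:
            P.append([val])
            Q.append([pos])
    return P, Q

def inverse(w):
    """Inverse of a permutation in one-line notation."""
    inv = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        inv[val - 1] = pos
    return tuple(inv)

for w in permutations(range(1, 5)):
    P, Q = rsk(w)
    assert rsk(inverse(w)) == (Q, P)   # the Symmetry Theorem on Sym_4
print("Symmetry Theorem verified on Sym_4")
```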

5.7 Notes

1. The subgroup ⟨s_i, s_{i+1}⟩ is known as a parabolic subgroup. More generally, in a Coxeter group a (standard) parabolic subgroup is any subgroup generated by a subset of the simple reflections. See, for example, Bourbaki [3] or Humphreys [15].

2. The functions K_i on the subsets D_i are the 'right star operations' of Kazhdan-Lusztig [17]. In Kazhdan-Lusztig, D_i is denoted by D_R(s_i, s_{i+1}) and K_i(x) is denoted by x∗. We have not introduced the 'left star operations' (denoted ∗x) or D_L(s_i, s_{i+1}); these correspond to the 'dual Knuth transformations'. See, for example, Stanley [30] for the definition of dual Knuth transformations.

3. The idea of viewing an element x ∈ D_i as embedded in the coset x⟨s_i, s_{i+1}⟩ was suggested by Shi [28].

4. Corollary 5.4.3 is a purely combinatorial result which we prove using the connection between Q-symbols and left cells. For a combinatorial proof see Schützenberger [27].

5. As we mentioned in the text, the proof of Proposition 5.3.1 follows the original argument of Kazhdan-Lusztig [17] closely.

6. The proof that x ∼_L y if and only if Q(x) = Q(y) is based on Ariki [1] who, in turn, cites Garsia-McLarnan [11].


7. In [18], Kazhdan and Lusztig prove that, in the Hecke algebra of a Weyl group (of which the symmetric group is an example), the coefficients of the Kazhdan-Lusztig polynomials are all non-negative integers. In fact this implies Property A. For a proof of this implication see Dyer [4].


A Appendix

A.1 An Alternative Proof of Kazhdan and Lusztig’s Basis Theorem

The proof of Theorem 3.5.1 given in Chapter 3 follows the original proof of Kazhdan and Lusztig [17] closely. We gave it in its entirety for two reasons: it reveals the intricacy of the Kazhdan-Lusztig basis and it yields explicit inductive information. In this appendix we present a briefer proof, originally due to Lusztig [19] and presented elegantly in Soergel [29].

It is convenient to renormalise, and define a new basis of H_n(q) by T̃_w = q_w^{−1/2} T_w. A simple calculation shows that the multiplication in (3.1.2) becomes:

T̃_w T̃_r = { T̃_{wr}                              if wr > w
           { T̃_{wr} + (q^{1/2} − q^{−1/2}) T̃_w    if wr < w        (A.1.1)

Under the new basis the identity in (3.3.1) becomes:

T̃_r^{−1} = T̃_r + (q^{−1/2} − q^{1/2}) T̃_id        (A.1.2)

Also, making use of Lemma 3.3.2 we have:

ι(T̃_w) = ι(q_w^{−1/2} T_w) = q_w^{1/2} ( ∑_{y≤w} R_{y,w} T_y ) = q_w^{1/2} R_{w,w} T_w + q_w^{1/2} ∑_{y<w} R_{y,w} T_y

Using the fact that R_{w,w} = q^{−ℓ(w)} (Proposition 3.3.2) and the definition of T̃_y yields:

ι(T̃_w) = T̃_w + ∑_{y<w} q_w^{1/2} q_y^{1/2} R_{y,w} T̃_y        (A.1.3)

Now, in terms of this new basis, Theorem 3.5.1 states that for all w ∈ Sym_n there is a unique element

C̃_w = ∑_{y≤w} ε_y ε_w q_w^{1/2} q_y^{−1/2} P̄_{y,w} T̃_y

with P_{x,w} ∈ Z[q], P_{w,w} = 1 and deg P_{x,w} ≤ ½(ℓ(w) − ℓ(x) − 1) for x < w. Now if P_{x,w} has degree at most ½(ℓ(w) − ℓ(x) − 1) then q_w^{1/2} q_x^{−1/2} P̄_{x,w} = q^{(ℓ(w)−ℓ(x))/2} P̄_{x,w} is a polynomial in Z[q^{1/2}] without constant term. In other words, if x < w then q_w^{1/2} q_x^{−1/2} P̄_{x,w} ∈ q^{1/2} Z[q^{1/2}]. Hence, Theorem 3.5.1 is equivalent to:⁴

Theorem 3.5.1 (Restatement). For all w ∈ Sym_n there exists a unique element C̃_w such that ι(C̃_w) = C̃_w and C̃_w ∈ T̃_w + ∑_{y<w} q^{1/2} Z[q^{1/2}] T̃_y.

Proof. We first show uniqueness. So assume that C̃_w = T̃_w + ∑_{y<w} h_y T̃_y and C̃′_w = T̃_w + ∑_{y<w} h′_y T̃_y both satisfy the conditions of the theorem. Then C̃_w − C̃′_w is also ι-invariant, since ι is a homomorphism. If C̃_w ≠ C̃′_w then there exists x maximal (with respect to the Bruhat order) such that h_x ≠ h′_x. Then C̃_w − C̃′_w = ι(C̃_w − C̃′_w) implies:

∑_{y<w} (h_y − h′_y) T̃_y = ∑_{y<w} ι(h_y − h′_y) ι(T̃_y)

⁴It is actually not true that the two statements are entirely equivalent: if x < w the original theorem states that P_{x,w} is a polynomial in q, whereas the restatement only guarantees that P_{x,w} is a polynomial in q^{1/2}. Once the restatement is proved it is straightforward to prove inductively that P_{x,w} indeed lies in Z[q].


Since x is maximal, we see from (A.1.3) that the coefficient of T̃_x on the right hand side is ι(h_x − h′_x). Equating coefficients yields h_x − h′_x = ι(h_x − h′_x), which is impossible since h_x − h′_x is a non-zero element of q^{1/2} Z[q^{1/2}], and so ι(h_x − h′_x) ∈ q^{−1/2} Z[q^{−1/2}]. Hence C̃_w = C̃′_w, and there is at most one choice of C̃_w satisfying the conditions of the theorem.

We now show the existence of the C̃_w. Clearly ι(T̃_id) = T̃_id, and so setting C̃_id = T̃_id satisfies the conditions of the theorem. Now if r ∈ S then ι(T̃_r) = T̃_r + (q^{−1/2} − q^{1/2}) T̃_id by (A.1.2), and so

ι(T̃_r − q^{1/2} T̃_id) = T̃_r + (q^{−1/2} − q^{1/2}) T̃_id − q^{−1/2} T̃_id = T̃_r − q^{1/2} T̃_id.

Hence we have C̃_r = T̃_r − q^{1/2} T̃_id for all simple transpositions r ∈ S. A simple calculation yields the following multiplication formula for C̃_r:

T̃_w C̃_r = { T̃_{wr} − q^{1/2} T̃_w     if wr > w
           { T̃_{wr} − q^{−1/2} T̃_w    if wr < w        (A.1.4)

Now, for the inductive step, assume that the C̃_y are known and satisfy the conditions of the theorem for all y < w. Choose r ∈ S such that wr < w (so that C̃_{wr} is known). Then by assumption C̃_{wr} = T̃_{wr} + ∑_{y<wr} h′_y T̃_y for some h′_y ∈ q^{1/2} Z[q^{1/2}]. Now:

C̃_{wr} C̃_r = T̃_{wr} C̃_r + ∑_{y<wr} h′_y T̃_y C̃_r

Since (wr)r = w > wr we have T̃_{wr} C̃_r = T̃_w − q^{1/2} T̃_{wr} by (A.1.4). Also by (A.1.4), T̃_y C̃_r is equal to either T̃_{yr} − q^{1/2} T̃_y or T̃_{yr} − q^{−1/2} T̃_y. In either case the coefficients in h′_y T̃_y C̃_r are in Z[q^{1/2}], since h′_y ∈ q^{1/2} Z[q^{1/2}]. Also, since y < wr, by Lemma 1.3.1 either yr < w or yr < wr (and so yr < w, since wr < w). Hence we may write:

C̃_{wr} C̃_r = T̃_w + ∑_{y<w} h_y T̃_y        (A.1.5)

for some h_y ∈ Z[q^{1/2}]. Now C̃_{wr} C̃_r is certainly ι-invariant (again since ι is a homomorphism and C̃_{wr} and C̃_r are ι-invariant). However, the h_y are not necessarily without constant term. Notice, however, that for y < w the coefficient of T̃_y in h_y T̃_y − h_y(0) C̃_y is h_y − h_y(0), which is certainly without constant term. And, by our inductive assumption on C̃_y, the coefficient of T̃_x for x < y is in q^{1/2} Z[q^{1/2}]. Therefore, we may form:

C̃_w = T̃_w + ∑_{y<w} (h_y T̃_y − h_y(0) C̃_y)        (A.1.6)

Now, from the observations above, C̃_w ∈ T̃_w + ∑_{y<w} q^{1/2} Z[q^{1/2}] T̃_y. Note also that C̃_w = C̃_{wr} C̃_r − ∑_{y<w} h_y(0) C̃_y, and hence ι(C̃_w) = C̃_w. Hence the existence of C̃_w is proven.
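The existence argument above is effectively an algorithm: starting from the identity, multiply a known basis element by a C̃_r and strip the constant terms. The Python sketch below is illustrative and not from the thesis; it carries the induction out for Sym_3, representing Laurent polynomials in v = q^{1/2} as dictionaries {exponent: coefficient}, and then checks ι-invariance together with the known fact that every Kazhdan-Lusztig polynomial for Sym_3 equals 1:

```python
from itertools import permutations

n = 3
identity = tuple(range(1, n + 1))
elems = list(permutations(range(1, n + 1)))

def length(w):
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def times_s(w, i):
    """Right multiplication by s_i (swap positions i and i+1)."""
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

# --- Laurent polynomials in v = q^{1/2}: {exponent: coefficient} ---
def padd(a, b):
    c = dict(a)
    for e, x in b.items():
        c[e] = c.get(e, 0) + x
        if c[e] == 0:
            del c[e]
    return c

def pshift(a, coeff, s):
    """coeff * v^s * a"""
    return {e + s: coeff * x for e, x in a.items()}

def pbar(a):
    """The involution v -> v^{-1} on coefficients."""
    return {-e: x for e, x in a.items()}

# --- Elements of H_3(q) in the renormalised basis: {w: Laurent polynomial} ---
def eadd(a, b):
    c = dict(a)
    for w, p in b.items():
        c[w] = padd(c.get(w, {}), p)
        if not c[w]:
            del c[w]
    return c

def escale(a, coeff, s):
    return {w: pshift(p, coeff, s) for w, p in a.items()}

def mult_s(h, i):
    """Right multiplication by the basis element indexed by s_i, using (A.1.1)."""
    out = {}
    for w, p in h.items():
        ws = times_s(w, i)
        out[ws] = padd(out.get(ws, {}), p)
        if length(ws) < length(w):   # the (v - v^{-1}) correction term
            out[w] = padd(out.get(w, {}), padd(pshift(p, 1, 1), pshift(p, -1, -1)))
    return {w: p for w, p in out.items() if p}

def reduced_word(w):
    word = []
    while w != identity:
        i = next(i for i in range(1, n) if length(times_s(w, i)) < length(w))
        word.append(i)
        w = times_s(w, i)
    return word[::-1]

def iota(h):
    """The bar involution: v -> v^{-1}, and each simple generator to its inverse (A.1.2)."""
    out = {}
    for w, p in h.items():
        term = {identity: pbar(p)}
        for i in reduced_word(w):
            term = eadd(mult_s(term, i),
                        eadd(escale(term, 1, -1), escale(term, -1, 1)))
        out = eadd(out, term)
    return out

# --- Inductive construction, exactly as in the proof ---
C = {identity: {identity: {0: 1}}}
for w in sorted(elems, key=length):
    if w == identity:
        continue
    i = next(i for i in range(1, n) if length(times_s(w, i)) < length(w))
    wr = times_s(w, i)
    # multiply the known element for wr by the element for s_i (A.1.5)
    h = eadd(mult_s(C[wr], i), escale(C[wr], -1, 1))
    # strip constant terms h_y(0) by subtracting known elements (A.1.6)
    for y in sorted(h, key=length):
        c0 = h.get(y, {}).get(0, 0)
        if y != w and c0:
            h = eadd(h, escale(C[y], -c0, 0))
    C[w] = h

for w in elems:
    assert iota(C[w]) == C[w]        # each constructed element is iota-invariant
w0 = (3, 2, 1)
assert C[w0] == {w: {length(w0) - length(w): (-1) ** (length(w0) - length(w))}
                 for w in elems}     # all KL polynomials for Sym_3 are 1
print("basis constructed and checked for Sym_3")
```

The subtraction loop is safe in any order because, as the proof observes, the lower-order coefficients of each known basis element carry no constant term.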


A.2 Two-Sided Cells and the Dominance Order

In Section 5.4 we saw that the partitions of n index the two-sided cells of Sym_n. In order to show that the Kazhdan-Lusztig basis is cellular we needed to define a partial order on partitions compatible with the two-sided cell order. We did this in the obvious way: we defined λ ≤ µ if there exist x and y in Sym_n such that Shape(P(x)) = λ, Shape(P(y)) = µ and x is less than or equal to y in the two-sided cell preorder.

From a combinatorial point of view it would be desirable to have an easily described partial order on partitions compatible with the two-sided cell order. It is part of the folklore of this subject that the dominance order provides such an order; that is, that x ≤_LR y if and only if Shape(P(x)) ⊴ Shape(P(y)). However, we are only able to prove one implication:

Proposition A.2.1. Suppose that x and y are in Sym_n with Shape(P(x)) ⊴ Shape(P(y)). Then x ≤_LR y.

The difficulty in the proof is more notational than conceptual, and so we illustrate the method with an example before giving the proof. Consider the partitions λ = (3, 2, 1) and µ = (3, 3) of 6. The partition µ is obtained from λ by performing a raising operation from the outside corner (3, 1) to the inside corner (2, 3). We want to show that there exist x and y in Sym_6 such that x ≤_LR y, Shape(P(x)) = λ and Shape(P(y)) = µ.

We start with the permutation w(λ) = 321546. Clearly w(λ) ∼ (S_λ, S_λ). Now we define w by replacing the largest number in the column from which we wish to move a box (column 1) by the largest entry in the column to which we wish to move a box (column 3). To make sure that we still have a permutation, we must decrement each of the entries larger than the replaced one by 1. In our example, we want to move a box from the first column to the third, and so we replace the 3 with a 6 and then decrement 4, 5 and 6 to 3, 4 and 5. Thus we obtain w = 621435. It is easily verified that this procedure does not change the shape, and so Shape(P(w)) = λ.

We then form a chain of elements:

x_1 = x = 621435         R(x_1) = {s_1, s_2, s_4}
x_2 = x_1 s_1 = 261435    R(x_2) = {s_2, s_4}
x_3 = x_2 s_2 = 216435    R(x_3) = {s_1, s_3, s_4}
x_4 = x_3 s_3 = 214635    R(x_4) = {s_1, s_4}
x_5 = x_4 s_4 = 214365    R(x_5) = {s_1, s_3, s_5}

(Note that each permutation is obtained from the previous one by moving the 6 one place to the right.) For 1 ≤ i < 5 we have x_i—x_{i+1} by Lemma 4.3.2(i). We also have s_i ∈ R(x_i) but s_i ∉ R(x_{i+1}), and hence R(x_i) ⊄ R(x_{i+1}) for all 1 ≤ i < 5. Hence x = x_1 ≤_R x_5. Now,

P(x_5) = 1 3 5
         2 4 6

and hence if we set y = x_5 we have x ≤_R y with the shape of P(y) obtained from the shape of P(x) by performing a raising operation.

Proof (Sketch): Suppose that µ is obtained from λ by performing a raising operation from the outside corner at the end of the j-th column to the inside corner at the end of the k-th column. Let c_1, c_2, …, c_m be the column lengths of λ. Define c′_i = ∑_{l≤i} c_l (with c′_0 = 0) and let w_λ be the permutation:

w_λ = ( 1     2       …  c′_1 | c′_1+1  c′_1+2  …  c′_2   | … | c′_{m−1}+1  …  c′_m       )
      ( c′_1  c′_1−1  …  1    | c′_2    c′_2−1  …  c′_1+1 | … | c′_m        …  c′_{m−1}+1 )

It is straightforward to verify that w_λ ∼ (S_λ, S_λ).

Now, define a new permutation w by:

w(i) = { c′_k           if i = c′_{j−1} + 1
       { w_λ(i) − 1     if c′_j < i ≤ c′_k
       { w_λ(i)         otherwise

Then w is a permutation with the same shape as w_λ. Let p = c′_{j−1} + 1 and q = c′_k − 1. Define a family of elements by:

w_p = w
w_{p+1} = w_p s_p
w_{p+2} = w_{p+1} s_{p+1}
⋮
w_q = w_{q−1} s_{q−1}

Now, for all p ≤ i ≤ q we have w_i(i) = c′_k, which is greater than or equal to w_i(l) for all p ≤ l ≤ q. Hence s_i ∈ R(w_i) but s_i ∉ R(w_{i+1}) for all p ≤ i < q. Also, w_i—w_{i+1} for all p ≤ i < q by Lemma 4.3.2(i). Hence w ≤_R w_q. It can be verified that w_q = w_µ, and so we have w ≤_R w_µ with Shape(P(w)) = λ and Shape(P(w_µ)) = µ.

Now, if λ ⊴ µ we have seen in Lemma 2.8.1 that there exists a chain λ = λ_1, λ_2, …, λ_m = µ in which each λ_{i+1} is obtained from λ_i by performing a raising operation. From above, for each i we can find elements x_i ≤_R y_i with Shape(P(x_i)) = λ_i and Shape(P(y_i)) = λ_{i+1}. Since y_i and x_{i+1} have the same shape, we have y_i ∼_LR x_{i+1} for all 1 ≤ i < m (Corollary 5.4.2). We thus have a chain:

x_1 ≤_R y_1 ∼_LR x_2 ≤_R y_2 ∼_LR x_3 ≤_R ··· ≤_R y_{m−1} ∼_LR x_m ≤_R y_m

in which Shape(P(x_1)) = λ and Shape(P(y_m)) = µ. Now, if x and y are any permutations satisfying Shape(P(x)) = λ and Shape(P(y)) = µ, then x ∼_LR x_1 ≤_LR y_m ∼_LR y, and hence x ≤_LR y.


References

[1] S. Ariki, Robinson-Schensted correspondence and left cells, Combinatorial methods in repre-sentation theory (Kyoto, 1998), 1–20, Adv. Stud. Pure Math., 28, Kinokuniya, Tokyo, 2000.

[2] H. Barcelo and A. Ram, Combinatorial Representation Theory, in Math. Sci. Res. Inst. Publ.,38 (1999), Cambridge University Press, Cambridge, 23–90.

[3] N. Bourbaki, Groupes et algèbres de Lie, Chs. IV, V et VI, Hermann, Paris, 1968.

[4] M. J. Dyer, W -Graphs, Gyoja’s Theorem and Lusztig’s Isomorphism Theorem, M.Sc. thesis,University of Sydney, 1985.

[5] M. J. Dyer, Hecke Algebras and Reflections in Coxeter Groups, Ph.D. thesis, University ofSydney, 1987.

[6] M. J. Dyer, and G. I. Lehrer, On Positivity in Hecke Algebras, Geom. Ded. 35, (1990), 115–125.

[7] J. Du, B. Parshall, and J. P. Wang, GLn: Quantum, Rational and Discreet, in preparation.

[8] S. Fishel and I. Grojnowski, Canonical Bases for the Brauer Centralizer Algebra, Math. Research Lett. 2 (1995), 12–26.

[9] S. Fomin, Knuth Equivalence, Jeu de Taquin, and the Littlewood-Richardson Rule, Chapter 7:Appendix 1 in R. P. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge University Press,Cambridge, 1999.

[10] W. Fulton, Young Tableaux: With Applications to Representation Theory and Geometry, Lon-don Mathematical Society student texts, 35 (1997), Cambridge University Press.

[11] A. M. Garsia and T. J. McLarnan, Relations between Young’s Natural and the Kazhdan-LusztigRepresentations of Sn, Adv. Math. 69 (1988), 32–92.

[12] J. J. Graham and G. I. Lehrer, Cellular Algebras, Invent. Math. 123 (1996), 1–34.

[13] J. A. Green, On the Steinberg characters of finite Chevalley groups, Math. Z. 117 (1970), 272–288.

[14] A. Gyoja and K. Uno, On the semisimplicity of Hecke algebras, J. Math. Soc. Japan 41 (1989),no.1, 75–79.

[15] J. Humphreys, Reflection Groups and Coxeter Groups, Cambridge Studies in Advanced Math-ematics, 29 (1990), Cambridge University Press.

[16] N. Iwahori, On the structure of the Hecke ring of a Chevalley group over a finite field, J. Fac. Sci. Univ. Tokyo Sect. 1A. Math. 10 (part 2) (1964), 215–236.

[17] D. Kazhdan and G. Lusztig, Representations of Coxeter Groups and Hecke Algebras, Invent.Math. 53 (1979), 165–184.

[18] D. Kazhdan and G. Lusztig, Schubert varieties and Poincaré duality, Proc. Symp. Pure Math. A.M.S. 36 (1980), 185–203.


[19] D. Knuth, Permutations, matrices, and generalized Young tableaux, Pacific Journal of Math. 34 (1970), 709–727.

[20] D. Knuth, The Art of Computer Programming, Addison–Wesley, Reading MA, 1975.

[21] G. Lusztig, Left Cells in Weyl Groups, in Lie Group Representations I (R. L. R. Herb and J.Rosenberg, eds.), Lecture Notes in Math., 1024, Springer-Verlag (1983), 99–111.

[22] G. Lusztig, On a theorem of Benson and Curtis, J. Algebra 71 (1981), 490–498.

[23] I. G. Macdonald, Symmetric Functions and Hall Polynomials, Oxford University Press, Oxford, 1995.

[24] A. Mathas, The Iwahori-Hecke algebras and Schur algebras of the symmetric group, UniversityLecture Series, 15 (1999), American Mathematical Society.

[25] G. de. B. Robinson, On the representations of the symmetric groups, American Journal ofMath. 60 (1938), 745–760.

[26] C. Schensted, Longest increasing and decreasing sequences, Canadian Journal of Math. 13(1961), 179–191.

[27] M.-P. Schützenberger, La correspondance de Robinson, Lecture Notes in Math. 579 (1977), Springer-Verlag.

[28] J.-Y. Shi, The Kazhdan-Lusztig Cells in Certain Affine Weyl Groups, Lecture Notes in Math.1179 (1986), Springer-Verlag.

[29] W. Soergel, Kazhdan-Lusztig Polynomials and a Combinatoric for Tilting Modules, Represen-tation Theory, 1 (1997), 83–114.

[30] R. P. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge Studies in Advanced Mathematics, 62 (1999), Cambridge University Press.

[31] R. Steinberg, Lectures on Chevalley groups, Yale Lecture Notes, Yale University, 1967.
