
M3/4/5P12 Group Representation Theory

Rebecca Bellovin

April 10, 2017

1 Administrivia

Comments, complaints, and corrections:

E-mail me at [email protected]. The course website is wwwf.imperial.ac.uk/~rbellovi/teaching/m3p12.html; there will be no Blackboard page.

Office hours:

Thursdays 4:30-5:30, starting January 19.

Problem sessions:

Problem sessions will be on the Tuesday session of odd-numbered weeks, starting on January 24.

Other reading:

Previous versions of this course have been taught by James Newton (https://nms.kcl.ac.uk/james.newton/M3P12.html), Ed Segal (http://wwwf.imperial.ac.uk/~epsegal/repthy.html), and Matthew Towers (https://sites.google.com/site/matthewtowers/m3p12). Lecture notes and sample problem sheets are available from their websites; we will cover similar material, though somewhat rearranged.

Recommended reference books include:

• G. James and M. Liebeck, Representations and Characters of Groups.

• J.-P. Serre, Linear Representations of Finite Groups.

• J.L. Alperin, Local Representation Theory.


2 Introduction to representations

2.1 Motivation

Our ultimate goal is to study groups (and in this course, we will only study finite groups). Recall:

Definition 2.1. A group G is a set equipped with an associative multiplication map G × G → G such that there is an identity element e (i.e., e·g = g·e = g for all g ∈ G) and every element has an inverse (i.e., for every g ∈ G, there is some g−1 ∈ G so that g·g−1 = g−1·g = e).

Groups generally show up as symmetries of objects we are interested in, but it is difficult to study them directly. For example, it is much more fruitful to view D8 as the symmetry group of the square than to write down its multiplication table.

Theorem 2.2. Let G be a group with |G| = p^a q^b for primes p, q, where |G| is not prime. Then G has a proper non-trivial normal subgroup.

The easiest proof is via techniques from representation theory, but it is not obvious how to attack it by “bare hands”. The proof requires a bit of algebraic number theory, so we may not be able to cover it, but the proof is in James and Liebeck’s book.

There are also applications to physics and chemistry: J.-P. Serre wrote his book because his wife was a chemist.

The structure of the course will be:

1. Representations: definitions and basic structure theory

2. Character theory

3. Group algebras

Since we understand linear algebra much better than abstract group theory, we will attempt to turn groups into linear algebra.

Informally, a representation will be a way of writing elements of a group as matrices. For example, let G = C4 = {e, g, g^2, g^3}, with g^4 = e. Consider also the 2 × 2 matrices

I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad M = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad M^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \quad M^3 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}

Since M^4 = I, these matrices form a cyclic group of order 4, and g ↦ M defines an isomorphism with C4. Thus, we have embedded C4 as a subgroup of GL2(C).

We further observe that M can be diagonalized: its characteristic polynomial is det(λI − M) = λ^2 + 1, which has two distinct roots, ±i. The eigenspaces are generated by (1, −i)^t and (1, i)^t, so after changing basis, we see that our embedding is actually built out of two embeddings C4 → C^×, given by g ↦ i and g ↦ −i.
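For readers who like to experiment, this diagonalization is easy to check numerically. The following short Python (numpy) sketch is only an illustration, not part of the notes; the variable names are ours.

```python
import numpy as np

# M represents the generator g of C4; diagonalizing it recovers the two
# one-dimensional pieces g -> i and g -> -i described above.
M = np.array([[0, -1],
              [1,  0]], dtype=complex)

# M has order 4: M^4 = I
assert np.allclose(np.linalg.matrix_power(M, 4), np.eye(2))

# Its eigenvalues are +i and -i, and conjugating by the matrix of eigenvectors
# turns the embedding C4 -> GL2(C) into a pair of embeddings C4 -> C^x.
eigenvalues, P = np.linalg.eig(M)
D = np.linalg.inv(P) @ M @ P
print(eigenvalues)        # the eigenvalues ±i, in some order
print(np.round(D, 10))    # a diagonal matrix with entries ±i
```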


2.2 Definitions

2.2.1 First definitions

Recall from last time: We defined a group homomorphism C4 → GL2(C) by sending a generator g ↦ M = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. If P = \begin{pmatrix} i & -1 \\ i & 1 \end{pmatrix}, then M′ = PMP−1 = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}.

Definition 2.3 (Preliminary version). Let G be a group. A representation of G is a homomorphism ρ : G → GLd(C) (so ρ(gh) = ρ(g)ρ(h)). That is, for every g ∈ G we have a matrix ρ(g), and ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G. We call d the dimension of ρ.

Two representations ρ, ρ′ : G → GL(Cd) are equivalent if there is some matrix P ∈ GL(Cd) such that ρ′(g) = Pρ(g)P−1 for all g ∈ G.

Observe that our definition implies that if e ∈ G is the identity, then ρ(e) = Id, the d × d identity matrix. For any g ∈ G, we further have that ρ(g−1) = ρ(g)−1.

Example 2.4. 1. Suppose that ρ(g) = Id for all g ∈ G. This is called the trivial representation.

2. Let d = |G| and let the elements of the standard basis correspond to elements of G. That is, choose some ordering of the elements of G and let {eg1, . . . , egd} be the standard basis of Cd. Define a representation by setting ρ(g)(eh) = egh for all g, h ∈ G. This is called the regular representation.

3. Let G = Cn, the cyclic group of order n with generator g, and let ρ : G → GL1(C) be defined by sending g to some nth root of unity ζ. For example, we could set ζ = e^{2πi/n}. But any choice of ζ = e^{2πia/n} works as well. To be more concrete, if n = 4, we could send g to i, −1, −i, or 1; the last case is the trivial 1-dimensional representation.

4. Let G = Sn, the group of permutations of n elements. Let {e1, . . . , en} be the standard basis of Cn; we define a representation by setting ρ(g)(ei) = eg(i). This is called the permutation representation.

5. More generally, let X be a set with a left-action of G; recall that this means there is an “action map” G × X → X (which we write (g, x) ↦ g·x) such that (gh)·x = g·(h·x). Let d = |X| and let elements of the standard basis of Cd correspond to elements of X; write the basis {ex}x∈X. Then we define a representation ρ : G → GLd(C) by setting ρ(g)(ex) = eg·x.

If X = G, this construction recovers the regular representation. If G = Sn and X = {1, . . . , n}, it recovers the permutation representation.

Note that we do not require a representation to be an injective homomorphism! In fact, for the trivial representation, the kernel is all of G. For the 1-dimensional representations of Cn we wrote down earlier, ρ is injective if and only if ζ is a primitive nth root of 1, that is, ζ^n = 1 and n is the smallest positive integer with this property. If ρ is injective, we say that it is a faithful representation.
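To make the permutation representation of Example 2.4(4) concrete, here is an illustrative Python sketch (numpy and itertools; all names are ours) that builds the permutation matrices for S3 and checks the homomorphism property ρ(gh) = ρ(g)ρ(h).

```python
import numpy as np
from itertools import permutations

n = 3

def rho(perm):
    """Matrix sending the basis vector e_i to e_{perm(i)}."""
    d = len(perm)
    m = np.zeros((d, d))
    for i, j in enumerate(perm):
        m[j, i] = 1          # column i has a 1 in row perm(i)
    return m

def compose(g, h):
    """The permutation g*h, acting as (g*h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(len(h)))

group = list(permutations(range(n)))
for g in group:
    for h in group:
        assert np.allclose(rho(compose(g, h)), rho(g) @ rho(h))
print("rho is a homomorphism on all", len(group) ** 2, "pairs")
```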


2.2.2 Cleaner definitions

Recall from linear algebra that every finite-dimensional vector space has a basis, but a basis is not unique. Moreover, if g ∈ G and ρ : G → GLd(C) is a representation, we can view ρ(g) as an invertible linear transformation V → V, where V is a d-dimensional complex vector space. Thus, it is cleaner to rewrite our definitions as follows:

Definition 2.5. A representation of G is a pair (V, ρ) where V is a finite-dimensional complex vector space, and ρ : G → GL(V) is a homomorphism.

Recall that GL(V) := {invertible linear maps f : V → V}.

If we pick a basis of V, we may write ρ in terms of matrices as before:

Choose a basis B = (b1, . . . , bd) of V. This choice of basis gives us an isomorphism Cd ≅ V, and therefore an isomorphism GLd(C) ≅ GL(V). We therefore have a representation ρB : G → GLd(C), in the sense of the previous lecture.

If we need to distinguish the two notions of representations, we will refer to homomorphisms ρ : G → GLd(C) as matrix representations.

Lemma 2.6. Let ρ : G → GL(V) be a representation, and let B = (b1, . . . , bd) and B′ = (b′1, . . . , b′d) be bases of V. Then ρB and ρB′ are equivalent. Conversely, if ρ1, ρ2 : G → GLd(C) are equivalent matrix representations, there is a representation (V, ρ) such that ρ1 and ρ2 can be obtained by choosing bases B1 and B2 of V.

Proof. Let P denote the change-of-basis matrix taking B to B′, that is, Pbi = b′i. Then for every g ∈ G, ρB′(g) = PρB(g)P−1, so ρB and ρB′ are equivalent.

For the converse, suppose that ρ1 and ρ2 are equivalent, and let P ∈ GLd(C) be a matrix such that ρ2(g) = Pρ1(g)P−1 for all g ∈ G. Let V be the underlying vector space of Cd. Then we can think of ρ1 as a representation ρ : G → GL(V) by forgetting about the standard basis on Cd. If we let B1 denote the standard basis of Cd, then ρ1 = ρB1 (because we forgot the standard basis and then remembered it again).

Now let B2 denote the basis of Cd given by the columns of P−1. Then for every g ∈ G, the matrix of ρ(g) with respect to B2 is given by Pρ1(g)P−1, as desired.

2.3 Homomorphisms of representations

Let (V, ρV ) and (W, ρW ) be representations of G.

Definition 2.7. A linear map f : V → W is G-linear if f ◦ ρV(g) = ρW(g) ◦ f for every g ∈ G.

Recall that a map f : V → W is linear if f(av1 + bv2) = af(v1) + bf(v2) for all a, b ∈ C and v1, v2 ∈ V.


In other words, for every element g ∈ G, we have a commutative square:

    V --ρV(g)--> V
    |            |
    f            f
    ↓            ↓
    W --ρW(g)--> W

Definition 2.8. Let (V, ρV) and (W, ρW) be representations of G. A map f : V → W is said to be a homomorphism of representations if it is G-linear, and it is said to be an isomorphism of representations if it is G-linear and invertible.

Exercise 2.9. Check that if f is an isomorphism of representations, so is f−1.

Proposition 2.10. Let (V, ρV) and (W, ρW) be representations of G, and let BV and BW be bases of V and W, respectively. If f : V → W is a linear map and [f]BV,BW denotes the matrix representing f with respect to the chosen bases, then f is G-linear if and only if [f]BV,BW ρV,BV(g) = ρW,BW(g) [f]BV,BW for all g ∈ G.

Proof. This follows by unwinding the definitions: trace through the commutative diagram above, and write down the matrix associated to each map.

Corollary 2.11. Let (V, ρV) and (W, ρW) be d-dimensional representations of G, and let BV and BW be bases of V and W, respectively. Then (V, ρV) and (W, ρW) are isomorphic if and only if ρV,BV and ρW,BW are equivalent matrix representations.

When we discussed the regular representation, there was some question about how to enumerate the elements of G. It doesn’t matter, and the reason it doesn’t matter is that re-ordering the elements of G is just changing the basis of the underlying vector space. So different orderings of the elements of G give different matrix representations but isomorphic abstract representations.

2.4 Direct sums and indecomposable representations

Let us begin with an example, namely the regular representation of C2 = {e, g}. This is a 2-dimensional representation (Vreg, ρreg), and we choose the basis (be, bg); the action of C2 is given by ρreg(g)(be) = bg and ρreg(g)(bg) = be.

We also have two 1-dimensional representations, (V0, ρ0) and (V1, ρ1), given in coordinates by ρi(g) = (−1)^i ∈ GL1(C).

It turns out we can define homomorphisms (Vi, ρi) → (Vreg, ρreg):

• Choose a basis element v0 ∈ V0, and define a linear transformation V0 → Vreg via v0 ↦ be + bg. Since ρreg(g)(be + bg) = be + bg (exercise!), this map is C2-linear.


• Choose a basis element v1 ∈ V1, and define a linear transformation V1 → Vreg via v1 ↦ be − bg. Since ρreg(g)(be − bg) = bg − be = −(be − bg) (exercise!), this map is C2-linear.

Thus, we have a linear transformation V0 ⊕ V1 → Vreg, and since 〈be + bg〉 ∩ 〈be − bg〉 = {0}, it is an isomorphism.

We will see that we can define a representation (V0 ⊕ V1, ρ0 ⊕ ρ1) so that this is actually an isomorphism of representations.

Recall that if V, W are finite-dimensional complex vector spaces, their direct sum V ⊕ W consists of ordered pairs (v, w) with v ∈ V, w ∈ W. Defining addition by (v1, w1) + (v2, w2) := (v1 + v2, w1 + w2) and scaling by a·(v, w) := (a·v, a·w) for a ∈ C makes V ⊕ W into another finite-dimensional complex vector space.

If we have finite-dimensional complex vector spaces V1, V2, we may define the direct sum V1 ⊕ V2. If we have linear transformations f1 : V1 → W and f2 : V2 → W, where W is another finite-dimensional complex vector space, we may define a new linear transformation f1 ⊕ f2 : V1 ⊕ V2 → W via (f1 ⊕ f2)(v1, v2) := f1(v1) + f2(v2).

If we are given linear transformations TV : V → V and TW : W → W, we may also define a new linear transformation TV ⊕ TW : V ⊕ W → V ⊕ W by setting (TV ⊕ TW)(v, w) := (TV(v), TW(w)). This permits us to define the direct sum of two group representations:

Definition 2.12. Let (V, ρV) and (W, ρW) be representations of a finite group G. The direct sum (V ⊕ W, ρV ⊕ ρW) is defined by setting (ρV ⊕ ρW)(g) := ρV(g) ⊕ ρW(g) ∈ GL(V ⊕ W).

Suppose that BV = (v1, . . . , vdV) and BW = (w1, . . . , wdW) are bases for V and W, respectively, and let ρV,BV and ρW,BW be the associated matrix representations. Then V ⊕ W is (dV + dW)-dimensional, and

((v1, 0), . . . , (vdV, 0), (0, w1), . . . , (0, wdW))

is a basis. The matrix representation associated to ρV ⊕ ρW with respect to this basis is given by

\begin{pmatrix} ρV,BV(g) & 0 \\ 0 & ρW,BW(g) \end{pmatrix}

for each g ∈ G.

Proposition 2.13. The map V0 ⊕ V1 → Vreg we defined earlier is an isomorphism of representations of C2.

Proof. It is an isomorphism of vector spaces, so it is enough to prove that it is C2-linear. But this follows from the definition of ρ0 ⊕ ρ1.

We can also see this on the level of matrices: The matrix representation associated to ρ0 ⊕ ρ1 with respect to ((v0, 0), (0, v1)) is given by

(ρ0 ⊕ ρ1)(e) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad (ρ0 ⊕ ρ1)(g) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}


On the other hand, the matrix representation associated to ρreg with respect to the basis (be, bg) is given by

ρreg(e) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad ρreg(g) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

Since the underlying representations are isomorphic, their matrix representations are equivalent, so there is some P ∈ GL2(C) such that \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = P \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} P−1. Taking P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} suffices.

Definition 2.14. A representation (V, ρV) of G is decomposable if it is isomorphic (as a representation) to the direct sum of smaller representations, i.e., if there are non-zero representations (V1, ρ1) and (V2, ρ2) such that (V, ρV) ≅ (V1 ⊕ V2, ρ1 ⊕ ρ2). A representation is indecomposable if no such decomposition exists.

We also make the following observation: Suppose that (V, ρV) and (W, ρW) are representations of G. We have a linear map V → V ⊕ W which is injective, and it is G-linear. So V ⊕ W actually contains subspaces which are preserved by the linear transformations (ρV ⊕ ρW)(g).

2.5 Subrepresentations and Irreducible Representations

Before we define subrepresentations, recall the notion of a vector subspace:

Definition 2.15. If V is a finite-dimensional complex vector space, a subspace of V is a subset W ⊂ V which is itself a complex vector space.

In particular, W must be a group under addition (so 0 ∈ W and if w ∈ W, so is −w), and W must be preserved under scaling (i.e., if w ∈ W and λ ∈ C, then λw ∈ W).

For example, {0} ⊂ V is a subspace. As another (more interesting) example, suppose we have a linear transformation f : V1 → V2. Then we can define the kernel ker(f) := {v ∈ V1 : f(v) = 0} and the image im(f) := {f(v) ∈ V2 : v ∈ V1}. The kernel is a vector subspace of V1 and the image is a vector subspace of V2.

We can specify a subspace of V by giving a set of generators: Let {v1, . . . , vn} be a subset of V. The span of {v1, . . . , vn}, or the subspace generated by {v1, . . . , vn}, is the set

W := 〈v1, . . . , vn〉 := {a1v1 + . . . + anvn : ai ∈ C}

Then W is a subset of V and a vector space, so it is a subspace of V. Note that we do not assume that the vi are linearly independent! They generate W but need not form a basis of W.

Let T : V → V be a linear transformation, and let W ⊂ V be a subspace. We say that T stabilizes or preserves W if T(w) ∈ W for every w ∈ W. In other words, T carries W to itself. Thus, it also defines a linear transformation W → W, which we denote T|W. We call this the restriction of T. If W is 1-dimensional, then T preserves W if and only if W consists of eigenvectors for T, i.e., if there is some λ ∈ C such that T(w) = λw for every w ∈ W.


We can describe this in terms of matrices: Suppose that BV := (w1, . . . , wdW, vdW+1, . . . , vdV) is a basis of V such that BW := (w1, . . . , wdW) is a basis of W (exercise: check that we can always find such a basis!). Let T : V → V be a linear transformation which preserves W. Then the matrix [T]BV is of the form

\begin{pmatrix} [T|W]BW & ∗ \\ 0 & ∗ \end{pmatrix}

Now we can define subrepresentations:

Definition 2.16. Let (V, ρV) be a representation of G. A subrepresentation is a vector subspace W ⊂ V such that ρV(g) : V → V preserves W for each g ∈ G.

Thus, (W, {ρV(g)|W}) is another representation of G (since ρV(g) is invertible on V, ρV(g)|W is invertible on W).

The zero subspace {0} ⊂ V is a subrepresentation of V, but not a very interesting one.

Suppose that (V1, ρ1) and (V2, ρ2) are representations of a finite group G, and suppose that we have a homomorphism of representations f : V1 → V2. Then we may again define its kernel and image:

ker(f) := {v ∈ V1 : f(v) = 0},    im(f) := {f(v) ∈ V2 : v ∈ V1}

Then ker(f) is a subrepresentation of V1 and im(f) is a subrepresentation of V2. Since both of these are vector subspaces, it is enough to prove that they are preserved by ρ1(g) and ρ2(g), respectively. But that follows from the G-linearity of f.

We considered representations of C2 earlier, and we defined a C2-linear homomorphism V0 → Vreg (where V0 is equipped with the trivial representation). This homomorphism is injective, and its image is the vector subspace 〈be + bg〉 ⊂ Vreg. Thus, this is a subrepresentation of Vreg.

More generally, if (V, ρV) is a representation of a finite group G, a 1-dimensional subrepresentation of V is a 1-dimensional subspace W = 〈w〉 ⊂ V such that w is an eigenvector for every ρV(g).

Definition 2.17. A representation (V, ρV) of G is reducible if there is a non-zero proper subrepresentation W ⊂ V, i.e., if 0 ≠ W ≠ V. We say that V is irreducible if there is no such subrepresentation.

Example 2.18. Any 1-dimensional representation is irreducible.

Lemma 2.19. If (V, ρV) is an irreducible representation of a finite group G, then it is indecomposable.

Proof. Suppose that V is decomposable. Then V ≅ V1 ⊕ V2 with V1, V2 ≠ 0, and there are representations ρi : G → GL(Vi) such that (V, ρV) ≅ (V1 ⊕ V2, ρ1 ⊕ ρ2). But this implies that there are injective G-linear maps fi : Vi → V ≅ V1 ⊕ V2. Then im(f1) and im(f2) are subrepresentations of V, and they are non-zero because V1 and V2 were assumed non-zero. But since dim V = dim V1 + dim V2 and dim Vi ≠ 0, we have dim Vi < dim V, so im(fi) ≠ V. Thus, im(f1) and im(f2) are non-zero proper subrepresentations of V, so V is reducible.


In terms of matrices, we think of being decomposable as being able to choose a basis so that the matrix representation has blocks down the diagonal and zeros elsewhere. We think of being reducible as being able to choose a basis so that the matrix representation is block upper-triangular. For example, if (V, ρV) is a 3-dimensional representation with a 2-dimensional subrepresentation, we can find a basis of V so that

ρV(g) = \begin{pmatrix} ∗ & ∗ & ∗ \\ ∗ & ∗ & ∗ \\ 0 & 0 & ∗ \end{pmatrix}

for every g ∈ G.

Example 2.20. We give an example of an irreducible 2-dimensional representation of S3 ≅ D6. Embed an equilateral triangle into R2 with vertices at (1, 0), (−1/2, √3/2), and (−1/2, −√3/2). Then “counterclockwise rotation by 120°” and “reflection over the x-axis” generate D6, so using the standard basis of R2 ⊂ C2, we get a homomorphism ρ : D6 → GL2(R) ⊂ GL2(C). The matrix for “counterclockwise rotation” is

\begin{pmatrix} -1/2 & -√3/2 \\ √3/2 & -1/2 \end{pmatrix}

and the matrix for “reflection over the x-axis” is \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

To see that this representation is irreducible, we need to check that these two matrices do not have a common eigenvector (a common eigenvector would span a 1-dimensional subrepresentation). But the eigenspaces of \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} are generated by \begin{pmatrix} 1 \\ 0 \end{pmatrix} and \begin{pmatrix} 0 \\ 1 \end{pmatrix}, and they have different eigenvalues. Since neither of these is an eigenvector of \begin{pmatrix} -1/2 & -√3/2 \\ √3/2 & -1/2 \end{pmatrix}, we are done.
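The eigenvector check can also be done numerically; here is an illustrative Python sketch (numpy; the tolerance and names are ours) confirming that neither standard basis vector is an eigenvector of the rotation matrix.

```python
import numpy as np

rot = np.array([[-0.5, -np.sqrt(3) / 2],
                [np.sqrt(3) / 2, -0.5]])   # counterclockwise rotation by 120 degrees
ref = np.array([[1.0, 0.0],
                [0.0, -1.0]])              # reflection over the x-axis

# The eigenvectors of the reflection are e_1 and e_2; v is an eigenvector of rot
# iff rot @ v is parallel to v, i.e. the 2x2 determinant below vanishes.
for v in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    image = rot @ v
    assert abs(np.linalg.det(np.column_stack([v, image]))) > 1e-9
print("no common eigenvector: the 2-dimensional representation is irreducible")
```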

2.6 Maschke’s theorem

We can now start studying the structure of representations. Previously, we defined direct sums of representations and decomposable representations, and we defined subrepresentations and irreducible representations. We showed that irreducible representations are indecomposable. The goal for today is to show the converse:

Theorem 2.21 (Maschke’s Theorem). If (V, ρV) is a reducible representation of a finite group G, then it is decomposable.

The finiteness hypothesis on G is crucial here! If G = Z, we may define a representation

ρ : G → GL2(C), \quad ρ(n) := \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}

Then the only non-zero proper subrepresentation is the one generated by \begin{pmatrix} 1 \\ 0 \end{pmatrix}. Thus, this representation is reducible but indecomposable.

We formalize this argument somewhat. Recall the following definition:

Definition 2.22. Let V be a finite-dimensional complex vector space, and let W ⊂ V be a subspace. A complement of W is a subspace W′ ⊂ V such that the natural map W ⊕ W′ → V, i.e., the map defined by (w, w′) ↦ w + w′, is an isomorphism.


Note that complements are not unique. In fact, any subspace W′ ⊂ V with W ∩ W′ = {0} and dim W′ = dim V − dim W is a complement of W.

We make a similar definition for representations:

Definition 2.23. Let (V, ρV) be a representation of a finite group G and let W ⊂ V be a subrepresentation. A subrepresentation W′ ⊂ V is complementary to W if the natural map W ⊕ W′ → V given by (w, w′) ↦ w + w′ induces an isomorphism of representations (W ⊕ W′, ρW ⊕ ρW′) ≅ (V, ρV). That is, W′ is complementary as a subspace, and is also a subrepresentation.

Thus, we can rephrase Maschke’s theorem as the statement that if G is finite, every subrepresentation has a complementary subrepresentation. Before we prove it, we state a corollary:

Corollary 2.24. Every finite-dimensional complex representation (V, ρV) of a finite group G is isomorphic to a direct sum

V ≅ V1 ⊕ V2 ⊕ · · · ⊕ Vr

where each (Vi, ρi) is an irreducible representation of G.

Proof. This follows by induction on the dimension of V. If V is 1-dimensional, we have already seen that V is irreducible. Now let V be an arbitrary representation of G and suppose we know the result for all representations of dimension less than dim V. Then V is either an irreducible representation, or V ≅ W ⊕ W′ (as representations) with W and W′ non-zero proper subrepresentations, by Theorem 2.21. But dim W, dim W′ < dim V, so W and W′ are isomorphic to direct sums of irreducible representations. Therefore, V is also isomorphic to a direct sum of irreducible representations.

We will prove Maschke’s theorem by constructing complementary subrepresentations.

Lemma 2.25. Let V be a finite-dimensional complex vector space and let f : V → V be a linear transformation such that f ◦ f = f. Then ker(f) ⊂ V and im(f) ⊂ V are complementary subspaces.

Let (V, ρV) be a representation of a finite group G and let f : V → V be a homomorphism of representations such that f ◦ f = f. Then ker(f) and im(f) are complementary subrepresentations of V.

Proof. We have seen that ker(f) and im(f) are subspaces of V, and, if V is a representation and f is G-linear, that they are subrepresentations. By the rank-nullity theorem, dim ker(f) + dim im(f) = dim V, so it suffices to show that ker(f) ∩ im(f) = {0}. So choose some v ∈ ker(f) ∩ im(f), so that v = f(v′) for some v′ and f(v) = 0. But we assumed f ◦ f = f, so

0 = f(v) = f(f(v′)) = f(v′) = v

Thus, v = 0, as desired.


Given a finite-dimensional vector space V and a subspace W ⊂ V, any linear transformation f : V → W with f|W = id is an example of a map as in the lemma. We call this a projection from V to W. We can build a projection explicitly as follows: Choose a basis BV = (w1, . . . , wdW, vdW+1, . . . , vdV) with BW = (w1, . . . , wdW) a basis for W. Then we define f : V → W to be the linear transformation associated to the matrix

\begin{pmatrix} IddW & 0 \\ 0 & 0 \end{pmatrix}

where the upper-left block is the dW × dW identity matrix, and there are zeros everywhere else.

We see that it is not hard to construct complementary subspaces, but we need to construct complementary subspaces that are also preserved by G. For this, we need a way to construct G-linear projections.

Definition 2.26. Let V, W be finite-dimensional complex vector spaces. We define

Hom(V, W) := {f : V → W : f is a linear transformation}

Since we can add and subtract linear transformations and scale them by complex numbers, Hom(V, W) is again a finite-dimensional vector space, of dimension dim V · dim W.

If V and W are additionally representations of a finite group G, we can make Hom(V, W) into a representation of G:

Definition 2.27. Let (V, ρV) and (W, ρW) be finite-dimensional complex representations of a finite group G. We define an action of G on Hom(V, W) by setting g · f : V → W to be (g · f)(v) = (ρW(g) ◦ f ◦ ρV(g−1))(v) for all g ∈ G, v ∈ V.

It is straightforward to check that for each g ∈ G, the map f ↦ g · f is a linear transformation Hom(V, W) → Hom(V, W), and that this defines a representation of G on Hom(V, W).

Note that we do not require elements of Hom(V, W) to be G-linear, even when V and W are representations of G. In fact:

Lemma 2.28. A linear transformation f : V → W is G-linear if and only if g · f = f for all g ∈ G.

Proof. Exercise.

Given a representation (V, ρV), the subset V^G := {v ∈ V : ρV(g)(v) = v for all g ∈ G} is a subrepresentation of V on which G acts trivially. Thus, the inclusion V^G → V is a G-linear map. We can also define a G-linear homomorphism the other way:

Lemma 2.29. The map e : V → V^G defined by setting e(v) := (1/|G|) ∑_{g∈G} ρV(g)(v) is G-linear. In addition, it satisfies e ◦ e = e.


Proof. We first check that for any v ∈ V, e(v) ∈ V^G. Let g′ ∈ G. Then

ρV(g′)(e(v)) = (1/|G|) ∑_{g∈G} ρV(g′)ρV(g)(v) = (1/|G|) ∑_{g∈G} ρV(g′g)(v) = (1/|G|) ∑_{g∈G} ρV(g)(v) = e(v)

where the second-to-last equality follows because the set {g′g : g ∈ G} is all of G.

Furthermore, if v ∈ V^G, then

e(v) = (1/|G|) ∑_{g∈G} ρV(g)(v) = (1/|G|) ∑_{g∈G} v = (1/|G|) · |G| · v = v

Thus, e : V → V^G is the identity on V^G, so e ◦ e = e.

It remains to check that e is G-linear. The first part of the proof implies that ρV(g′)(e(v)) = e(v) for all g′ ∈ G and all v ∈ V, so we need to show that e(ρV(g′)v) = e(v) for all g′ ∈ G and all v ∈ V. But

e(ρV(g′)v) = (1/|G|) ∑_{g∈G} ρV(g)(ρV(g′)v) = (1/|G|) ∑_{g∈G} ρV(gg′)(v) = (1/|G|) ∑_{g∈G} ρV(g)(v) = e(v)

where we use again that {gg′ : g ∈ G} = G.

Proof of Theorem 2.21. Let (V, ρV) be a representation of a finite group G, and let W be a subrepresentation. Choose a projection (of vector spaces, not of representations) f : V → W. There is no reason for f to be G-linear, so we modify it.

Define f̄ : V → W by setting

f̄ := e(f) = (1/|G|) ∑_{g∈G} g · f

We can write f̄ down more explicitly, by unwinding the definition of the action of G on Hom(V, W):

f̄(v) = (1/|G|) ∑_{g∈G} (ρV(g) ◦ f ◦ ρV(g−1))(v)

By construction, f̄ ∈ Hom(V, W)^G, so f̄ is G-linear.

We need to check further that f̄|W : W → W is the identity map. But if w ∈ W, then ρV(g−1)(w) ∈ W since the action of G preserves W. We also chose f : V → W to be the identity on W, so

f̄(w) = (1/|G|) ∑_{g∈G} (ρV(g) ◦ f ◦ ρV(g−1))(w) = (1/|G|) ∑_{g∈G} ρV(g)(ρV(g−1)(w)) = (1/|G|) ∑_{g∈G} w = w

as desired.

Finally, we set W′ := ker(f̄). Since f̄ is G-linear, W′ is a subrepresentation of V. Since im(f̄) = W and f̄|W is the identity, we have f̄ ◦ f̄ = f̄, so by Lemma 2.25, W = im(f̄) and W′ = ker(f̄) are complementary subrepresentations of V, and we are done.
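The averaging trick in this proof is easy to experiment with on a computer. The following Python sketch (numpy) is only an illustration; the choice of group (C3 acting on C^3 by cyclic permutation), of the trivial subrepresentation W, and of the starting projection are ours.

```python
import numpy as np

g = np.roll(np.eye(3), 1, axis=0)              # rho(g): cyclic permutation matrix
group = [np.linalg.matrix_power(g, k) for k in range(3)]
w = np.ones(3)                                 # W is spanned by (1, 1, 1)

# A projection V -> W that is NOT G-linear: v -> v[0] * w
f = np.outer(w, np.array([1.0, 0.0, 0.0]))

# Average over the group: f_bar = (1/|G|) sum_g rho(g) f rho(g)^{-1}
f_bar = sum(h @ f @ np.linalg.inv(h) for h in group) / len(group)

# f_bar is G-linear, restricts to the identity on W, and f_bar o f_bar = f_bar,
# so ker(f_bar) is a complementary subrepresentation.
assert all(np.allclose(f_bar @ h, h @ f_bar) for h in group)
assert np.allclose(f_bar @ w, w)
assert np.allclose(f_bar @ f_bar, f_bar)
```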


2.7 Schur’s lemma

Now that we have shown that representations of finite groups can be decomposed as the direct sum of irreducible representations, we wish to study homomorphisms between irreducible representations in more detail. The key result is the following:

Theorem 2.30 (Schur’s Lemma). Let (V, ρV) and (W, ρW) be irreducible finite-dimensional complex representations of a finite group G.

1. If f : V → W is G-linear, then f is either 0 or an isomorphism.

2. Let f : V → V be a G-linear map. Then f = λ·1V for some λ ∈ C. That is, f is multiplication by a scalar.

Proof. 1. Recall that if f is G-linear, both ker(f) ⊂ V and im(f) ⊂ W are subrepresentations. Since V and W are assumed irreducible, we must have either ker(f) = {0} (in which case f is injective) or ker(f) = V (in which case f = 0), and we must have either im(f) = 0 (in which case f = 0) or im(f) = W (in which case f is surjective). So either f = 0 or f is both injective and surjective, i.e., an isomorphism.

2. Certainly for any λ ∈ C, λ·1V : V → V is G-linear. Now for any λ ∈ C and any G-linear f : V → V, consider the linear transformation λ·1V − f : V → V. This is clearly G-linear as well, so by part 1, either λ·1V − f = 0 or λ·1V − f is an isomorphism. But λ·1V − f fails to be an isomorphism if and only if λ is a root of the characteristic polynomial of the matrix representing f (with respect to some basis of V). Since any polynomial over C of positive degree has a root, there is some λ ∈ C for which λ·1V − f is not an isomorphism; for this λ we must have λ·1V − f = 0, i.e., f = λ·1V.

We can use Schur’s lemma to classify representations of finite abelian groups:

Lemma 2.31. Let G be a finite abelian group and let (V, ρV) be a finite-dimensional complex representation. For any g ∈ G, ρV(g) : V → V is a G-linear homomorphism.

Proof. We need to check that for any g′ ∈ G, ρV(g′) ◦ ρV(g) = ρV(g) ◦ ρV(g′). But this follows because G is abelian.

Corollary 2.32. Let (V, ρV) be a non-zero irreducible finite-dimensional complex representation of a finite abelian group G. Then V is 1-dimensional.

Proof. For each element g ∈ G, ρV(g) : V → V is G-linear. Since V is irreducible, Theorem 2.30 implies that there is some λg ∈ C such that ρV(g) = λg·1V. But this implies that every element of V is an eigenvector for every ρV(g), and therefore every non-zero element generates a 1-dimensional subrepresentation of V. But this contradicts the irreducibility of V unless V is itself 1-dimensional.


3 Uniqueness of decomposition

Now that we have Schur’s lemma, we are almost ready to prove that the decomposition of representations into irreducibles (“irreps”) is unique. That is, we would like to prove:

Theorem 3.1. Let (V, ρV) be a finite-dimensional complex representation of a finite group G, and let V ≅ V1 ⊕ · · · ⊕ Vr and V ≅ V′1 ⊕ · · · ⊕ V′r′ be two decompositions of V into irreducible subrepresentations. Then r = r′ and the decompositions are the same, up to reordering the irreducible subrepresentations.

Proof. We first show that for each i, one of the V′j is isomorphic to Vi. This shows that the sets of irreducible subrepresentations are the same. Let ej : V → V′j denote the projection to the jth factor; this is G-linear by construction, and e1 + . . . + er′ : V → V is the identity map. Now by assumption, there is a non-zero G-linear map fi : Vi → V, so (e1 + · · · + er′) ◦ fi = fi is non-zero and G-linear. If each ej ◦ fi were zero, their sum would be as well. Thus, there is some non-zero G-linear map ej ◦ fi : Vi → V′j, and since Vi and V′j are irreducible, Schur’s lemma implies it is an isomorphism.

We rewrite our two decompositions as V ≅ ⊕_i Vi^{⊕ri} and V ≅ ⊕_i Vi^{⊕si}, where the Vi are distinct irreducible representations of G. We need to prove that ri = si for all i. But for any j,

Hom(Vj, ⊕_i Vi^{⊕ri})^G ≅ C^{rj}

and similarly

Hom(Vj, ⊕_i Vi^{⊕si})^G ≅ C^{sj}

Thus, rj = sj for all j.

We need to say a bit more about homomorphisms of representations first.

Lemma 3.2. Let V , V ′, and W be vector spaces. Then we have natural isomorphisms

1. Hom(W,V ⊕ V ′) ∼= Hom(W,V )⊕ Hom(W,V ′)

2. Hom(V ⊕ V ′,W ) ∼= Hom(V,W )⊕ Hom(V ′,W )

If V, V′, and W carry representations of a finite group G, these are isomorphisms of representations.

Proof. First of all, note that all of these spaces are complex vector spaces of dimension dim W · (dim V + dim V′). Second, recall that we have homomorphisms

i_V : V → V ⊕ V′,  e_V : V ⊕ V′ → V,  i_{V′} : V′ → V ⊕ V′,  e_{V′} : V ⊕ V′ → V′

given by

i_V(v) = (v, 0);  e_V(v, v′) = v;  i_{V′}(v′) = (0, v′);  e_{V′}(v, v′) = v′

It follows that i_V ◦ e_V + i_{V′} ◦ e_{V′} = 1 : V ⊕ V′ → V ⊕ V′. Moreover, if V, V′, and W are representations, all four of these maps are G-linear.

1. To define a linear transformation P : Hom(W, V ⊕ V′) → Hom(W, V) ⊕ Hom(W, V′), we take f ∈ Hom(W, V ⊕ V′). Then e_V ◦ f ∈ Hom(W, V) and e_{V′} ◦ f ∈ Hom(W, V′), and we send f to P(f) := (e_V ◦ f, e_{V′} ◦ f) ∈ Hom(W, V) ⊕ Hom(W, V′). On the other hand, given (h, h′) ∈ Hom(W, V) ⊕ Hom(W, V′), we may define Q(h, h′) ∈ Hom(W, V ⊕ V′) via Q(h, h′) := i_V ◦ h + i_{V′} ◦ h′. Composing these two linear transformations, we see

(Q ◦ P)(f) = i_V ◦ (e_V ◦ f) + i_{V′} ◦ (e_{V′} ◦ f) = 1_{V⊕V′} ◦ f = f

Therefore, our map Hom(W, V ⊕ V′) → Hom(W, V) ⊕ Hom(W, V′) is injective. Since both sides are vector spaces of the same dimension, this implies that our map is an isomorphism.

We need to check that if V, V′, and W are representations of G, then this isomorphism is G-linear. Recall that if g ∈ G, then ρ_{Hom(W,V⊕V′)}(g)(f) = ρ_{V⊕V′}(g) ◦ f ◦ ρ_W(g−1). Then

P(ρ_{Hom(W,V⊕V′)}(g)(f)) = P(ρ_{V⊕V′}(g) ◦ f ◦ ρ_W(g−1))
= (e_V ◦ ρ_{V⊕V′}(g) ◦ f ◦ ρ_W(g−1), e_{V′} ◦ ρ_{V⊕V′}(g) ◦ f ◦ ρ_W(g−1))
= (ρ_V(g) ◦ e_V ◦ f ◦ ρ_W(g−1), ρ_{V′}(g) ◦ e_{V′} ◦ f ◦ ρ_W(g−1))
= (ρ_{Hom(W,V)}(g)(e_V ◦ f), ρ_{Hom(W,V′)}(g)(e_{V′} ◦ f))
= ρ_{Hom(W,V)⊕Hom(W,V′)}(g)(e_V ◦ f, e_{V′} ◦ f)
= ρ_{Hom(W,V)⊕Hom(W,V′)}(g)(P(f))

so P is G-linear.

2. To define a linear transformation S : Hom(V ⊕ V′, W) → Hom(V, W) ⊕ Hom(V′, W), we take f ∈ Hom(V ⊕ V′, W). Then f ◦ i_V ∈ Hom(V, W) and f ◦ i_{V′} ∈ Hom(V′, W), and we send f to S(f) := (f ◦ i_V, f ◦ i_{V′}) ∈ Hom(V, W) ⊕ Hom(V′, W). On the other hand, given (h, h′) ∈ Hom(V, W) ⊕ Hom(V′, W), we have h ◦ e_V, h′ ◦ e_{V′} ∈ Hom(V ⊕ V′, W), so we may define a linear transformation T : Hom(V, W) ⊕ Hom(V′, W) → Hom(V ⊕ V′, W) by sending (h, h′) ↦ h ◦ e_V + h′ ◦ e_{V′}. Composing these two maps, we see

(T ◦ S)(f) = (f ◦ i_V) ◦ e_V + (f ◦ i_{V′}) ◦ e_{V′} = f ◦ 1_{V⊕V′} = f

Therefore, our map Hom(V ⊕ V′, W) → Hom(V, W) ⊕ Hom(V′, W) is injective, and since the two vector spaces have the same dimension, it is an isomorphism.


We need to check that S is G-linear when our vector spaces are representations. Indeed,

S(ρ_{Hom(V⊕V′,W)}(g)(f)) = S(ρ_W(g) ◦ f ◦ ρ_{V⊕V′}(g−1))
= (ρ_W(g) ◦ f ◦ ρ_{V⊕V′}(g−1) ◦ i_V, ρ_W(g) ◦ f ◦ ρ_{V⊕V′}(g−1) ◦ i_{V′})
= (ρ_W(g) ◦ f ◦ i_V ◦ ρ_V(g−1), ρ_W(g) ◦ f ◦ i_{V′} ◦ ρ_{V′}(g−1))
= (ρ_{Hom(V,W)}(g)(f ◦ i_V), ρ_{Hom(V′,W)}(g)(f ◦ i_{V′}))
= ρ_{Hom(V,W)⊕Hom(V′,W)}(g)(f ◦ i_V, f ◦ i_{V′})
= ρ_{Hom(V,W)⊕Hom(V′,W)}(g)(S(f))

so S is G-linear, as desired.

3.1 The Regular Representation

Let G be a finite group. Recall the definition of the regular representation of G: Vreg has a basis {bg}g∈G indexed by elements of G, and the action of G is given by ρreg(g′)(bg) = bg′g. We wish to study the decomposition of Vreg into irreducible representations. We can say more:

Theorem 3.3. Let Vreg ≅ V1 ⊕ · · · ⊕ Vr be the decomposition of the regular representation into irreducible subrepresentations. Then if W is an irreducible representation of G, the number of Vi isomorphic to W is dim W.

This has an important consequence:

Corollary 3.4. A finite group G has only finitely many irreducible representations, up to isomorphism, and they all have dimension at most |G|.

It also has a numerical consequence:

Corollary 3.5. |G| = ∑_W (dim W)^2, where the sum runs over the irreducible representations of G.

This lets us sharpen the previous corollary, because it implies that the irreducible representations have dimension at most √|G|.

Example 3.6. Consider G = S3. We can write down all of its irreducible representations. We saw on the first problem sheet that the 3-dimensional permutation representation is the direct sum of the trivial representation and a 2-dimensional representation. Since the permutation representation does not contain any non-trivial 1-dimensional representation, this 2-dimensional representation is irreducible. There is another 1-dimensional representation of S3, namely the sign representation, which sends each transposition to −1. So we have two 1-dimensional representations and one irreducible 2-dimensional representation. Since |S3| = 6 = 1^2 + 1^2 + 2^2, we have found every irreducible representation of S3.


Example 3.7. Let G = D8. On the first problem sheet, you found that there are four 1-dimensional representations of D8. If Vreg ≅ V1^{⊕d1} ⊕ · · · ⊕ Vr^{⊕dr} with di = dim Vi, then we have 8 = d1^2 + . . . + dr^2. The only solutions to this equation (with di ≥ 1 and di ∈ N) are r = 8 with every di = 1; r = 2 with d1 = d2 = 2; and r = 5 with one di = 2 and the rest equal to 1. Since there are exactly four 1-dimensional representations, we must be in the last case: D8 has four 1-dimensional irreducible representations and one 2-dimensional irreducible representation. (A brute-force check of the possible dimension vectors is sketched below.)
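Here is an illustrative Python sketch (the helper name dimension_vectors is ours) listing all ways to write 8 as a sum of squares of positive integers, as used in the example above.

```python
def dimension_vectors(total, smallest=1):
    """All non-decreasing tuples of positive integers whose squares sum to total."""
    if total == 0:
        return [()]
    result = []
    d = smallest
    while d * d <= total:
        for rest in dimension_vectors(total - d * d, d):
            result.append((d,) + rest)
        d += 1
    return result

print(dimension_vectors(8))
# [(1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 2), (2, 2)]
```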

Lemma 3.8. Let W be an irreducible representation of G. Define the evaluation map ev : Hom(Vreg, W)^G → W via f ↦ f(be). This map is an isomorphism.

Proof. First we prove that ev is injective. So suppose that ev(f) = f(be) = 0 for some f ∈ Hom(Vreg, W)^G. But then f : Vreg → W is G-linear, so f(bg) = f(ρreg(g)be) = ρW(g)f(be) = 0 for all g ∈ G, so f = 0.

Now we prove that this map is surjective. Choose some w ∈ W; we will construct a G-linear f : Vreg → W with f(be) = w. It is enough to specify f(bg) for all g ∈ G, so we set f(bg) := ρW(g)(w) and extend by linearity. We check that f is G-linear: for g, g′ ∈ G,

(f ◦ ρreg(g′))(bg) = f(bg′g) = ρW(g′g)(w)

(ρW(g′) ◦ f)(bg) = ρW(g′)(ρW(g)(w)) = ρW(g′g)(w)

Since the two maps agree on basis vectors, they are the same.

4 Duals and Tensor Products

Recall the following definition:

Definition 4.1. Let V be a vector space. The dual vector space is V ∗ := Hom(V,C).

This is a special case of the construction Hom(V, W), with W = C. It is clear that dim V∗ = dim V; in fact, if we choose a basis (v1, . . . , vd) for V, we may define a dual basis (f1, . . . , fd), where

fi(vj) = 1 if i = j, and fi(vj) = 0 if i ≠ j.

Suppose that (V, ρV) is a representation of G. Then we have defined a representation (V∗, ρ∗V) by setting

ρ∗V(g) := ρHom(V,C)(g) : f ↦ f ◦ ρV(g−1)

(where G acts trivially on C). We call this the dual representation.

Lemma 4.2. Let V be a finite-dimensional vector space. Then the map h : V → (V∗)∗, defined by letting h(v) be the linear map h(v) : V∗ → C with h(v)(f) = f(v), is an isomorphism. If V is a representation of a finite group G, then this isomorphism is G-linear.


Example 4.3. Let ρ : G → GL1(C) be a 1-dimensional matrix representation. If we write down ρ∗, we get

ρ∗(g)(f) = f ◦ ρ(g−1) = ρ(g−1) · f

since ρ(g−1) is just multiplication by a scalar. So the matrix for ρ∗(g) (with respect to any basis of C) is ρ(g)−1.

We wish to consider dual matrix representations more generally. We may choose a basis for V and view it as the set of column vectors of length d; then V∗ is the set of row vectors of length d.

Proposition 4.4. Let (V, ρV) be a representation of G, let B be a basis of V, and let B∗ be the dual basis for V∗. Then the matrix representation ρ∗V,B∗ : G → GLd(C) is given by ρ∗V,B∗(g) = (ρV,B(g)−1)^t, where t refers to taking the transpose.

Proof. Let B = (v1, . . . , vd) and let B∗ = (f1, . . . , fd). For any g ∈ G, write ρV,B(g−1) = (g_{kj}). Then

ρ∗V,B∗(g)(fi)(vj) = fi(ρV(g−1)(vj)) = fi(∑_{k=1}^d g_{kj} vk) = g_{ij}

It follows that ρ∗V,B∗(g)(fi) = ∑_j g_{ij} fj, so the matrix of ρ∗V,B∗(g) with respect to B∗ is the transpose of ρV,B(g−1), which is (ρV,B(g)−1)^t.

Proposition 4.5. A representation (V, ρV) is irreducible if and only if (V∗, ρ∗V) is irreducible.

Proof. Suppose V ≅ W ⊕ W′, where W, W′ are non-zero representations. Then V∗ = Hom(V, C) ≅ Hom(W, C) ⊕ Hom(W′, C), so V∗ is also decomposable (since dim W∗ = dim W and dim W′∗ = dim W′). Since (V∗)∗ ≅ V via a G-linear isomorphism, the same argument shows that if V∗ is decomposable, so is V. Since (by Maschke’s theorem) reducible and decomposable coincide for representations of a finite group, the claim follows.

Thus, given an irreducible representation of G, we may take its dual to obtain another one. However, V∗ may very well be isomorphic to V. For example, consider the 2-dimensional irreducible representation (V2, ρ2) of S3. Explicitly,

ρ2((123)) = \begin{pmatrix} -1/2 & -√3/2 \\ √3/2 & -1/2 \end{pmatrix}, \quad ρ2((23)) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

But these matrices are equal to their own inverse transposes, so ρ∗2 = ρ2 as matrix representations, and V2 ≅ V∗2.
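This self-duality is easy to confirm numerically; the following Python sketch (numpy, illustrative only) checks that each generating matrix equals its own inverse transpose.

```python
import numpy as np

rot = np.array([[-0.5, -np.sqrt(3) / 2],
                [np.sqrt(3) / 2, -0.5]])    # rho_2((123))
ref = np.array([[1.0, 0.0],
                [0.0, -1.0]])               # rho_2((23))

for m in (rot, ref):
    # the dual representation is given by inverse transposes, which here agree with m
    assert np.allclose(np.linalg.inv(m).T, m)
```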

Now we turn to tensor products. They can be fairly abstract, but we will give a more hands-on discussion.

Definition 4.6. Let V and W be finite-dimensional complex vector spaces with bases (v1, . . . , vdV) and (w1, . . . , wdW), respectively. Then we define V ⊗ W to be the vector space with basis given by the symbols {vi ⊗ wj}. It has dimension dV · dW.

If v = ∑_i ai vi and w = ∑_j bj wj, we define v ⊗ w := ∑_{i,j} ai bj (vi ⊗ wj). Be careful: not every element of V ⊗ W is of the form v ⊗ w.

If (V, ρV) and (W, ρW) are representations of G, we can also define a representation (V ⊗ W, ρV⊗W):


Definition 4.7. For g ∈ G, we define ρV⊗W(g) : V ⊗ W → V ⊗ W by setting ρV⊗W(g)(vi ⊗ wt) := ρV(g)(vi) ⊗ ρW(g)(wt).

If ρV(g) has matrix M and ρW(g) has matrix N (with respect to the chosen bases), then ρV⊗W(g) sends

vi ⊗ wt ↦ (∑_{j=1}^{dV} M_{ji} vj) ⊗ (∑_{s=1}^{dW} N_{st} ws) = ∑_{j,s} M_{ji} N_{st} (vj ⊗ ws)

So in terms of matrices, ρV⊗W(g) is given by a dV dW × dV dW matrix we call M ⊗ N, whose entries are [M ⊗ N]_{(j,s),(i,t)} = M_{ji} N_{st}.
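In computational terms, M ⊗ N is the Kronecker product, which numpy provides as np.kron (the entries are the same, up to how one orders the basis vectors vj ⊗ ws). The sketch below (illustrative; the random matrices are ours) also checks the trace identity Tr(M ⊗ N) = Tr(M) Tr(N), which we will meet again in Proposition 5.9.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
N = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# trace of the Kronecker product is the product of the traces
assert np.allclose(np.trace(np.kron(M, N)), np.trace(M) * np.trace(N))
```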

We have given a construction in terms of bases, but we would like to know that if we change our choice of bases of V and W, we get an equivalent representation. Fortunately, we can give another description of V ⊗ W:

Proposition 4.8. The vector space V ⊗ W is isomorphic to Hom(V∗, W). If V and W are representations of G, there is a G-linear isomorphism.

Proof. Let (f1, . . . , fdV) be the dual basis for V∗. We define hti : V∗ → W by setting hti(fi) = wt, and hti(fj) = 0 if i ≠ j; the set {hti} forms a basis for Hom(V∗, W). Concretely, Hom(V∗, W) can be viewed as the vector space of dW × dV matrices, and hti corresponds to the linear transformation with a 1 in the (t, i)-entry and 0s elsewhere.

Now we define a linear transformation Hom(V∗, W) → V ⊗ W via hti ↦ vi ⊗ wt. This is clearly an isomorphism, because Hom(V∗, W) and V ⊗ W have the same dimension, and we are simply sending basis vectors to distinct basis vectors.

It remains to check that this linear transformation is G-linear. It is enough to compute the matrix representing ρHom(V∗,W)(g) with respect to {hti}. Suppose that ρV(g) is represented by the matrix M (with respect to the chosen basis). Recall also that ρHom(V∗,W)(g) acts via hti ↦ ρW(g) ◦ hti ◦ ρV∗(g−1). Moreover, the matrix for ρV∗(g−1) with respect to {fi} is given by M^t, so ρV∗(g−1) sends fk ↦ ∑_j M_{kj} fj. So hti ◦ ρV∗(g−1) sends fk ↦ M_{ki} wt. If ρW(g) is represented by the matrix N (with respect to {ws}), then ρW(g) ◦ hti ◦ ρV∗(g−1) sends fk ↦ M_{ki} (∑_s N_{st} ws).

It follows that ρHom(V∗,W)(g) sends hti to ∑_{j,s} M_{ji} N_{st} hsj, which is the formula we wrote down for ρV⊗W(g) under the identification hti ↔ vi ⊗ wt.

Example 4.9. Suppose that V is 1-dimensional, so that ρV(g) is just multiplication by a scalar λg, for every g ∈ G. Then if the matrix representing ρW(g) with respect to some basis is Ng, the matrix representing ρV⊗W(g) is λg Ng.

In particular, if V and W are both 1-dimensional, tensoring the representations simply multiplies them.


Suppose G = Sn, and ρV and ρW are both the sign representation. Then ρV⊗W is the trivial representation.

5 Character theory

5.1 Definitions and basic properties

There are two definitions of the word “character” you might come across in representation theory. The first is that a character is a 1-dimensional representation. We will not use this terminology in this course, to avoid confusion.

Definition 5.1. Let M = (Mij) be a d × d matrix. Then the trace of M is Tr(M) = ∑_i M_{ii} (the sum of the diagonal entries).

If f : V → V is a linear transformation, we choose a basis B of V, and define Tr(f) := Tr([f]B).

Recall that if M, N are d × d matrices, then Tr(MN) = Tr(NM). As a result, if P ∈ GLd(C) is invertible, then Tr(PMP−1) = Tr(M). Thus, the trace of a linear transformation is independent of the chosen basis.

Definition 5.2. Let (V, ρV) be a finite-dimensional complex representation of a finite group G. The character associated to (V, ρV) is the function χV : G → C given by χV(g) = Tr(ρV(g)).

This is generally not a homomorphism, because traces are not multiplicative.

Example 5.3. If (V, ρV) is a 1-dimensional representation of G, then χV(g) = ρV(g) for all g ∈ G.

Example 5.4. Let (V, ρV) be the irreducible 2-dimensional representation of S3. Then χV(e) = 2, χV((123)) = −1, and χV((23)) = 0.

Lemma 5.5. Let (V, ρV) be a representation of G. Then for all g, h ∈ G, χV(hgh−1) = χV(g).

Proof. Since ρV(h) is invertible,

χV(hgh−1) = Tr(ρV(h)ρV(g)ρV(h−1)) = Tr(ρV(g)) = χV(g)

Since (12) and (13) are conjugate to (23), and (132) is conjugate to (123), in the previous example we actually know all values of χV.


Lemma 5.6. Suppose (V, ρV) and (W, ρW) are isomorphic representations of a finite group G. Then χV = χW.

Proof. We choose bases BV and BW for V and W, respectively. This choice of bases gives us corresponding matrix representations ρV,BV, ρW,BW : G → GLd(C). Since our representations are isomorphic, there is some matrix P ∈ GLd(C) such that ρV,BV(g) = PρW,BW(g)P−1 for all g ∈ G. Since traces are invariant under conjugation, χV(g) = Tr(ρV,BV(g)) = Tr(ρW,BW(g)) = χW(g) for all g ∈ G.

We are going to prove the converse, that is, that if (V, ρV) and (W, ρW) are representations of G such that χV = χW, then (V, ρV) ≅ (W, ρW).

Proposition 5.7. Let (V, ρV ) be a representation of G.

1. χV (e) = dimV

2. χV(g−1) = \overline{χV(g)} (where the bar refers to complex conjugation)

3. For all g ∈ G, |χV(g)| ≤ dim(V), with equality if and only if ρV(g) = λ·1V for some λ ∈ C.

Proof. 1. Since ρV(e) = 1V, χV(e) = Tr(1V) = dim V.

2. Since ρV(g) is an invertible matrix of finite order (if g^n = e, then ρV(g)^n = 1), we can choose a basis B of V so that the associated matrix ρV,B(g) is diagonal, with diagonal entries λ1, . . . , λd. The λi must be roots of unity, so λi^{−1} = \overline{λi}. Thus, ρV,B(g−1) is also a diagonal matrix, with entries λ1^{−1}, . . . , λd^{−1}. It follows that

χV(g−1) = λ1^{−1} + . . . + λd^{−1} = \overline{λ1} + . . . + \overline{λd} = \overline{χV(g)}

3. We may again write χV(g) = λ1 + . . . + λd, where the λi are roots of unity. By the triangle inequality,

|χV(g)| ≤ ∑_i |λi| = d

so |χV(g)| ≤ dim V. We have equality if and only if all the λi have the same argument, i.e., λi = ri e^{iθ} for a single θ ∈ [0, 2π) and ri ≥ 0. Since the λi all have absolute value 1, ri = 1 for all i, so we have the desired equality if and only if λ1 = λ2 = . . . = λd, in which case ρV(g) is multiplication by a scalar.

Corollary 5.8. If (V, ρV) is a representation of G, then ρV(g) = 1V if and only if χV(g) = dim V.


Proof. It is clear that if ρV(g) = 1V, then χV(g) = dim V.

If χV(g) = dim V, then |χV(g)| = dim V, so the previous proposition implies that ρV(g) is multiplication by some λ ∈ C. Then χV(g) = λ · dim V, and if χV(g) = dim V we must have λ = 1.

Thus, the character of a representation detects which elements act trivially, and in particular whether or not the representation is faithful.

Proposition 5.9. Let (V, ρV ) and (W, ρW ) be representations of G.

1. χV⊕W (g) = χV (g) + χW (g)

2. χV⊗W (g) = χV (g)χW (g)

3. χV∗(g) = \overline{χV(g)}

4. χHom(V,W)(g) = \overline{χV(g)} χW(g)

Proof. Choose bases for V and W, and let M and N be the matrices associated to ρV(g) and ρW(g), respectively.

1. The matrix for (ρV ⊕ ρW)(g) with respect to the chosen bases is the block-diagonal matrix

\begin{pmatrix} M & 0 \\ 0 & N \end{pmatrix}

so χV⊕W(g) = Tr(M) + Tr(N) = χV(g) + χW(g).

2. The matrix M ⊗ N has entries [M ⊗ N]_{(j,s),(i,t)} = M_{ji} N_{st}, so its trace is

Tr(M ⊗ N) = ∑_{i,t} [M ⊗ N]_{(i,t),(i,t)} = ∑_{i,t} M_{ii} N_{tt} = Tr(M) Tr(N)

so the result follows.

3. Recall that the matrix for ρV∗(g) with respect to the dual basis is (M−1)^t. Taking transposes preserves traces, so

χV∗(g) = Tr(ρV∗(g)) = Tr((M−1)^t) = Tr(M−1) = χV(g−1) = \overline{χV(g)}

4. This follows from the isomorphism (of representations) V ⊗ W ≅ Hom(V∗, W) and the previous two parts.


5.1.1 The regular character

Let χreg denote the character of the regular representation.

Proposition 5.10. For any finite group G, χreg(e) = |G| and χreg(g) = 0 for g ≠ e. In addition, χreg(g) = ∑_i di χVi(g), where the sum ranges over the irreducible representations (Vi, ρVi) and di := dim Vi.

Proof. Recall that χreg(e) = dim Vreg, since this is the case for every representation. Since dim Vreg = |G|, χreg(e) = |G|.

Recall that the regular representation is given by ρreg(g)(bh) = bgh, so the matrix of ρreg(g) has its (bgh, bh) entries equal to 1 and 0 everywhere else. Thus, the trace is 0 unless there is some h such that gh = h. But this is impossible unless g = e.

For the last part, recall that Vreg ≅ ⊕_i Vi^{⊕ dim Vi}. Since χV⊕W = χV + χW, this implies that χreg = ∑_i di χVi.

Example 5.11. If G = Cn with generator g, then we may write χreg = ∑_{k=0}^{n−1} χk, where χk = ρk is the 1-dimensional representation g ↦ e^{2πik/n}. This implies that ∑_{k=0}^{n−1} e^{2πik/n} = 0, and more generally that for any integer s ∈ [1, n − 1],

∑_{k=0}^{n−1} e^{2πiks/n} = 0

(because the left-hand side is χreg(g^s) = 0).
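These vanishing sums are easy to confirm numerically; here is a short illustrative Python sketch (numpy; the choice n = 12 and the tolerance are ours).

```python
import numpy as np

n = 12
for s in range(1, n):
    # this sum is chi_reg(g^s), which vanishes for s = 1, ..., n-1
    total = sum(np.exp(2j * np.pi * k * s / n) for k in range(n))
    assert abs(total) < 1e-9
print("all sums vanish for n =", n)
```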

5.2 Inner products of characters

The set of characters of representations turns out to have a lot of structure, which is what makes the concept useful.

Definition 5.12. Let C(G) denote the set of functions f : G → C, and let Ccl(G) denote the set of functions f : G → C which are constant on each conjugacy class of G, i.e., such that f(hgh−1) = f(g) for all g, h ∈ G.

Both C(G) and Ccl(G) are finite-dimensional complex vector spaces: Given a function f : G → C, we can scale it by setting (λf)(g) := λ·f(g) for λ ∈ C, and given two functions f, f′ : G → C, we can add them by setting (f + f′)(g) := f(g) + f′(g).

There is a basis for C(G) given by the functions δg : G → C, defined by δg(g) = 1 and δg(h) = 0 for h ≠ g.

If f, f′ ∈ C(G) are constant on conjugacy classes of G, then so are λf and f + f′, so the same arguments imply that Ccl(G) is a vector space (and indeed, a subspace of C(G)).

Characters can be viewed as elements of Ccl(G). The dimension of C(G) is |G|, and the dimension of Ccl(G) is equal to the number of conjugacy classes of G.

Using our choice of basis for C(G), we have an inner product on C(G):


Definition 5.13. Let ξ, ψ : G → C be elements of C(G). We define

〈ξ, ψ〉 := (1/|G|) ∑_{g∈G} ξ(g) \overline{ψ(g)}

where \overline{ψ(g)} means complex conjugation.

Note that 〈ξ, ψ〉 ≠ 〈ψ, ξ〉 in general. In fact, 〈ξ, ψ〉 = \overline{〈ψ, ξ〉}, and 〈ξ, ψ〉 is linear in the first factor and conjugate-linear in the second factor, i.e., 〈λξ, µψ〉 = λ\overline{µ}〈ξ, ψ〉 for any λ, µ ∈ C. Furthermore, 〈ξ, ξ〉 = (1/|G|) ∑_g |ξ(g)|^2 ≥ 0, with equality if and only if ξ = 0.

This is the list of properties defining a Hermitian inner product. Notice that our basis is not quite orthonormal with respect to this inner product:

〈δg, δh〉 = 0 if g ≠ h, and 〈δg, δg〉 = 1/|G|.

Our first goal is the following theorem:

Theorem 5.14. Let (V, ρV) and (W, ρW) be representations of G and let χV and χW be the associated characters. Then

〈χV, χW〉 = dim Hom(V, W)^G

Before we start proving it, we record some corollaries:

Corollary 5.15. If (V, ρV) and (W, ρW) are irreducible representations of G, then 〈χV, χW〉 = 1 if they are isomorphic, and 〈χV, χW〉 = 0 otherwise.

Proof. This follows from Schur’s lemma, combined with the theorem.

Corollary 5.16. Let χ1, . . . , χr denote the irreducible characters of G. Then

1. χ1, . . . , χr are orthonormal elements of Ccl(G).

2. We have an inequality: the number of conjugacy classes of G is at least r.

3. If (V, ρV) is any representation of G, then V ≅ ⊕_i Vi^{⊕〈χV,χi〉} and χV = ∑_i 〈χV, χi〉 χi.

4. If (V, ρV) is a representation of G, it is irreducible if and only if 〈χV, χV〉 = 1.

Proof. 1. This follows from the previous corollary.

2. Since the χi are orthonormal, they are linearly independent. Therefore dim Ccl(G) ≥ r, and dim Ccl(G) is the number of conjugacy classes of G.


3. Recall that V ≅ ⊕_i Vi^{⊕ dim Hom(V,Vi)^G}. The result then follows from the theorem.

4. We may write V ≅ ⊕_i Vi^{⊕mi} for some multiplicities mi. Then

〈χV, χV〉 = ∑_{i,j} mi mj 〈χi, χj〉 = ∑_i mi^2

Then 〈χV, χV〉 = 1 exactly when one of the mi is 1 and the rest are 0.

Next we give a pair of useful lemmas.

Lemma 5.17. Let V be a vector space. Then f 7→ Tr(f) gives us a linear map Tr :Hom(V, V ) → C. That is, if f1, f2 ∈ Hom(V, V ) and λ1, λ2 ∈ C, then Tr(λ1f1 + λ2f2) =λ1 Tr(f1) + λ2 Tr(f2).

Proof. Pick a basis for V , and let M and N be the matrices representing f1 and f2, respec-tively. Then

Tr(λ1f1 + λ2f2) = Tr(λ1M + λ2N) =d∑i=1

(λ1Mii + λ2Nii)

= λ1

d∑i=1

Mii + λ2

d∑i=1

Nii = λ1 Tr(f1) + λ2 Tr(f2)

Lemma 5.18. Let V be a vector space, with a subspace W ⊂ V , and let π : V → W be aprojection to W (so π|W = 1W ). Then Tr(π) = dimW .

Proof. Recall that W and ker(π) are complementary subspaces of V, so the natural map W ⊕ ker(π) → V is an isomorphism. If we choose bases BW and Bπ for W and ker(π), respectively, their union is a basis for V. If we write π as a matrix with respect to this basis, we get a block matrix of the form

( 1W 0 ; 0 0 )

Therefore, Tr(π) = Tr(1W) = dim W.
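As a quick numerical illustration of this lemma, here is a minimal sketch, assuming numpy is available (the matrix B below is an arbitrary choice, not anything from the notes): it builds the orthogonal projection onto a 2-dimensional subspace of a 5-dimensional space and checks that its trace is 2.

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2))          # columns span a 2-dimensional subspace W of a 5-dimensional V
P = B @ np.linalg.inv(B.T @ B) @ B.T     # the orthogonal projection of V onto W

print(np.allclose(P @ B, B))             # True: P restricts to the identity on W
print(round(float(np.trace(P)), 8))      # 2.0 = dim W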

Proof of Theorem 5.14. Recall that we can view Hom(V, W) as a representation of G, and the space of G-linear maps is its invariant subspace: Hom(V, W)^G ⊂ Hom(V, W). Then we constructed a projection map e : Hom(V, W) → Hom(V, W)^G via f ↦ (1/|G|) ∑_{g∈G} ρ_{Hom(V,W)}(g)(f).


We are interested in computing dim Hom(V, W)^G, and it suffices to compute Tr(e). But since e : Hom(V, W) → Hom(V, W)^G is a linear combination of the maps ρ_{Hom(V,W)}(g) : Hom(V, W) → Hom(V, W), we see that

Tr(e) = (1/|G|) ∑_{g∈G} Tr(ρ_{Hom(V,W)}(g))

However, Tr(ρ_{Hom(V,W)}(g)) = χ_{Hom(V,W)}(g), and we proved earlier that χ_{Hom(V,W)}(g) = \overline{χV(g)} χW(g). Therefore,

Tr(e) = (1/|G|) ∑_{g∈G} \overline{χV(g)} χW(g) = 〈χW, χV〉

Since in this case 〈χW, χV〉 is a real number, 〈χW, χV〉 = 〈χV, χW〉, as desired.

Example 5.19. Let G = C4 = 〈g : g⁴ = e〉. Consider the 2-dimensional matrix representation ρ : G → GL2(C) given by

ρ(g) = ( i 2 ; 1 −i )

Since ρ(g²) = ρ(g)² = ( 1 0 ; 0 1 ), we have

χ(e) = 2,  χ(g) = 0,  χ(g²) = 2,  χ(g³) = 0

Now the irreducible characters of G are those coming from the 1-dimensional matrix representations ρk : G → GL1(C), ρk(g) := i^k for k = 0, 1, 2, 3. So we have

χk(e) = 1,  χk(g) = i^k,  χk(g²) = (−1)^k,  χk(g³) = (−i)^k

Now we compute 〈χ, χk〉:

〈χ, χk〉 = (1/4)(2 · 1 + 0 · \overline{i^k} + 2 · \overline{(−1)^k} + 0 · \overline{(−i)^k}) = (1/4)(2 + 2 · (−1)^k) = (1/2)(1 + (−1)^k) = 1 if k = 0, 2 and 0 if k = 1, 3.

Thus, ρ is isomorphic to the direct sum of ρ0 and ρ2.
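Here is a minimal numerical check of this decomposition (a sketch only, assuming numpy is available; the matrix and characters are exactly the ones in the example):

import numpy as np

rho_g = np.array([[1j, 2], [1, -1j]])                                   # rho(g)
chi = [np.trace(np.linalg.matrix_power(rho_g, n)) for n in range(4)]    # chi(e), chi(g), chi(g^2), chi(g^3)

for k in range(4):
    chi_k = [1j ** (k * n) for n in range(4)]                           # irreducible character chi_k of C4
    inner = sum(c * np.conj(ck) for c, ck in zip(chi, chi_k)) / 4
    print(k, complex(np.round(inner, 10)))                              # 1 for k = 0, 2 and 0 for k = 1, 3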

Example 5.20. Let G = D8 = 〈s, t : s⁴ = t² = e, tst = s⁻¹〉, and consider again the 4-dimensional representation (V, ρV) coming from its action on the vertices of a square. Explicitly,

ρV(s) =
( 0 0 0 1 )
( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 1 0 )

ρV(t) =
( 0 0 0 1 )
( 0 0 1 0 )
( 0 1 0 0 )
( 1 0 0 0 )

The conjugacy classes of D8 are {e}, {s, s⁻¹}, {s²}, {t, s²t}, {st, s³t}, and since characters are constant on conjugacy classes, we can write down the values of χV:

          {e}  {s, s⁻¹}  {s²}  {t, s²t}  {st, s³t}
χV(g)      4      0       0       0         2


We first compute

〈χV, χV〉 = (1/8) ∑_{g∈D8} |χV(g)|² = (1/8)(4² + 2 · 0² + 0² + 2 · 0² + 2 · 2²) = 3

Thus, V is reducible. There is an “easy” 1-dimensional subrepresentation, namely the one generated by (1, 1, 1, 1)ᵀ, which is isomorphic to the trivial representation ρtriv. Indeed,

〈χV, χtriv〉 = (1/8)((4 · 1) + 2 · (0 · 1) + (0 · 1) + 2 · (0 · 1) + 2 · (2 · 1)) = 1

so Vtriv is isomorphic to a subrepresentation of V, and in fact dim Hom(Vtriv, V)^{D8} = 1, so the subspace generated by (1, 1, 1, 1)ᵀ is the largest trivial subrepresentation of V.

There is a 3-dimensional complement W ⊂ V, so we can expand our table (using the fact that χtriv + χW = χV):

            {e}  {s, s⁻¹}  {s²}  {t, s²t}  {st, s³t}
χV(g)        4      0       0       0         2
χtriv(g)     1      1       1       1         1
χW(g)        3     −1      −1      −1         1

Now we compute

〈χW, χW〉 = (1/8) ∑_{g∈D8} |χW(g)|² = (1/8)(3² + 2 · (−1)² + (−1)² + 2 · (−1)² + 2 · 1²) = 2

so W is also reducible. It must therefore have a 1-dimensional subrepresentation, and the characters of the non-trivial 1-dimensional representations of D8 are

            {e}  {s, s⁻¹}  {s²}  {t, s²t}  {st, s³t}
χ+−(g)       1      1       1      −1        −1
χ−+(g)       1     −1       1       1        −1
χ−−(g)       1     −1       1      −1         1

Then we may compute 〈χW, χ+−〉, 〈χW, χ−+〉, and 〈χW, χ−−〉, and we see that 〈χW, χ−−〉 = 1. Thus, W is the direct sum of the 1-dimensional representation with s, t ↦ −1 and a 2-dimensional representation W′.

Finally,

〈χW′, χW′〉 = 〈χW − χ−−, χW − χ−−〉 = 〈χW, χW〉 − 2〈χW, χ−−〉 + 〈χ−−, χ−−〉 = 1

so W′ is irreducible.
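Since these inner products only use the character values on conjugacy classes, weighted by the class sizes, they are easy to recompute. A minimal sketch in plain Python (the lists simply transcribe the tables in this example):

class_sizes = [1, 2, 1, 2, 2]        # sizes of {e}, {s, s^-1}, {s^2}, {t, s^2 t}, {st, s^3 t}

def inner(chi, psi):
    # <chi, psi> = (1/8) * sum over classes of |C| * chi(C) * conj(psi(C)); all values here are real
    return sum(c * x * y for c, x, y in zip(class_sizes, chi, psi)) / 8

chi_V    = [4, 0, 0, 0, 2]
chi_triv = [1, 1, 1, 1, 1]
chi_W    = [x - y for x, y in zip(chi_V, chi_triv)]    # chi_W = chi_V - chi_triv
chi_mm   = [1, -1, 1, -1, 1]                           # the character chi_{--}

print(inner(chi_V, chi_V))      # 3.0: the squared multiplicities in V sum to 3
print(inner(chi_W, chi_W))      # 2.0: W is still reducible
print(inner(chi_W, chi_mm))     # 1.0: chi_{--} occurs exactly once in W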


5.2.1 Character tables

A character table is a way of recording information about every irreducible character of a group. Let χ1, . . . , χr be the irreducible characters of G, and let C1, . . . , Cs be the conjugacy classes of G. We have seen that r ≤ s.

The character table for G is a table with columns indexed by C1, . . . , Cs and rows indexed by χ1, . . . , χr. For computational convenience, we will generally also add a line recording the size of the conjugacy classes. Last week, we gave the example of the character table of D8:

                          {e}  {s, s⁻¹}  {s²}  {st, s⁻¹t}  {t, s²t}
size of conjugacy class    1      2       1        2          2
χtriv(g)                   1      1       1        1          1
χ+−(g)                     1      1       1       −1         −1
χ−+(g)                     1     −1       1       −1          1
χ−−(g)                     1     −1       1       +1         −1
χ2(g)                      2      0      −2        0          0

A very small example is the character table of C2 = 〈g : g² = e〉. There are two conjugacy classes and two characters, so we get

                          {e}  {g}
size of conjugacy class    1    1
χ1                         1    1
χ2                         1   −1

Let’s give a bigger example of a character table, and compute the character table of S4. Recall that in a symmetric group, the conjugacy classes are indexed by the shape of the cycles. So the transpositions (1 2), (1 3), (1 4), (2 3), (2 4), (3 4) are all conjugate, the 3-cycles (1 2 3), (1 3 2), (1 2 4), (1 4 2), (1 3 4), (1 4 3), (2 3 4), (2 4 3) are all conjugate, and so on. We start off with the trivial character and the sign character:

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1

We need to find the rest of the irreducible characters of S4. To start with, consider the permutation representation (Vperm, ρperm); this is a 4-dimensional representation with basis (b1, b2, b3, b4), and S4 acts by acting on the indices. That is, if σ ∈ S4, then ρperm(σ)(bi) = bσ·i. We can compute the character χperm:

χperm(e) = 4,  χperm(1 2) = 2,  χperm(1 2 3) = 1,  χperm((1 2)(3 4)) = 0,  χperm(1 2 3 4) = 0

Now observe that b1 + b2 + b3 + b4 ∈ Vperm is fixed by ρperm(σ) for every σ ∈ S4. Thus, the span of b1 + b2 + b3 + b4 is a 1-dimensional subrepresentation of Vperm, isomorphic to the trivial


representation. Therefore, by Maschke’s theorem there is a complementary subrepresentation W ⊂ Vperm, i.e., W ⊂ Vperm is a subrepresentation, W ∩ 〈b1 + b2 + b3 + b4〉 = {0}, and the map W ⊕ 〈b1 + b2 + b3 + b4〉 → Vperm (given by (w, v) ↦ w + v) is an isomorphism.

This implies that W is 3-dimensional. We can use character theory to prove that W is irreducible. Since taking direct sums of representations translates to adding their characters, we can compute χW. Namely, χW = χperm − χtriv, so

χW(e) = 3,  χW(1 2) = 1,  χW(1 2 3) = 0,  χW((1 2)(3 4)) = −1,  χW(1 2 3 4) = −1

Recall that a representation (W, ρW) is irreducible if and only if the inner product 〈χW, χW〉 = 1. We compute

〈χW, χW〉 = (1/24)(3² + 6 · 1² + 8 · 0² + 3 · (−1)² + 6 · (−1)²) = 1

so W is indeed irreducible.

We add χW to our table:

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1

There are more irreducible representations of S4, because 1² + 1² + 3² = 11 < 24. In fact, the squares of the dimensions of the remaining representations must add to 13, so we are looking for a 2-dimensional representation and a 3-dimensional representation.

One way to construct another 3-dimensional representation is to take the tensor product W′ := Vsign ⊗ W. This is 3-dimensional (because dim Vsign = 1 and dim W = 3), and it is irreducible by an exercise on problem sheet #3. We can compute χW′:

χW′ = χsign⊗W = χsign · χW

Thus, our table is now

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1
χW′                        3    −1       0        −1           1

Note that since χW′ ≠ χW, we know that W and W′ are not isomorphic as representations.


Now we need to find the character of our mysterious 2-dimensional representation, which we will call (U, ρU). We add its row to the character table:

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1
χW′                        3    −1       0        −1           1
χU                         2     ?       ?         ?           ?

We know that U is the only irreducible 2-dimensional representation of S4 (for dimension reasons). On the other hand Vsign ⊗ U is an irreducible 2-dimensional representation of S4; it follows that it must be isomorphic to U, and so we must have χsign · χU = χU. But that, in turn, implies that χU(1 2) = χU(1 2 3 4) = 0, so the table is now

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1
χW′                        3    −1       0        −1           1
χU                         2     0       ?         ?           0

We also know that 〈χU, χV〉 = 0 for any irreducible representation V not isomorphic to U. If we compute 〈χU, χW〉, we get

〈χU, χW〉 = (1/24)(2 · 3 + 6 · (0 · 1) + 8 · (χU(1 2 3) · 0) + 3 · (χU((1 2)(3 4)) · (−1)) + 6 · (0 · (−1))) = (1/24)(6 − 3χU((1 2)(3 4)))

Since this inner product must be 0, we see that χU((1 2)(3 4)) = 2, and our table is now

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1
χW′                        3    −1       0        −1           1
χU                         2     0       ?         2           0

Now we use 〈χU, χtriv〉 = 0:

〈χU, χtriv〉 = (1/24)(2 · 1 + 6 · (0 · 1) + 8 · (χU(1 2 3) · 1) + 3 · (2 · 1) + 6 · (0 · 1)) = (1/24)(8 + 8 · χU(1 2 3))


This implies that χU(1 2 3) = −1, so we have completed our table:

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1
χW′                        3    −1       0        −1           1
χU                         2     0      −1         2           0

So we have shown that there is some irreducible 2-dimensional representation (U, ρU) of S4, and we know the traces of ρU(σ) for σ ∈ S4.
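A minimal numerical check of the completed table (a sketch, assuming numpy is available; the rows and class sizes are transcribed from the table above): it verifies that the five irreducible characters are orthonormal for the inner product weighted by class sizes.

import numpy as np

sizes = np.array([1, 6, 8, 3, 6])      # |C| for {e}, (1 2), (1 2 3), (1 2)(3 4), (1 2 3 4)
table = np.array([
    [1,  1,  1,  1,  1],               # chi_triv
    [1, -1,  1,  1, -1],               # chi_sign
    [3,  1,  0, -1, -1],               # chi_W
    [3, -1,  0, -1,  1],               # chi_W'
    [2,  0, -1,  2,  0],               # chi_U
])

gram = (table * sizes) @ table.T / 24  # Gram matrix of inner products <chi_i, chi_j>
print(np.allclose(gram, np.eye(5)))    # True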

We can say a bit more about (U, ρU), though. Observe that χU((1 2)(3 4)) = 2 = χU(e). We showed earlier that for any character, χ(g) = χ(e) if and only if ρ(g) = ρ(e). Let N ⊂ S4 be the normal subgroup generated by the products of two disjoint transpositions (e.g. (1 2)(3 4)); then we have shown that ρU(σ) = ρU(e) for σ ∈ N. Thus, we can view the homomorphism ρU : S4 → GL(U) as the composition

ρU : S4 ↠ S4/N ≅ S3 → GL(U)

Indeed, we have an inclusion S3 ↪ S4 by thinking of a permutation of {1, 2, 3} as a permutation of {1, 2, 3, 4} which fixes 4. Composing with the projection S4 ↠ S4/N, we get a homomorphism S3 → S4/N between groups of the same order. But since N ∩ S3 = {e} inside S4, this homomorphism is injective, so it is an isomorphism.

Thus, (U, ρU) can be viewed as a 2-dimensional representation of S3, and χU agrees with the character of the irreducible 2-dimensional representation of S3. So actually, we already knew about (U, ρU)!

This motivates the following definition:

Definition 5.21. Let N ◁ G be a normal subgroup of a finite group G, and let (\overline{V}, ρ_{\overline{V}}) be a representation of the quotient G/N. The inflation of (\overline{V}, ρ_{\overline{V}}) is the representation (V, ρV) of G, where V = \overline{V} as vector spaces, and ρV : G → GL(V) is the composition G ↠ G/N → GL(\overline{V}), the second map being ρ_{\overline{V}}. Equivalently, ρV(g) := ρ_{\overline{V}}(gN).

Observe that the inflation V is irreducible as a representation of G if and only if \overline{V} is irreducible as a representation of G/N.

5.2.2 Row and column orthogonality

Fact: Character tables are square, i.e., the number of irreducible representations of G is equal to the number of conjugacy classes of G.

We will prove this later on, but we assume it for now.


We have already proved that the rows of a character table can be related to each other:

〈χi, χj〉 := (1/|G|) ∑_{g∈G} χi(g) \overline{χj(g)} = 0 if i ≠ j, and 1 if i = j.

We refer to this as row orthogonality, because it tells us that the rows of a character table are mutually orthogonal.

We also have a result we refer to as column orthogonality:

Proposition 5.22. Let g ∈ G, and let C(g) denote the conjugacy class containing g. Then for any h ∈ G,

∑_{i=1}^r \overline{χi(g)} χi(h) = 0 if h ∉ C(g), and |G|/|C(g)| if h ∈ C(g).

Proof. Consider the class function δC(g) : G → C, defined by δC(g)(h) = 1 if h ∈ C(g) and δC(g)(h) = 0 otherwise. The set {δC(g)}g forms a basis for the space of class functions Ccl(G), where g ranges over representatives of each conjugacy class.

We proved earlier that {χi}_{i=1}^r are orthonormal (with respect to the inner product on C(G)), and therefore linearly independent. Therefore, if we assume that character tables are square, {χi}i form another basis for Ccl(G). We may therefore write

δC(g) = ∑_{i=1}^r 〈δC(g), χi〉 χi

We can compute directly that 〈δC(g), χi〉 = (|C(g)|/|G|) \overline{χi(g)}, so

δC(g) = ∑_{i=1}^r (|C(g)|/|G|) \overline{χi(g)} χi

Evaluating both sides on h, we see that

∑_{i=1}^r (|C(g)|/|G|) \overline{χi(g)} χi(h) = 0 if h ∉ C(g), and 1 if h ∈ C(g).

Then we may multiply both sides by |G|/|C(g)| to obtain the result.

Example 5.23. Take g = e in the previous proposition. Then it implies that

∑_{i=1}^r \overline{χi(e)} χi(e) = ∑_{i=1}^r (dim Vi)² = |G|

which is a familiar formula.


Example 5.24. We could have used column orthogonality to compute the last row of the character table for S4:

                          {e}  (1 2)  (1 2 3)  (1 2)(3 4)  (1 2 3 4)
size of conjugacy class    1     6       8         3           6
χtriv                      1     1       1         1           1
χsign                      1    −1       1         1          −1
χW                         3     1       0        −1          −1
χW′                        3    −1       0        −1           1
χU                         2     ?       ?         ?           ?

Then column orthogonality (applied to each column with itself) implies that χU(1 2) = χU(1 2 3 4) = 0, χU(1 2 3) = ±1, and χU((1 2)(3 4)) = ±2. Comparing the (1 2 3) column with the {e} column implies that χU(1 2 3) = −1, and comparing the (1 2)(3 4) column with the (1 2 3) column implies that χU((1 2)(3 4)) = 2.

We can reformulate our row and column orthogonality relations a bit. Let the conjugacy classes of G be C1, . . . , Cr, with representatives g1, . . . , gr, respectively. Now define an r × r matrix B by setting

Bij := √(|Cj|/|G|) χi(gj)

Proposition 5.25. The matrix B is unitary, i.e., B⁻¹ = \overline{B}^T.

Proof. It is enough to show that B\overline{B}^T = 1r. Indeed, this implies that det(B) ≠ 0, so B is invertible. Now

(B\overline{B}^T)ik = ∑_j (√(|Cj|/|G|) χi(gj) · √(|Cj|/|G|) \overline{χk(gj)}) = (1/|G|) ∑_j |Cj| χi(gj) \overline{χk(gj)} = 〈χi, χk〉

which is 1 if i = k and 0 otherwise, by row orthogonality.

We can rephrase this as saying that {δC(g)} and {χi} both give bases for the space of class functions Ccl(G). But {χi} is orthonormal and {δC(g)} is orthogonal but not orthonormal, so B is the change-of-basis matrix between {χi} and a rescaled version of {δC(g)}.
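As a sketch of Proposition 5.25 in a concrete case (assuming numpy; the table is the D8 character table from Section 5.2.1), one can check unitarity of B directly. Note that \overline{B}^T B = I is exactly column orthogonality, so unitarity packages both orthogonality relations at once.

import numpy as np

sizes = np.array([1, 2, 1, 2, 2])                 # |C| for {e}, {s,s^-1}, {s^2}, {st,s^-1 t}, {t,s^2 t}
table = np.array([
    [1,  1,  1,  1,  1],                          # chi_triv
    [1,  1,  1, -1, -1],                          # chi_{+-}
    [1, -1,  1, -1,  1],                          # chi_{-+}
    [1, -1,  1,  1, -1],                          # chi_{--}
    [2,  0, -2,  0,  0],                          # chi_2
], dtype=float)

B = np.sqrt(sizes / 8) * table                    # B_ij = sqrt(|C_j| / |G|) * chi_i(g_j)
print(np.allclose(B @ B.conj().T, np.eye(5)))     # row orthogonality
print(np.allclose(B.conj().T @ B, np.eye(5)))     # column orthogonality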

5.2.3 Summary

Characters, and character tables, are a tool for assembling information about representations of a group, in a way that’s convenient for computation.

We cannot recover a group from its character table. For example, D8 and the quaternion group Q8 are not isomorphic, but they have the same character table.


However, we can find the center of the group: g ∈ G is in the center Z(G) if and only if C(g) = {g}, which is the case if and only if ∑_{i=1}^r |χi(g)|² = |G|.

We can also find all normal subgroups of G from the character table, so in particular, we can determine whether or not G is simple.

Definition 5.26. Let χ : G→ C be a function. We define kerχ := {g ∈ G : χ(g) = χ(e)}.

We have proved that if (V, ρV) is a representation and χV is the associated character, then χV(g) = χV(e) if and only if ρV(g) = ρV(e) = 1V. Thus, ker χV = ker ρV. But ker ρV ⊂ G is a normal subgroup of G, so G is simple if and only if ker χ = {e} for every non-trivial irreducible character χ.

Proposition 5.27. Let G be a finite group and let H ◁ G be a normal subgroup. Let χ1, . . . , χr be the irreducible characters of G.

1. There is a representation (V, ρV) of G such that ker ρV = H.

2. There is a subset I ⊂ {1, . . . , r} such that H = ∩_{i∈I} ker χi.

Proof. 1. Consider the regular representation (Vreg, ρreg) of G/H, and let (V, ρV) be the inflation to G. The regular representation is faithful, so ker ρV = H.

2. Let (V, ρV) again be the inflation of the regular representation of G/H. Then V ≅ V1^{⊕m1} ⊕ · · · ⊕ Vr^{⊕mr}, where the Vi are the irreducible representations of G and the mi are the multiplicities. Let I := {i : mi > 0}. Then

H = ker ρV = ∩_{i∈I} ker ρi = ∩_{i∈I} ker χi

6 Algebras and modules

6.1 Algebras

Up to this point, we have viewed representations of groups as vector spaces together with linear actions of groups. The vector space has an additive structure, and the group acts by multiplication, but we have treated these additive and multiplicative structures separately.

An algebra is a structure that has both addition and multiplication. A basic example is the set of d × d matrices Matd(C): it is a vector space because we can add matrices and scale them by complex numbers, but we can also multiply matrices.

In other words, we have a map

m : Matd(C) × Matd(C) → Matd(C)

(M, N) ↦ M · N

This map has the following important properties:


1. m is bilinear: m(λ1M1 + λ2M2, N) = λ1 m(M1, N) + λ2 m(M2, N), and similarly in the second variable

2. m is associative: m(m(L, M), N) = m(L, m(M, N)), or equivalently, (L · M) · N = L · (M · N)

3. m is unital: there is an element Id ∈ Matd(C) such that m(M, Id) = m(Id, M) = M for all M ∈ Matd(C)

We will define an algebra to be a complex vector space satisfying these properties:

Definition 6.1. An algebra is a vector space A equipped with a bilinear, associative, and unital multiplication map m : A × A → A.

We will write ab or a · b for m(a, b).

We observe that the unit element of A is unique. Indeed, if 1A and 1′A are both unit elements of A, then 1′A = 1A · 1′A = 1A.

Given an algebra A, we can define a map C → A via λ ↦ λ1A. This map is compatible with addition and multiplication (it is a ring homomorphism, if you have seen rings before).

Example 6.2. 1. Let A = C with the usual vector space structure and m given by the usual multiplication.

2. Let A = C⊕C, with multiplication given by

m((x1, y1), (x2, y2)) := (x1x2, y1y2)

3. Let A be the set of polynomials C[x], which is an infinite-dimensional vector space, with multiplication given by the multiplication of polynomials.

4. Let A = C ⊕ C, with multiplication given by

m((x1, y1), (x2, y2)) := (x1x2, x1y2 + x2y1)

Alternatively, we can view A as a 2-dimensional complex vector space with basis {1, x}, with multiplication given by 1² = 1, 1x = x1 = x, and x² = 0, extended bilinearly. The unit element is (1, 0).

5. Let V be a vector space and let A = Hom(V, V). Multiplication is given by composition of maps, i.e., fg := f ◦ g. If we pick a basis of V, A becomes isomorphic to Matd(C) for some d, and composition of maps becomes multiplication of matrices.

6. Suppose A and B are algebras. We can make the direct sum A ⊕ B into an algebra by defining multiplication by

m((a1, b1), (a2, b2)) = (a1a2, b1b2)

The unit element is (1A, 1B).


For us, the most important example will be the group algebra:

Let G be a finite group, and let C[G] denote the set of formal linear combinations of elements of G:

C[G] := {λ1[g1] + . . . + λs[gs] : λi ∈ C}

We define addition via

(λ1[g1] + . . . + λs[gs]) + (µ1[g1] + . . . + µs[gs]) := (λ1 + µ1)[g1] + . . . + (λs + µs)[gs]

and scaling via

µ · (λ1[g1] + . . . + λs[gs]) := µλ1[g1] + . . . + µλs[gs]

so C[G] is a vector space. As a vector space, it is the same as the vector space Vreg that we have seen while studying the regular representation.

However, C[G] also has a multiplication map, which comes from the multiplication map on G. More specifically, we define m([g], [h]) := [gh] and extend this to a bilinear map m : C[G] × C[G] → C[G]. Explicitly,

(λ1[g1] + . . . + λs[gs]) · (µ1[g1] + . . . + µs[gs]) := ∑_{gk} ( ∑_{gi, gj with gigj = gk} λiµj ) [gk]

This product is associative because multiplication in G is associative, and the unit element is [e].

The group algebra C[G] is commutative if and only if G is an abelian group.

Example 6.3. Let G = C2 = 〈g : g² = e〉. Then C[G] = {λ1[e] + λ2[g] : λi ∈ C} is a 2-dimensional vector space, with multiplication

(λ1[e] + λ2[g]) · (µ1[e] + µ2[g]) = (λ1µ1 + λ2µ2)[e] + (λ1µ2 + λ2µ1)[g]

Definition 6.4. Let A and B be algebras. A linear map f : A → B is an algebra homomorphism if f(1A) = 1B and f(a1a2) = f(a1)f(a2). An algebra homomorphism f is an algebra isomorphism if f is an isomorphism of vector spaces.

Example 6.5. Consider the map f : C[C2] → C ⊕ C defined by [e] ↦ (1, 1) and [g] ↦ (1, −1). This is clearly an isomorphism of vector spaces, since its image contains a basis of C ⊕ C and both vector spaces are 2-dimensional. To check that f is an algebra homomorphism, it is enough to check that if x, y ∈ C[C2] are basis elements, then f(xy) = f(x)f(y), since multiplication is bilinear on both sides. But

f([e][e]) = f([e]) = (1, 1) = (1, 1)(1, 1) = f([e])f([e])

f([e][g]) = f([g]) = (1,−1) = (1, 1)(1,−1) = f([e])f([g])

f([g][e]) = f([g]) = (1,−1) = (1,−1)(1, 1) = f([g])f([e])

f([g][g]) = f([e]) = (1, 1) = (1,−1)(1,−1) = f([g])f([g])
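A minimal sketch of this check in Python (nothing library-specific: elements of C[C2] are stored as coefficient pairs (λ1, λ2) with respect to the basis ([e], [g]), and elements of C ⊕ C as ordinary pairs):

def mult_group_algebra(a, b):
    # ([e], [g]) basis: [e][e] = [g][g] = [e] and [e][g] = [g][e] = [g]
    return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def mult_direct_sum(a, b):
    return (a[0] * b[0], a[1] * b[1])

def f(a):
    # f([e]) = (1, 1) and f([g]) = (1, -1), extended linearly
    return (a[0] + a[1], a[0] - a[1])

for x in [(1, 0), (0, 1), (2, 3)]:
    for y in [(1, 0), (0, 1), (-1, 5)]:
        assert f(mult_group_algebra(x, y)) == mult_direct_sum(f(x), f(y))
print("f respects multiplication on these test elements")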


Recall that our algebras are not necessarily commutative.

Definition 6.6. Let A be an algebra with multiplication map m. The opposite algebra Aop is the algebra with the same underlying vector space, and with multiplication given by

mop : Aop × Aop → Aop

(a, b) ↦ m(b, a)

The algebra axioms for A imply that Aop is also an algebra, and the unit element is the same. It is clear that mop is the same as m if and only if A is commutative.

Proposition 6.7. Let G be a finite group. Then C[G] ≅ C[G]op, with an isomorphism given by

I : C[G] → C[G]

[g] ↦ [g⁻¹]

Thus, even if A is non-commutative, it is possible for A and Aop to be abstractly isomorphic.

Proof. It is clear that I is an isomorphism of vector spaces, since it simply shuffles the given basis for C[G]. We need to check that I is an algebra homomorphism. For this, it suffices to check that mop(I([g]), I([h])) = I(m([g], [h])) for all g, h ∈ G. But

mop(I([g]), I([h])) = mop([g⁻¹], [h⁻¹]) = m([h⁻¹], [g⁻¹]) = [h⁻¹g⁻¹]

and

I(m([g], [h])) = I([gh]) = [(gh)⁻¹] = [h⁻¹g⁻¹]

so both sides are equal.

6.2 Modules

Now we will generalize the definition of a representation.

Definition 6.8. Let A be an algebra. A (left) A-module is a vector space M together with an algebra homomorphism ρ : A → Hom(M, M).

Equivalently, an A-module is a vector space M together with an action map A × M → M sending (a, m) ↦ a · m := ρ(a)(m) satisfying

1. a · (m+ n) = a ·m+ a · n

2. (a+ b) ·m = a ·m+ b ·m

3. (ab) ·m = a · (b ·m)


4. 1A ·m = m

Proposition 6.9. Let G be a finite group. A C[G]-module is the same thing as a representation of G.

Proof. We explain how to go between representations of G and C[G]-modules.

Suppose that ρ : C[G] → Hom(M, M) is the map making M into a C[G]-module. Then we can restrict ρ to G = {[g]}_{g∈G} ⊂ C[G] to get a function ρM : G → Hom(M, M). By the definition of an algebra homomorphism, ρ([g][h]) = ρ([g]) ◦ ρ([h]) for any g, h ∈ G. In particular, if we take h = g⁻¹, we see that

ρ([g]) ◦ ρ([g⁻¹]) = ρ([e]) = 1M

Thus, each linear map ρ([g]) : M → M has an inverse, namely ρ([g⁻¹]). Thus, ρM is a function G → GL(M), and we have seen that it is a group homomorphism. So (M, ρM) is a group representation.

On the other hand, suppose that ρ : G → GL(M) is a representation of G. We may extend ρ linearly to C[G] to get a map ρ : C[G] → Hom(M, M) given by

ρ(∑_g λg[g]) = ∑_g λg ρ(g)

This is a linear map, by construction, and it follows from the definition of multiplication in C[G] that ρ is an algebra homomorphism. Thus, M is a C[G]-module.

Example 6.10. Let A be the algebra C. Then an A-module is a vector space M and a homomorphism ρ : C → Hom(M, M). We must have ρ(1) = 1M, and by linearity, we must have ρ(λ) = λ1M for all λ ∈ C. Thus, there is a unique ρ making M into an A-module. So a C-module is the same thing as a vector space.

Example 6.11. Let A = C ⊕ C. Recall that we showed that A ≅ C[C2]. Let us classify 1-dimensional A-modules. In other words, if M is a 1-dimensional vector space, we need to classify algebra homomorphisms

ρ : C ⊕ C → Hom(M, M) = C

It suffices to specify ρ(1, 0) and ρ(0, 1), and we need these values to satisfy

(ρ(1, 0))² = ρ((1, 0)²) = ρ(1, 0)    (ρ(0, 1))² = ρ((0, 1)²) = ρ(0, 1)

ρ(1, 0)ρ(0, 1) = ρ((1, 0)(0, 1)) = ρ(0, 0) = 0    ρ(0, 1) + ρ(1, 0) = ρ((1, 1)) = ρ(1A) = 1

Since 0 and 1 are the only complex numbers satisfying x² = x, and we need the sum and product of ρ(1, 0) and ρ(0, 1) to be 1 and 0, respectively, we see that there are two solutions: ρ(1, 0) = 1 and ρ(0, 1) = 0, or ρ(0, 1) = 1 and ρ(1, 0) = 0. The fact that there are two solutions corresponds to the fact that C2 has two isomorphism classes of irreducible representations.


Example 6.12. Recall that for any finite group G, we defined the regular representation (Vreg, ρreg), and that the vector space Vreg is the same as the underlying vector space of C[G]. Viewing Vreg as a C[G]-module, the action map

C[G] × Vreg → Vreg

is the same as the multiplication map

C[G] × C[G] → C[G]

We can extend this example a bit: for any algebra A, we can construct an A-module with underlying vector space A by defining the structure map

ρ : A → Hom(A, A)

a ↦ ma

where ma : A → A denotes the “multiplication by a” map sending b ↦ ab. So A is acting on itself by left multiplication.

6.3 Module homomorphisms

Definition 6.13. Let A be an algebra and let M, N be A-modules. A homomorphism of A-modules (or an A-linear map) is a linear map f : M → N such that f(am) = af(m) for all a ∈ A, m ∈ M.

We write HomA(M, N) := {f : M → N | f is A-linear}. It is a subset of Hom(M, N), and it is easy to check that it is a vector subspace. If M = N, it is actually a subalgebra, because the composition of two A-linear maps is A-linear.

Now we check that if A is a group algebra, then homomorphisms of modules are the same as homomorphisms of representations.

Proposition 6.14. Let G be a finite group. Let A = C[G] and let M and N be A-modules, corresponding to representations (M, ρM) and (N, ρN). Then HomA(M, N) = Hom(M, N)^G.

Proof. Suppose f : M → N is A-linear. In particular, it is a linear transformation and we need to check that it is G-linear. But

f(ρM(g)m) = f([g] ·m) = [g] · f(m) = ρN(g)(f(m))

for all g ∈ G, m ∈M , so this follows.

Conversely, suppose f : M → N is G-linear. Then f is linear and

f((∑_{g∈G} λg[g])m) = f(∑_{g∈G} λg[g] · m) = f(∑_{g∈G} λg ρM(g)(m)) = ∑_{g∈G} λg ρN(g)(f(m)) = (∑_{g∈G} λg[g]) · f(m)

so f is A-linear.


6.4 Direct sums, submodules, and simple modules

Let A be an algebra and let M and N be A-modules. Then M ⊕ N is also an A-module if we define the action

A × (M ⊕ N) → M ⊕ N,   a · (m, n) = (a · m, a · n)

If A = C[G], then this corresponds to taking direct sums of representations.

There are inclusion maps iM : M → M ⊕ N, iN : N → M ⊕ N and projection maps pM : M ⊕ N → M, pN : M ⊕ N → N, and they are module homomorphisms.

Exercise 6.15. Let A be an algebra and let L,M,N be A-modules. Prove that HomA(L⊕M,N) ∼= HomA(L,N)⊕HomA(M,N) and HomA(L,M⊕N) ∼= HomA(L,M)⊕HomA(L,N).

We can also define submodules, which correspond to subrepresentations:

Definition 6.16. Let A be an algebra and let M be an A-module, with ρ : A → Hom(M, M) the structure map. A submodule of M is a vector subspace M′ ⊂ M such that ρ(a)(m′) ∈ M′ for all a ∈ A, m′ ∈ M′.

So a submodule is a vector subspace which is also stable under the action of A. The restricted action of A defines a structure map ρ′ : A → Hom(M′, M′).

Exercise 6.17. Let A be an algebra and let f : M → N be a homomorphism of A-modules. Then ker(f) ⊂ M and im(f) ⊂ N are submodules of M and N, respectively.

We claim that if A = C[G], then a submodule M′ ⊂ M is a subrepresentation of (M, ρM). Indeed,

ρM(g)(m′) = [g] · m′ ∈ M′

for all g ∈ G and m′ ∈ M′, so M′ is preserved by ρM(g) for all g ∈ G. Conversely, suppose M′ ⊂ M is a subrepresentation. Then

(∑_{g∈G} λg[g]) · m′ = ∑_{g∈G} λg ρM(g)(m′) ∈ M′

for all m′ ∈ M′ and all λg ∈ C. Since every element a ∈ A can be written as a linear combination of {[g]}_{g∈G}, this shows that M′ ⊂ M is a submodule.

Definition 6.18. Let A be an algebra. An A-module M is said to be simple if it contains no proper non-zero submodules.

Definition 6.19. Let A be an algebra and let M be an A-module. If m ∈ M, then the submodule generated by m is the set A · m := {a · m | a ∈ A}. It is a submodule because b · (a · m) = (ba) · m.


Lemma 6.20. Suppose M is a simple A-module and suppose m ∈ M is non-zero. Then A · m = M. Conversely, suppose that for every non-zero element m ∈ M, A · m = M. Then M is a simple A-module.

Proof. Since A · m ⊂ M is a submodule, it is either {0} or M. But 1A · m = m ≠ 0, so A · m ≠ {0}, so A · m = M.

Conversely, suppose we have a non-zero submodule M′ ⊂ M. Then for some non-zero m′ ∈ M′, A · m′ is a submodule of M′, not just M. Therefore, A · m′ = M ⊂ M′ and M′ = M.

Example 6.21. Suppose M is 1-dimensional. Then M is simple because there are no proper non-zero vector subspaces.

Example 6.22. Suppose A = C[G] for some finite group G. Then an A-module M is simple if and only if the representation (M, ρM) is irreducible.

Example 6.23. Let A = Matd(C) and let M = C^⊕d. We make M into an A-module via the usual action of matrices on column vectors. Thus, we get a structure map ρ : Matd(C) → Hom(C^⊕d, C^⊕d) by setting ρ(P)(v) := P · v (this is actually an isomorphism!).

Then M is a simple A-module. Indeed, suppose M′ ⊂ M is an A-submodule. If M′ ≠ {0}, then after changing basis, we may assume that M′ contains the first standard basis vector v1. For each v ∈ C^⊕d, there is a matrix Pv such that Pv · v1 = v. Thus, A · v1 = M ⊂ M′. So the only non-zero submodule of C^⊕d is C^⊕d itself.

Remark 6.24. Suppose A is an algebra which is finite-dimensional as a complex vector space, and suppose M is a simple A-module. Then for any non-zero m ∈ M, A · m = M, so the map

A → M

a ↦ a · m

is a surjective linear map of vector spaces. So M is also finite-dimensional over C.

Example 6.25. Here is an example of submodules that do not come from direct sums. Let A be the 2-dimensional algebra C[x]/x² with basis {1, x}, unit 1, and multiplication defined by x² = 0.

Let M be A, considered as a module over itself. Then A · x ⊂ M is a non-zero submodule. But

(λ + µx) · x = λx + µx² = λx

so as a vector subspace of M, A · x = 〈x〉.

If M = A·x ⊕ M′, then M′ ⊂ M must be a 1-dimensional subspace such that M′ ∩ A·x = {0}. In other words, as a vector space M′ = 〈λ + µx〉 with λ ≠ 0. But (λ′ + µ′x) · (λ + µx) = λ′λ + (λ′µ + λµ′)x, and this is not an element of 〈λ + µx〉 unless µ′ = 0. So there is no equivalent of Maschke’s theorem for modules over a general algebra.


We can classify all simple A-modules, though. Suppose M is a simple A-module. Since A is commutative, x · M := {x · m | m ∈ M} ⊂ M is a submodule of M (since a · (x · m) = x · (a · m)). Since M is simple, either x · M = {0} or x · M = M. If the latter holds, then we may conclude that

x² · M = x · (x · M) = x · M = M

But x² = 0, so x² · M = {0}, which would force M = {0}.

Thus, if M ≠ {0}, we must have x · m = 0 for all m ∈ M, and the action of A is given by (λ + µx) · m = λ · m. If dim M > 1, then M clearly has proper non-zero submodules. Thus, the simple A-modules are 1-dimensional vector spaces with x ∈ A acting as 0.
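A minimal sketch of this example in Python, writing an element λ + µx of A = C[x]/x² as the pair (λ, µ); it simply makes the two computations above concrete.

def mult(a, b):
    # (lam1 + mu1*x)(lam2 + mu2*x) = lam1*lam2 + (lam1*mu2 + lam2*mu1)*x, using x^2 = 0
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

x = (0, 1)
print(mult(x, x))        # (0, 0): x^2 = 0 in A

m = (1, 5)               # an arbitrary element lam + mu*x with lam != 0
print(mult(x, m))        # (0, 1): acting by x lands in <x>, so the span of m is not a submodule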

Now that we have defined submodules and simple modules, we can state a generalization of Schur’s lemma:

Theorem 6.26. 1. Let A be an algebra and let M, N be simple A-modules. If f : M → N is a homomorphism of A-modules, then either f = 0 or f is an isomorphism.

2. Suppose A is an algebra which is finite-dimensional as a complex vector space. Let M be a simple A-module. Then an A-linear map f : M → M is multiplication by a scalar λ ∈ C.

Proof. The proofs are virtually the same as for G-linear maps of irreducible representations:

1. ker(f) ⊂ M and im(f) ⊂ N are submodules of M and N respectively, and since M and N are assumed simple, they are both either 0 or all of M or N.

2. Since A is finite-dimensional, so is M. Then f has an eigenvalue λ ∈ C, so λ1M − f : M → M is an A-linear map which is not an isomorphism. Since M is assumed simple, part 1 implies λ1M − f = 0, which implies f = λ1M.

Remark 6.27. There was an early exercise to describe Hom(V^⊕r, V^⊕r)^G for an irreducible representation (V, ρV) of G. We can now generalize this to state that for a finite-dimensional algebra A and a simple A-module M,

HomA(M^⊕r, M^⊕r) ≅ Matr(C)

as algebras.

6.5 Semisimple modules and algebras

Definition 6.28. An A-module M is semisimple if it is isomorphic to the direct sum of simple A-modules.


Maschke’s theorem states that if A = C[G], then every A-module is semisimple. But if A = C[x]/x², then A is not semisimple as a module over itself.

Lemma 6.29. If M is a semisimple A-module, then its decomposition into simple modules is unique, up to reordering.

Proof. The proof is identical to the proof of Theorem 3.1: if M′ is a simple A-module, then dim HomA(M′, M) is the multiplicity of M′ in any decomposition of M.

Lemma 6.30. Let A be an algebra and let M be an A-module. Suppose that there is an isomorphism α : M → N1 ⊕ · · · ⊕ Nr, where the Ni are simple A-modules. If L ⊂ M is a submodule, then there is a subset I ⊂ {1, . . . , r} such that α⁻¹(⊕_{i∈I} Ni) is a complementary submodule to L.

Proof. Let I ⊂ {1, . . . , r} be a maximal subset such that α⁻¹(⊕_{i∈I} Ni) ∩ L = {0}. To prove the lemma, it suffices to show that α⁻¹(⊕_{i∈I} Ni) and L together span M. Let X denote their span.

If j ∈ I, then α⁻¹(Nj) ⊂ X, by construction. If j ∉ I, consider α⁻¹(Nj) ∩ X. Since Nj is simple, either α⁻¹(Nj) ∩ X = {0} or α⁻¹(Nj) ∩ X = α⁻¹(Nj). In the first case, one checks that α⁻¹(⊕_{i∈I∪{j}} Ni) ∩ L = {0}, contradicting the maximality of I. Therefore α⁻¹(Nj) ∩ X = α⁻¹(Nj) and α⁻¹(Nj) ⊂ X. Thus, X = M and we are done.

Proposition 6.31. Let A be an algebra and let M be a finite-dimensional semisimple A-module. Then any submodule M1 ⊂M is also semisimple.

Proof. We may write α : M → N1 ⊕ · · · ⊕ Nr (an isomorphism), where the Ni are simple A-modules. By Lemma 6.30, M1 has a complementary submodule M2 ⊂ M of the form M2 = α⁻¹(⊕_{i∈I} Ni) for some I ⊂ {1, . . . , r}. Composing α with the projection, we get a map β : M → ⊕_{i∉I} Ni with kernel M2. Since M1 ∩ M2 = {0} and M1 and M2 together span M, the restriction β|M1 : M1 → ⊕_{i∉I} Ni is an isomorphism, and M1 is semisimple.

Corollary 6.32. A finite-dimensional A-module M is semisimple if and only if every submodule L ⊂ M has a complementary submodule L′ ⊂ M.

Proof. We have seen that if M is semisimple, then every submodule L ⊂ M has a complementary submodule L′ ⊂ M.

To show the converse, we proceed by induction on dim M. If dim M = 1, then M is simple and therefore semisimple. Suppose that dim M = d and we know the corollary if dim M ≤ d − 1. If M is simple, we are done. If not, M has a proper non-zero submodule M1 ⊂ M; by passing to a smaller submodule of M1 if necessary, we may assume that M1 is simple. There is a complementary submodule M′ ⊂ M such that M1 ⊕ M′ ≅ M, and 0 < dim M′ < dim M.

We claim that every submodule L ⊂ M′ admits a complementary submodule L′ ⊂ M′. Granting this, the inductive hypothesis implies that M′ is the direct sum of simple A-modules. Since M ≅ M′ ⊕ M1, we are done.


Now we prove the claim. Recall that there are A-linear projection and inclusion maps p1 : M → M1 and i1 : M1 → M. Now we observe that L ⊕ M1 ⊂ M, so there is a complementary submodule N ⊂ M. We define L′ := {n − i1(p1(n)) : n ∈ N}, and we claim that L′ is a complementary submodule to L in M′.

We check first that L′ ⊂ M′ and L′ is a submodule. But p1(n − i1(p1(n))) = 0, so L′ ⊂ M′. Furthermore, L′ is by definition the image of the map 1M − i1 ◦ p1 : N → M′, and since 1M, i1, and p1 are A-linear, so is (1M − i1 ◦ p1)|N, and its image is an A-module.

We next check that L′ ∩ L = {0}. But if m ∈ L′ ∩ L, then m = n − i1(p1(n)) for some n ∈ N; then i1(p1(n)) ∈ M1 and n ∈ N is the sum of an element of L and an element of M1. Since N is the complement of L ⊕ M1, this implies that m = 0.

Finally, we check that the natural map L′ ⊕ L → M′ is an isomorphism. This map is injective, so it suffices to check that dim L′ + dim L = dim M′. Since dim N + dim L = dim M − dim M1 = dim M′, it further suffices to show that 1M − i1 ◦ p1 : N → M′ is injective, so that dim L′ = dim N. But if m = i1(p1(m)), then m ∈ M1. Since M1 ∩ N = {0}, the restriction of 1M − i1 ◦ p1 to N is injective and we are done.

Definition 6.33. Let A be an algebra. We say that A is semisimple if A is semisimple as a module over itself.

Example 6.34. 1. Let A = C[G]. Then as a module over itself, A ≅ Vreg, which decomposes as the direct sum of simple A-modules (corresponding to irreducible representations of G).

2. Let A = Matd(C). Then if we consider matrix multiplication, left multiplication by P ∈ A preserves columns of matrices. Thus, if we let Ni denote the submodule of matrices supported in the ith column, then A ≅ N1 ⊕ · · · ⊕ Nd as an A-module, and each Ni is simple (it is isomorphic to the module of column vectors).

3. On the other hand, A = C[x]/x² is not semisimple, because A · x ⊂ A has no complementary submodule.

4. If A and B are semisimple algebras, then so is A ⊕ B.

Proposition 6.35. Let A be a finite-dimensional semisimple algebra. Then every finite-dimensional A-module M is semisimple.

Proof. Let m1, . . . , mr be a basis for M. Then m1, . . . , mr generate M as an A-module, in the sense that the map

p : A^⊕r → M

(a1, . . . , ar) ↦ ∑_i ai · mi

is surjective. But A^⊕r is isomorphic to the direct sum of simple A-modules, so it is semisimple. Therefore, ker(p) ⊂ A^⊕r has a complementary submodule N ⊂ A^⊕r, which is also semisimple. But then the restriction p|N : N → M is an isomorphism, and thus M is semisimple.


Theorem 6.36. Let A be a finite-dimensional semisimple algebra. Then there are finitely many isomorphism classes of simple A-modules. If M1, . . . , Mr is a complete list of non-isomorphic simple A-modules, then there is an isomorphism of A-modules A ≅ ⊕_i Mi^{⊕ dim Mi}.

Proof. The proof is virtually identical to the decomposition of the regular representation (which is the case A = C[G] for some finite group G). One checks that if A ≅ ⊕_i Mi^{⊕ di}, then di = dim HomA(A, Mi) = dim Mi.

The semisimple algebra Matd(C) is particularly simple. As a module over itself, it is the direct sum of copies of C^⊕d (viewed as the space of column vectors). Since every simple Matd(C)-module appears as a summand of Matd(C), V := C^⊕d is the only simple Matd(C)-module (up to isomorphism). In addition, we can define an equivalence between finite-dimensional complex vector spaces and finite-dimensional Matd(C)-modules.

If M is a finite-dimensional Matd(C)-module, it is semisimple and therefore isomorphic (as a module) to V^⊕s for some s. Then HomA(V, M) ≅ HomA(V, V)^⊕s ≅ C^⊕s.

On the other hand, if W is a finite-dimensional complex vector space, we can make the tensor product V ⊗ W into a Matd(C)-module by setting a · (v ⊗ w) = (a · v) ⊗ w.

This is an example of a concept called Morita equivalence.

Our next goal is to prove that every finite-dimensional semisimple algebra A is isomorphic to a direct sum of matrix algebras.

Lemma 6.37. For any algebra A, Aop ∼= HomA(A,A).

Proof. If we view A as a left A-module over itself, right multiplication by a ∈ A is an A-linear map

ma : A → A,   b ↦ ba

Moreover, every A-linear map A → A arises in this way: given f : A → A which is A-linear,

f(b) = f(b · 1A) = b · f(1A)

so f is uniquely determined by its evaluation at 1A.

Thus, we have constructed an isomorphism of vector spaces Aop → HomA(A, A), a ↦ ma. We need to check that it is an algebra homomorphism. But mop(a, a′) = a′a is sent to m_{a′a} : b ↦ ba′a, and m_{a′a} = ma ◦ m_{a′}, as desired.

Lemma 6.38. Let V be a finite-dimensional complex vector space. Then Hom(V, V)op is naturally isomorphic to Hom(V*, V*).


Proof. Given f ∈ Hom(V, V), recall that we have defined f* ∈ Hom(V*, V*) via f*(g) := g ◦ f, where g ∈ V*. Then f ↦ f* is an isomorphism of vector spaces, and since (f1 ◦ f2)*(g) = g ◦ (f1 ◦ f2) = f2*(f1*(g)), we have (f1 ◦ f2)* = f2* ◦ f1*, so it is an isomorphism of algebras.

In particular, this lemma implies that the opposite of a matrix algebra is a matrix algebra, though an isomorphism between the two depends on a choice of basis.

Theorem 6.39 (Artin–Wedderburn). Let A be a finite-dimensional semisimple algebra. Then A is isomorphic to a direct sum of matrix algebras.

Proof. Let M1, . . . , Mr be the distinct simple A-modules. Then we have seen that A ≅ ⊕_i Mi^{⊕ dim Mi}. But then

Aop ≅ HomA(A, A) ≅ ⊕_{i,j} HomA(Mi^{⊕ dim Mi}, Mj^{⊕ dim Mj}) ≅ ⊕_i HomA(Mi^{⊕ dim Mi}, Mi^{⊕ dim Mi}) ≅ ⊕_i Mat_{dim Mi}(C)

Since the opposite of a matrix algebra is a matrix algebra, this shows that A is isomorphic to the direct sum of matrix algebras.

In particular, group algebras are direct sums of matrix algebras. But our construction depended on a choice of decomposition of A into simple modules.

We can actually be a bit more precise. Let M1, . . . , Mr be a complete list of non-isomorphic simple A-modules, and let ρi : A → Hom(Mi, Mi) be the structure map.

Corollary 6.40. Let ρ := ⊕_i ρi : A → ⊕_i Hom(Mi, Mi) be the direct sum of the ρi. This is an isomorphism of algebras.

Proof. Both sides have the same dimension, namely ∑_i (dim Mi)². Thus, to show it is an algebra isomorphism, it suffices to show it is injective. If ρ(a) = 0 for a ∈ A, then ρi(a) = 0 for all i. But since A is isomorphic (as a module) to a direct sum of copies of the Mi, this implies that multiplication by a is the zero map on A. In particular, a = a · 1A = 0.

Example 6.41. Consider A = C[C2]. We know there are two simple A-modules up to isomorphism, both 1-dimensional, corresponding to the two irreducible representations of C2. If M1 corresponds to the trivial representation and M2 corresponds to the non-trivial representation, then ρ1 : C[C2] → Hom(M1, M1) ≅ C is given by sending [g] ↦ 1 and ρ2 : C[C2] → Hom(M2, M2) ≅ C is given by sending [g] ↦ −1. The direct sum of these two maps is

ρ : C[C2] → C ⊕ C

[e] ↦ (1, 1)

[g] ↦ (1, −1)

which is exactly the isomorphism we wrote down at the beginning.


Example 6.42. Let A = C[S3]. There are three irreducible representations of S3: the trivial representation Vtriv, the sign representation Vsign, and a 2-dimensional representation W. If we choose an appropriate basis for W, the representation is given by

(1 2 3) ↦ ( ω 0 ; 0 ω⁻¹ )    (2 3) ↦ ( 0 1 ; 1 0 )

where ω = e^{2πi/3}.

So the map ρ : C[S3] → C ⊕ C ⊕ Mat2(C) is given by

(1 2 3) ↦ (1, 1, ( ω 0 ; 0 ω⁻¹ ))

(2 3) ↦ (1, −1, ( 0 1 ; 1 0 ))
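A minimal numerical sanity check of the 2-dimensional factor (a sketch, assuming numpy; a and b below are the matrices assigned to (1 2 3) and (2 3)): the matrices satisfy the defining relations of S3, and their traces recover the character values −1 and 0.

import numpy as np

w = np.exp(2j * np.pi / 3)                              # a primitive cube root of unity
a = np.diag([w, 1 / w])                                 # image of (1 2 3)
b = np.array([[0, 1], [1, 0]], dtype=complex)           # image of (2 3)

I2 = np.eye(2)
print(np.allclose(np.linalg.matrix_power(a, 3), I2))    # a^3 = e
print(np.allclose(b @ b, I2))                           # b^2 = e
print(np.allclose(b @ a @ b, np.linalg.inv(a)))         # b a b = a^{-1}
print(np.round(np.trace(a), 6), np.trace(b))            # approximately -1 and 0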

6.6 The center of an algebra

Definition 6.43. Let A be an algebra. The center of A is defined to be

Z(A) := {z ∈ A : az = za for all a ∈ A}

The center is a commutative subalgebra of A.

Proposition 6.44. Let V be a finite-dimensional vector space and let A := Hom(V, V). Then Z(A) = {λ1V : λ ∈ C}.

Proof. If we choose a basis for V, A becomes a matrix algebra. Let eij be the matrix with a 1 in the (i, j) spot and 0s elsewhere. By considering the matrices which commute with all of the eij, the result follows.

Corollary 6.45. Let A be a finite-dimensional semisimple algebra. Then Z(A) is isomorphic as an algebra to C^⊕r, where r is the number of isomorphism classes of simple A-modules.

Proof. We have seen that A is isomorphic to ⊕_i Hom(Mi, Mi), where M1, . . . , Mr is a complete list of non-isomorphic simple A-modules. It is clear that for any algebras B, B′, Z(B ⊕ B′) ≅ Z(B) ⊕ Z(B′), so Z(A) ≅ C^⊕r.

Now we return to the setting of group algebras. If A = C[G] for a finite group G, this corollary shows that Z(A) ≅ C^⊕r, where r is the number of isomorphism classes of irreducible representations of G.

Proposition 6.46. Let A = C[G]. Then Z(A) is spanned by the elements zC(g) := ∑_{h∈C(g)} [h], where C(g) denotes the conjugacy class of g.


Proof. We may write an element a ∈ C[G] as a = ∑_{g∈G} λg[g]. Then a ∈ Z(A) if and only if [g′]a = a[g′] for all g′ ∈ G. But this holds if and only if

∑_{g∈G} λg[g] = ∑_{g∈G} λg[g′gg′⁻¹],  i.e., λg = λ_{g′⁻¹gg′} for all g, g′ ∈ G

But this is equivalent to

a = λ_{g1} zC(g1) + . . . + λ_{gr} zC(gr)

where g1, . . . , gr are representatives of the conjugacy classes of G.

But now we have shown that dim Z(C[G]) is equal to the number of conjugacy classes of G. We can finally prove that character tables are square:

Corollary 6.47. Let G be a finite group. Then the number of isomorphism classes of irreducible representations of G is equal to the number of conjugacy classes of G.

Proof. Both of these numbers are equal to the dimension of Z(C[G]):

We have seen that C[G] ≅ ⊕_{i=1}^r Hom(Mi, Mi), where the Mi run over the distinct irreducible representations, so dim Z(C[G]) = r.

On the other hand, we have proved that the elements zC(g) span Z(C[G]), and they are clearly linearly independent (their supports are disjoint), so dim Z(C[G]) is equal to the number of conjugacy classes of G.
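A minimal sketch in Python of the class-sum computation for G = S3 (permutations are stored as tuples of images, elements of C[S3] as dictionaries of coefficients; the class sum of the 3-cycles is checked to commute with every basis element of C[S3]):

from itertools import permutations
from collections import defaultdict

def compose(p, q):
    # (p * q)(i) = p(q(i)), with permutations stored as tuples of images of 1, 2, 3
    return tuple(p[q[i] - 1] for i in range(3))

def multiply(x, y):
    # convolution product in C[S3], with elements stored as {group element: coefficient}
    out = defaultdict(complex)
    for g, a in x.items():
        for h, b in y.items():
            out[compose(g, h)] += a * b
    return dict(out)

G = list(permutations((1, 2, 3)))
z = {(2, 3, 1): 1 + 0j, (3, 1, 2): 1 + 0j}    # class sum of the 3-cycles (1 2 3) and (1 3 2)
print(all(multiply({g: 1 + 0j}, z) == multiply(z, {g: 1 + 0j}) for g in G))   # True: z is central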

7 Burnside’s Theorem

This material is not examinable. We would like to finally apply representation theory to prove something about finite groups.

Theorem 7.1 (Burnside). Let G be a finite group of size p^a q^b, where p, q are distinct primes and a, b ∈ N with a + b ≥ 2. Then G is not simple.

Before we start proving this, we make some definitions about algebraic numbers.

Definition 7.2. Let α ∈ C. We say that α is an algebraic number if there is a non-zero polynomial p(x) ∈ Q[x] with rational coefficients such that p(α) = 0. A polynomial p(x) is said to be monic if p(x) = x^n + a_{n−1}x^{n−1} + . . . + a0. We say that α ∈ C is an algebraic integer if there is a monic polynomial p(x) ∈ Z[x] with integer coefficients such that p(α) = 0.

If α ∈ C is an algebraic number, there is a monic polynomial p(x) ∈ Q[x] of minimal degree having α as a root; p(x) is unique and irreducible, and we call it the minimal polynomial of α. If α ∈ C is an algebraic integer, its minimal polynomial has coefficients in Z. The roots of p(x) are called the conjugates of α.


Example 7.3. 1. The minimal polynomial of i is x² + 1, so i is actually an algebraic integer. The only other conjugate of i is −i.

2. More generally, if ζ is an nth root of 1, then ζ is a root of x^n − 1 = 0, so ζ is an algebraic integer. The minimal polynomial of ζ divides x^n − 1, so all of the conjugates of ζ are also nth roots of 1 (but not all nth roots of 1 are conjugates of ζ! The conjugates are the other roots of the minimal polynomial).

3. If n ∈ Z with |n| > 1, then 1/n is an algebraic number (because it is a root of nx − 1), but not an algebraic integer. In fact, if α = m/n ∈ Q with m and n coprime and n > 1, then α is a root of the polynomial nx − m, so x − m/n is its minimal polynomial. Since it doesn’t have integer coefficients, α is not an algebraic integer.

4. If α = (1 + √5)/2, then α is a root of x² − x − 1, so it is an algebraic integer (see the sketch below).
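A minimal sketch assuming sympy is available (minimal_polynomial computes the minimal polynomial over Q, so it detects which of these algebraic numbers are algebraic integers):

from sympy import I, Rational, Symbol, minimal_polynomial, sqrt

x = Symbol('x')
print(minimal_polynomial(I, x))                   # x**2 + 1: monic integer coefficients, so i is an algebraic integer
print(minimal_polynomial(Rational(1, 3), x))      # x - 1/3: not integer coefficients, so 1/3 is not
print(minimal_polynomial((1 + sqrt(5)) / 2, x))   # x**2 - x - 1: so (1 + sqrt(5))/2 is an algebraic integer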

We will need a couple of facts about algebraic numbers:

• If α, β ∈ C are both algebraic numbers, then αβ and α + β are also algebraic numbers. Similarly, if α, β ∈ C are both algebraic integers, then so are αβ and α + β.

• If α and β are algebraic numbers, then the conjugates of α + β are of the form α′ + β′, where α′ and β′ are conjugates of α and β, respectively. If r ∈ Q, then the conjugates of rα are of the form rα′, where α′ is a conjugate of α.

Lemma 7.4. Let G be a finite group and let χ : G → C be a character of a representation of G. Then

1. For every g ∈ G, χ(g) is an algebraic integer.

2. If 0 < |χ(g)/χ(e)| < 1, then χ(g)/χ(e) is not an algebraic integer.

3. If χ is an irreducible character and g ∈ G, then |C(g)| χ(g)/χ(e) is an algebraic integer.

Proof. 1. Recall that χ(g) is a sum of roots of 1. Indeed, if (V, ρV) is a representation such that χ = χV, then for any g ∈ G, we can choose a basis of V so that ρV(g) = (λij)i,j is diagonal (this follows by restricting ρV to the abelian group 〈g〉 ⊂ G, and using the corollary to Schur’s lemma that representations of abelian groups are direct sums of 1-dimensional representations). Since g has finite order, so does ρV(g), and so there is some n ≥ 1 such that λii^n = 1 for all i. Therefore, χ(g) is the sum of algebraic integers, so is an algebraic integer itself.

2. Write χ(g) = ζ1 + . . . + ζd with the ζi roots of 1 and d = χ(e), and suppose for contradiction that χ(g)/χ(e) is an algebraic integer. Its conjugates all have the form (ζ′1 + . . . + ζ′d)/d, which also satisfies |(ζ′1 + . . . + ζ′d)/d| ≤ 1. If p(x) = x^n + . . . + a0 ∈ Z[x] is the minimal polynomial of χ(g)/χ(e), then ±a0 is the product of χ(g)/χ(e) and all of its conjugates, and so |a0| < 1. But we assumed that p(x) has integer coefficients, so a0 = 0. Moreover, since p(x) is a minimal polynomial, it is irreducible, so p(x) = x, and therefore χ(g)/χ(e) = 0, contradicting our assumption that 0 < |χ(g)/χ(e)|.


3. Let (V, ρV) be the irreducible representation such that χ = χV. Let z = ∑_{h∈C(g)} [h] ∈ C[G], so that z ∈ Z(C[G]) by Proposition 6.46. It follows that multiplication by z defines a module homomorphism mz : V → V; by Schur’s lemma this map is multiplication by a scalar λz ∈ C.

Since mz : V → V is a linear transformation, it has a trace. On the one hand, the trace is λz dim V = λz χ(e). On the other hand, mz = ∑_{h∈C(g)} ρV(h), so Tr(mz) = ∑_{h∈C(g)} χV(h) = |C(g)| χV(g), and λz = |C(g)| χ(g)/χ(e).

We claim that λz is an algebraic integer. Indeed, we can also view multiplication by z as a module homomorphism mz : C[G] → C[G] from C[G] to itself; since V is isomorphic to a submodule of C[G], λz is an eigenvalue of this map. But with respect to the standard basis of C[G] (namely {[g]}_{g∈G}), mz has integer coefficients. Therefore, its characteristic polynomial is monic with integer coefficients, and since the characteristic polynomial of a matrix kills the eigenvalues of that matrix, λz is the root of a monic polynomial with integer coefficients, and so is an algebraic integer.

Corollary 7.5. Let (V, ρV ) be an irreducible representation of G. Then dimV divides |G|.

Proof. Write χ := χV. We want to show that |G|/χ(e) is an integer; it is a rational number, and a rational number which is an algebraic integer is an integer (Example 7.3), so it suffices to prove it is an algebraic integer. By row orthogonality,

∑_{g∈G} χ(g) \overline{χ(g)} = |G|

so

∑_{g∈G} (χ(g)/χ(e)) \overline{χ(g)} = |G|/χ(e)

Grouping the terms by conjugacy class, the left side is a sum of terms of the form (|C(g)| χ(g)/χ(e)) · \overline{χ(g)}; since these are algebraic integers (note that \overline{χ(g)} = χ(g⁻¹) is itself a character value, hence an algebraic integer), so is |G|/χ(e).

Theorem 7.6 (Burnside). Let G be a finite group which has a conjugacy class of size p^s, where p is a prime and s ≥ 1. Then G is not simple.

Proof. Choose some g ∈ G with |C(g)| = p^s. Since p^s > 1, G is not abelian and g ≠ e. Let χ1, . . . , χr be the irreducible characters of G (with χ1 the trivial character).

The column orthogonality relations (applied to e and g) imply that

1 + ∑_{i=2}^r χi(g)χi(e) = 0

so

∑_{i=2}^r χi(g)χi(e)/p = −1/p


Since −1/p is not an algebraic integer, there is some i ≥ 2 such that χi(g)χi(e)/p is not an algebraic integer. In particular χi(g) ≠ 0, and since χi(g) is an algebraic integer, χi(e)/p cannot be an integer, i.e. p ∤ χi(e).

Since χi(e) and |C(g)| are relatively prime, Bezout’s identity implies that there are integers a, b such that aχi(e) + b|C(g)| = 1. Therefore,

aχi(g) + b|C(g)|χi(g)/χi(e) = χi(g)/χi(e)

The left side is an algebraic integer (using Lemma 7.4), so the right side must be, as well. Since 0 < |χi(g)/χi(e)| ≤ 1, Lemma 7.4 then forces |χi(g)| = χi(e), and so if (Vi, ρi) is the irreducible representation corresponding to χi, then ρi(g) = λ · 1_{Vi} for some λ ∈ C (the eigenvalues of ρi(g) are roots of unity whose sum has absolute value equal to their number, so they must all be equal).

Now let H = {h ∈ G : ρi(h) = λ · ρi(e) for some λ ∈ C, λ ≠ 0}. Then H ⊂ G is a normal subgroup, and since g ∈ H and g ≠ e, H ≠ {e}.

Assume G is simple. Then H = G and for every h ∈ G, ρi(h) is multiplication by a scalar. Since Vi is irreducible, this implies that dim Vi = 1. Now let H′ ⊂ H = G be the kernel of ρi. Since χi ≠ χ1, Vi is not the trivial representation and so H′ ≠ G. Since G is simple, H′ = {e} and ρi is a faithful representation. But an injective homomorphism ρi : G → GL1(C) to an abelian group implies that G is abelian, contradicting the assumption that it has a conjugacy class with size bigger than 1. We conclude that G is not simple.

Now we can prove Burnside’s p^a q^b theorem.

Proof. Recall first that if |G| = p^a with a ≥ 2, then G is not simple. Indeed, if G is abelian, the result is clear. If not, consider the center Z(G) ⊂ G, which is a normal subgroup; we claim it is non-trivial. The sizes of the conjugacy classes of G add up to |G| and are powers of p (since for any g ∈ G, the centralizer of g, ZG(g) := {h ∈ G : hg = gh} ⊂ G, is a subgroup and |C(g)| · |ZG(g)| = |G|); since {e} is a conjugacy class of size 1, and the sum of the sizes of the conjugacy classes is divisible by p, there must be at least p conjugacy classes of size 1. Thus, |Z(G)| ≥ p > 1. If we let H ⊂ Z(G) be a cyclic subgroup of order p, then {e} ⊊ H ⊊ G is a normal subgroup, and so G is not simple.

Now we consider the general case; by the previous paragraph (applied to p or to q), we may assume that a, b ≥ 1. One of Sylow’s theorems implies that there is a subgroup P ⊂ G with |P| = p^a. Choose g ∈ Z(P); since Z(P) ≠ {e} (the class equation argument of the previous paragraph applies to any non-trivial p-group), we may choose g ≠ e. Then |C(g)| · |ZG(g)| = |G|. Since P ⊂ ZG(g), p^a | |ZG(g)| and so |C(g)| = q^r for some r ≥ 0.

If r = 0, then g ∈ Z(G), so Z(G) ≠ {e} and we can find a normal subgroup {e} ⊊ H ⊊ G with H ⊂ Z(G), as above.

If r ≥ 1, Theorem 7.6 implies that G is not simple.
