
Andreas Bernhard Zeidler

Abstract Algebra

Rings, Modules, Polynomials, Ring Extensions,

Categorical and Commutative Algebra

February 15, 2012 (488 pages)

If you have read this text I would like to invite you to contribute to it: Comments, corrections and suggestions are very much appreciated, at

[email protected], or visit my homepage at

www.mathematik.uni-tuebingen.de/ab/algebra/index.html

This book is dedicated to the entire mathematical society. To all those who contribute to mathematics and keep it alive by teaching it.


Contents

0 Prelude 5

0.1 About this Book . . . . . . . . . . . . . . . . . . . . . . . . . 5

0.2 Notation and Symbols . . . . . . . . . . . . . . . . . . . . . . 9

0.3 Mathematicians at a Glance . . . . . . . . . . . . . . . . . . . 15

1 Groups and Rings 19

1.1 Defining Groups . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.2 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

1.3 Defining Rings . . . . . . . . . . . . . . . . . . . . . . . . . . 32

1.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

1.5 First Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . 46

1.6 Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

1.7 Homomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . 65

1.8 Ordered Rings . . . . . . . . . . . . . . . . . . . . . . . . . . 72

1.9 Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

2 Commutative Rings 83

2.1 Maximal Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . 83

2.2 Prime Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

2.3 Radical Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

2.4 Noetherian Rings . . . . . . . . . . . . . . . . . . . . . . . . . 97

2.5 Unique Factorisation Domains . . . . . . . . . . . . . . . . . . 103

2.6 Principal Ideal Domains . . . . . . . . . . . . . . . . . . . . . 114

2.7 Lasker-Noether Decomposition . . . . . . . . . . . . . . . . . 124

2.8 Finite Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

2.9 Localisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

2.10 Local Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

2.11 Dedekind Domains . . . . . . . . . . . . . . . . . . . . . . . . 153

3 Modules 159

3.1 Defining Modules . . . . . . . . . . . . . . . . . . . . . . . . . 159

3.2 First Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . 170

3.3 Direct Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

3.4 Ideal-induced Modules . . . . . . . . . . . . . . . . . . . . . . 182

3.5 Block Decompositions . . . . . . . . . . . . . . . . . . . . . . 186

3.6 Dependence Relations . . . . . . . . . . . . . . . . . . . . . . 187

3.7 Linear Dependence . . . . . . . . . . . . . . . . . . . . . . . . 191

3.8 Homomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . 197

3.9 Isomorphism Theorems . . . . . . . . . . . . . . . . . . . . . 203

3.10 Rank of Modules . . . . . . . . . . . . . . . . . . . . . . . . . 207


3.11 Noetherian Modules . . . . . . . . . . . . . . . . . . . . . . . 213

3.12 Localisation of Modules . . . . . . . . . . . . . . . . . . . . . 221

4 Linear Algebra 224

4.1 Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . 224

4.2 Elementary Matrices . . . . . . . . . . . . . . . . . . . . . . . 232

4.3 Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . 238

4.4 Multilinear Forms . . . . . . . . . . . . . . . . . . . . . . . . 252

4.5 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . 252

4.6 Rank of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 252

4.7 Canonical Forms . . . . . . . . . . . . . . . . . . . . . . . . . 252

5 Spectral Theory 253

5.1 Metric Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 253

5.2 Banach Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 253

5.3 Operator Algebras . . . . . . . . . . . . . . . . . . . . . . . . 253

5.4 Diagonalisation . . . . . . . . . . . . . . . . . . . . . . . . . . 253

5.5 Spectral Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 253

6 Structure Theorems 254

6.1 Associated Primes . . . . . . . . . . . . . . . . . . . . . . . . 254

6.2 Primary Decomposition . . . . . . . . . . . . . . . . . . . . . 254

6.3 The Theorem of Prüfer . . . . . . . . . . . . . . . . . . . . . . 254

6.4 Length of Modules . . . . . . . . . . . . . . . . . . . . . . . . 254

7 Polynomial Rings 255

7.1 Monomial Orders . . . . . . . . . . . . . . . . . . . . . . . . . 255

7.2 Graded Algebras . . . . . . . . . . . . . . . . . . . . . . . . . 261

7.3 Defining Polynomials . . . . . . . . . . . . . . . . . . . . . . . 268

7.4 The Standard Cases . . . . . . . . . . . . . . . . . . . . . . . 268

7.5 Roots of Polynomials . . . . . . . . . . . . . . . . . . . . . . . 268

7.6 Derivation of Polynomials . . . . . . . . . . . . . . . . . . . . 268

7.7 Algebraic Properties . . . . . . . . . . . . . . . . . . . . . . . 268

7.8 Gröbner Bases . . . . . . . . . . . . . . . . . . . . . . . . . . 268

7.9 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268

8 Polynomials in One Variable 269

8.1 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

8.2 Irreducibility Tests . . . . . . . . . . . . . . . . . . . . . . . . 269

8.3 Symmetric Polynomials . . . . . . . . . . . . . . . . . . . . . 269

8.4 The Resultant . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

8.5 The Discriminant . . . . . . . . . . . . . . . . . . . . . . . . . 269

8.6 Polynomials of Low Degree . . . . . . . . . . . . . . . . . . . 269

8.7 Polynomials of High Degree . . . . . . . . . . . . . . . . . . . 269

8.8 Fundamental Theorem of Algebra . . . . . . . . . . . . . . . . 269

9 Group Theory 270


10 Multilinear Algebra 276

10.1 Multilinear Maps . . . . . . . . . . . . . . . . . . . . . . . . . 276

10.2 Duality Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 276

10.3 Tensor Product of Modules . . . . . . . . . . . . . . . . . . . 276

10.4 Tensor Product of Algebras . . . . . . . . . . . . . . . . . . . 276

10.5 Tensor Product of Maps . . . . . . . . . . . . . . . . . . . . . 276

10.6 Differentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276

11 Categorial Algebra 277

11.1 Sets and Classes . . . . . . . . . . . . . . . . . . . . . . . . . 277

11.2 Categories and Functors . . . . . . . . . . . . . . . . . . . . . 277

11.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278

11.4 Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278

11.5 Abelian Categories . . . . . . . . . . . . . . . . . . . . . . . . 278

11.6 Homology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278

12 Ring Extensions 279

13 Galois Theory 280

14 Graded Rings 281

15 Valuations 282

16 Proofs - Fundamentals 284

17 Proofs - Rings and Modules 306

18 Proofs - Commutative Algebra 413


Chapter 0

Prelude

0.1 About this Book

The Aim of this Book

Mathematics knows two directions - analysis and algebra - and any mathematical discipline can be weighed by how analytical resp. algebraical it is. Analysis is characterized by having a notion of convergence that allows one to approximate solutions (and reach them in the limit). Algebra is characterized by having no convergence and hence allowing finite computations only. This book is meant to be a thorough introduction to algebra.

Likewise every textbook on mathematics is drawn between two pairs of extremes: easy understandability versus great generality, and completeness versus a clear line of thought. Between these contrary poles we usually chose generality over understandability and completeness over a clear common thread. Nevertheless we try to reach understandability by being very precise and accurate and by including many remarks and examples.

Lastly, some personal philosophy: a perfect proof is like a perfect gem - unbreakably hard, spotlessly clear, flawlessly cut and beautifully displayed. In this book we are trying to collect such gemstones. And we are proud to claim that we are honest about where a proof is due and that we present complete proofs of almost every claim contained herein (which makes this textbook very different from most others).

This Book is Written for

many different kinds of mathematicians: primarily it is meant to be a source of reference for intermediate to advanced students who have already had a first contact with algebra and now closely examine some topic for their seminars, lectures or own thesis. But because of its great generality and completeness it is also suited as an encyclopaedia for professors who prepare their lectures and for researchers who need to estimate how far a certain method carries. Frankly, this book is not perfectly suited to be a monograph for novices to mathematics. So if you are one, we think you can still greatly profit from this book, but you will probably have to consult additional monographs (at a more introductory level) to help you understand this text.


Prerequisites

We take for granted that the reader is familiar with the basic notions of naive logic (statements, implication, proof by contradiction, usage of quantifiers, . . . ) and naive set theory (Cantor's notion of a set, functions, partially ordered sets, equivalence relations, Zorn's Lemma, . . . ). We will present a short introduction to classes and the NBG axioms when it comes to category theory. Further we require some basic knowledge of the integers (including proofs by induction) and how to express them in decimal numbers. We will sometimes use the field of real numbers, as they are most probably well-known to the reader, but they are not required to understand this text. Aside from these prerequisites we will start from scratch.

Topics Covered

We start by introducing groups and rings, immediately specializing on rings. Of general ring theory we will introduce the basic notions only, e.g. the isomorphism theorems. Then we will turn our attention to commutative rings, which will be the first major topic of this book: we closely study maximal ideals, prime ideals, intersections of such (radical ideals) and the relations to localisation. Further we will study rings with chain conditions (noetherian and artinian rings) including the Lasker-Noether theorem. This will lead to standard topics like the fundamental theorem of arithmetic. And we conclude commutative ring theory by studying discrete valuation rings, Dedekind domains and Krull rings.

Then we will turn our attention to modules, including rank, dimension and length. We will see that modules are a natural and powerful generalisation of ideals and that large parts of ring theory generalise to this setting, e.g. localisation and primary decomposition. Module theory naturally leads to linear algebra, i.e. the theory of matrix representations of a homomorphism of modules. Applying the structure theorems of modules (the theorem of Prüfer, to be precise) we will treat canonical form theory (e.g. the Jordan normal form).

Next we will study polynomials from the top down: that is, we introduce general polynomial rings (also known as group algebras) and graded algebras. Only then will we regard the more classical problems of polynomials in one variable and their solvability. Finally we will regard polynomials in several variables again. Using Gröbner bases it is possible to solve abstract algebraic questions by purely computational means.

Then we will return to group theory: most textbooks begin with this topic, but we chose not to. Even though group theory seems to be elementary and fundamental, this is not quite true. In fact it heavily relies on arguments like divisibility and prime decomposition in the integers, topics that are native to ring theory. And commutative groups are best treated from the point of view of module theory. Nevertheless you might as well skip the previous sections and start with group theory right away. We will present the standard topics: the isomorphism theorems, group actions (including the formula of Burnside), the theorems of Sylow and lastly the p-q-theorem. However we are aiming directly for the representation theory of finite groups.

The first part is concluded by presenting a thorough introduction to what is called multi-linear algebra. We will study dual pairings, tensor products of modules (and algebras) over a commutative base ring, derivations and the module of differentials.


Thus we have gathered a whole bunch of separate theories - and it is time for a second structurisation (the first structurisation being algebra itself). We will introduce the notions of categories, functors, equivalence of categories, (co-)products and so on. Categories are merely a manner of speaking - nothing that can be done with category theory could not have been achieved without it. Yet the language of categories presents a unifying concept for all the different branches of mathematics, extending far beyond algebra. So we will first recollect which part of the theory we have established is what in the categorical setting. The categorial language is the right setting to treat chain complexes, especially exact sequences of modules. Finally we will present the basics of abelian categories as a unifying concept for all those separate theories.

We will then aim for some more specialised topics: at first we will study ring extensions and the dimension theory of commutative rings. A special case are field extensions, including the beautiful topic of Galois theory. Afterwards we turn our attention to filtrations, completions, zeta-functions and the Hilbert-Samuel polynomial. Finally we will venture deeper into number theory: studying the theory of valuations up to the theorem of Riemann-Roch for number fields.

Topics not Covered

There are many instances where, dropping a finiteness condition, one has to introduce some topology in order to pursue the theory further. Examples are: linear algebra on infinite dimensional vector-spaces, representation theory of infinite groups and Galois theory of infinite field extensions. Another natural extension would be to introduce the Zariski topology on the spectrum of a ring, which would lead directly to the theory of schemes. Yet the scope of this text is purely algebraic and hence we will usually stop at the point where topology sets in (but give hints for further reading).

The Two Parts

Mathematics has a peculiarity to it: there are problems (and answers) that are easy to understand but hard to prove. The most famous example is Fermat's Last Theorem - the statement (for any n ≥ 3 there are no non-trivial integers (a, b, c) ∈ Z^3 that satisfy the equation a^n + b^n = c^n) can be understood by anyone. Yet the proof is extremely hard to provide. Of course this theorem has no value of its own (it is the proof that contains deep insights into the structure of mathematics), but this is no general rule. E.g. the theorem of Wedderburn (every finite skew-field is a field) is easy and useful, but its proof is performed using a beautiful trick-computation (requiring the more advanced method of cyclotomic polynomials).

Thus we have chosen an unusual approach: we have separated the truth (i.e. definitions, examples and theorems) from its proofs. This enables us to present the truth in a stringent way that allows the reader to get a feel for the mathematical objects displayed. Most of the proofs could have been given right away, but in several cases the proof of a statement can only be done after we have developed the theory further. Thus the order of the theorems may (and will) be different from the order in which they are proved. Hence the two parts.


Our Best Advice

It is a well-known fact that some proofs are just computational and contain little (or even no) insight into the structure of mathematics. Others are brilliant, outstanding insights that are of no lesser importance than the theorem itself. Thus we have already included remarks on how the proof works in the first part of this book. And our best advice is to read a section entirely to get a feel for the objects involved - only then have a look at the proofs that have been recommended. Ignore the other proofs, unless you have to know about them for some reason. At several occasions this text contains the symbols (♦) and (■). These are meant to guide the reader in the following ways:

(♦) As we have sorted the topics covered thematically (paying little attention to the order of proofs) it might happen that a certain example or theorem is far beyond the scope of the theory presented so far. In this case the reader is asked to read over it lightly (or even skip it entirely) and return to it later (after he has gained some more experience).

(■) On some very rare occasions we will append a theorem without giving a proof (if the proof is beyond the scope of this text). Such an instance will be marked by the black box symbol. In this case we will always give a complete reference to the most readable proof the author is aware of. And this symbol is hereditary, that is: once we use a theorem that has not been proved, any other proof relying on the unproved statement will also be branded by the black box symbol.


0.2 Notation and Symbols

Conventions

We now wish to include a set of the frequently used symbols, conventions and notations. In particular we clarify the several domains of numbers.

• First of all we employ the nice convention (introduced by Halmos) towrite iff as an abbreviation for if and only if.

  • We denote the set of natural numbers - i.e. the positive integers including zero - by N := {0, 1, 2, 3, . . . }. Further for any two integers a, b ∈ Z we denote the interval of integer numbers ranging from a to b by a . . . b := {k ∈ Z | a ≤ k ≤ b}.

  • We will denote the set of integers by Z = N ∪ (−N), and the rationals by Q = {a/b | a, b ∈ Z, b ≠ 0}. Whereas Z will be taken for granted, Q will be introduced as the quotient field of Z.

  • The reals will be denoted by R and we will present an example of how they can be defined (without proving their properties, however). The complex numbers will be denoted by C = {a + ib | a, b ∈ R} and we will present several ways of constructing them.

  • (♦) We will sometimes use the Kronecker symbol δ(a, b) (in the literature this is also denoted by δa,b), which is defined to be

δ(a, b) = δa,b := 1 if a = b, and 0 if a ≠ b

In most cases a and b ∈ Z will be integers and 0, 1 ∈ Z will be integers, too. But in general we are given some ring (R, +, ·) and a, b ∈ R. Then the elements 0 and 1 ∈ R on the right hand side are taken to be the zero-element 0 and unit-element 1 of R again.
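As a small, purely illustrative sketch (ours, not part of the text), the Kronecker symbol is a one-line case distinction; here the Python integers 0 and 1 stand in for the zero- and unit-element of the ring:

```python
def delta(a, b):
    # Kronecker symbol: the unit element if a = b, the zero element otherwise.
    return 1 if a == b else 0

# delta(i, j) gives exactly the entries of the identity matrix:
matrix = [[delta(i, j) for j in range(1, 4)] for i in range(1, 4)]
```
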

  • We will write A ⊆ X to indicate that A is a subset of X and A ⊂ X will denote strict inclusion (i.e. A ⊆ X and there is some x ∈ X with x ∉ A). For any set X we denote its power set (i.e. the set of all its subsets) by P(X) := {A | A ⊆ X}. And for a subset A ⊆ X we denote the complement of A in X by CA := X \ A.

  • Listing several elements x1, . . . , xn ∈ X of some set X, we do not require these xi to be pairwise distinct (e.g. x1 = x2 might well happen). Yet if we only give explicit names xi to the elements of some previously given subset A = {x1, . . . , xn} ⊆ X we already consider the xi to be pairwise distinct (that is xi = xj implies i = j). Note that if the xi (not the set {x1, . . . , xn}) have been given, then {x1, . . . , xn} may hence contain fewer than n elements!

  • Given an arbitrary set of sets A one defines the grand union ⋃A and the grand intersection ⋂A to be the set consisting of all elements a that are contained in one (resp. all) of the sets A ∈ A, formally

⋃A := {a | ∃ A ∈ A : a ∈ A}    ⋂A := {a | ∀ A ∈ A : a ∈ A}


Note that ⋂A only is a well-defined set if A ≠ ∅ is non-empty. A well-known special case of this is the following: consider any two sets A and B and let A := {A, B}. Then A ∪ B = ⋃A and A ∩ B = ⋂A. This notion is just a generalisation of the ordinary union and intersection to arbitrary collections A of sets.
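For finite collections the grand union and intersection are directly computable. A minimal sketch of our own (the function names are our invention, not the book's):

```python
from functools import reduce

def grand_union(collection):
    # ⋃A: all elements lying in at least one member of the collection.
    return set().union(*collection)

def grand_intersection(collection):
    # ⋂A: all elements lying in every member; only well-defined for A ≠ ∅.
    if not collection:
        raise ValueError("grand intersection needs a non-empty collection")
    return reduce(set.intersection, collection)

A = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
```

The special case of a two-element collection {A, B} recovers the ordinary A ∪ B and A ∩ B.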

  • If X and Y are any sets then we will denote the set of all functions from X to Y by F(X, Y) = Y^X = {f | f : X → Y}. And for any such function f : X → Y : x ↦ f(x) we will denote its graph (note that from the set-theoretical point of view f is its graph) by

Γ(f) := {(x, f(x)) | x ∈ X} ⊆ X × Y

  • Once we have defined functions, it is easy to define arbitrary cartesian products. That is let I ≠ ∅ be any non-empty set and for any i ∈ I let Xi be another set. Let us denote the union of all the Xi by X

X := ⋃i∈I Xi = {x | ∃ i ∈ I : x ∈ Xi}

Then the cartesian product of the Xi consists of all the functions x : I → X such that for any i ∈ I we have xi := x(i) ∈ Xi. Note that thereby it is customary to write (xi) in place of x. Formally

∏i∈I Xi := {x : I → X : i ↦ xi | ∀ i ∈ I : xi ∈ Xi}
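For a finite index set, the definition of the product as a set of choice functions i ↦ xi can be made concrete. A hypothetical sketch of our own, modelling each choice function as a dict (the toy family below is invented for illustration):

```python
from itertools import product

# A family (X_i) indexed by I = {"i", "j"}.
family = {"i": {0, 1}, "j": {"a", "b"}}

# An element of the product is a function I -> ⋃ X_i with x(i) ∈ X_i;
# we represent each such function as a dict {index: value}.
indices = list(family)
cartesian = [dict(zip(indices, values))
             for values in product(*(family[i] for i in indices))]
```
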

  • If I ≠ ∅ is any index set and Ai ⊆ X is a non-empty (Ai ≠ ∅) subset of X (where i ∈ I) then the axiom of choice states that there is a function a : I → X such that for any i ∈ I we get a(i) ∈ Ai. In other words, the product of non-empty sets is non-empty again. It is a remarkable fact that this seemingly trivial property has far-flung consequences, the lemma of Zorn being the most remarkable one.

∏i∈I Xi ≠ ∅ ⇐⇒ ∀ i ∈ I : Xi ≠ ∅

  • Let X ≠ ∅ be a non-empty set, then a subset of the form R ⊆ X × X is said to be a relation on X. And in this case it is customary to write xRy instead of (x, y) ∈ R. This notation will be used primarily for partial orders and equivalence relations (see below).

  • Consider any non-empty set X ≠ ∅ again. Then a relation ∼ on X is said to be an equivalence relation on X, iff it is reflexive, symmetric and transitive. Formally that is, for any x, y and z ∈ X we get

x = y =⇒ x ∼ y
x ∼ y =⇒ y ∼ x
x ∼ y, y ∼ z =⇒ x ∼ z


And in this case we define the equivalence class [x] of x to be the set of all y ∈ X being equivalent to x, that is [x] := {y ∈ X | x ∼ y}. And the set of all equivalence classes is denoted by X/∼ := {[x] | x ∈ X}. Example: if f : X → Y is any function then we obtain an equivalence relation ∼ on X by letting x ∼ y :⇐⇒ f(x) = f(y). Then the equivalence class of x ∈ X is just the fibre [x] = f−1(f(x)).

  • Consider a non-empty set X ≠ ∅ once more. Then a family of subsets P ⊆ P(X) is said to be a partition of X, iff for any P, Q ∈ P we obtain the statements

X = ⋃P
P ≠ ∅
P ≠ Q =⇒ P ∩ Q = ∅

Example: if ∼ is an equivalence relation on X, then X/∼ is a partition of X. Conversely if P is a partition of X, then we obtain an equivalence relation ∼ on X by letting x ∼ y :⇐⇒ ∃ P ∈ P : x ∈ P and y ∈ P. Hence there is a one-to-one correspondence between the equivalence relations on X and the partitions of X given by ∼ ↦ X/∼.
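For finite sets, the correspondence between a function's fibres and the partition they induce can be checked mechanically. This sketch (and the name `fibres`) is our own illustration, not from the text:

```python
def fibres(f, X):
    # Group X into the fibres f^{-1}(f(x)); these are exactly the classes
    # of the equivalence relation x ~ y :<=> f(x) == f(y).
    classes = {}
    for x in X:
        classes.setdefault(f(x), set()).add(x)
    return list(classes.values())

P = fibres(lambda x: x % 3, range(6))  # residues mod 3

# P is a partition: non-empty parts whose union is all of X.
assert all(part for part in P)
assert set().union(*P) == set(range(6))
```
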

  • Consider any non-empty set I ≠ ∅, then a relation ≤ on I is said to be a partial order on I, iff it is reflexive, transitive and anti-symmetric. Formally that is, for any i, j and k ∈ I we get

i = j =⇒ i ≤ j
i ≤ j, j ≤ k =⇒ i ≤ k
i ≤ j, j ≤ i =⇒ i = j

And ≤ is said to be a total or linear order iff for any i, j ∈ I we also get i ≤ j or j ≤ i (that is any two elements contained in I can be compared). Example: for any set X the inclusion relation ⊆ is a partial (but not linear) order on P(X). If now ≤ is a linear order on I, then we define the minimum and maximum of i, j ∈ I to be

i ∧ j := i if i ≤ j, j if j ≤ i        i ∨ j := j if i ≤ j, i if j ≤ i

  • Now consider a partial order ≤ on the set X and a subset A ⊆ X. Then we define the set A_* of minimal respectively the set A^* of maximal elements of A to be the following

A_* := {a_* ∈ A | ∀ a ∈ A : a ≤ a_* =⇒ a = a_*}
A^* := {a^* ∈ A | ∀ a ∈ A : a^* ≤ a =⇒ a = a^*}

And an element a_* ∈ A_* is said to be a minimal element of A. Likewise a^* ∈ A^* is said to be a maximal element of A. Note that in general it may happen that A has several minimal (or maximal) elements or even none at all (e.g. N_* = {0} and N^* = ∅). For a linear order minimal (and maximal) elements are unique however.
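For finite posets the sets of minimal and maximal elements can be computed by brute force. An illustrative sketch of our own, with divisibility as an example of a partial (non-linear) order having several maximal elements:

```python
def minimal(A, leq):
    # A_*: a is minimal iff every element of A below it equals it.
    return {m for m in A if all(not leq(a, m) or a == m for a in A)}

def maximal(A, leq):
    # A^*: dually, nothing in A lies strictly above a maximal element.
    return {m for m in A if all(not leq(m, a) or a == m for a in A)}

divides = lambda a, b: b % a == 0   # divisibility order on positive integers
A = {2, 3, 4, 6}
```

Here `minimal(A, divides)` yields {2, 3} and `maximal(A, divides)` yields {4, 6}, so neither is a single element, matching the remark above.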


  • Finally ≤ is said to be a well-ordering on the set X, iff ≤ is a linear order on X such that any non-empty subset A ⊆ X has an (already unique) minimal element. Formally that is

∀ ∅ ≠ A ⊆ X ∃ a_* ∈ A such that ∀ a ∈ A : a_* ≤ a

  • Let X be any set, then the cardinality of X is defined to be the class of all sets that correspond bijectively to X. Formally that is

|X| := {Y | ∃ ω : X → Y bijective}

Note that most textbooks on set theory define the cardinality to be a certain representative of our |X| here. However the exact definition is of no importance to us; what matters is comparing cardinalities. We define the following relation ≤ between cardinals:

|X| ≤ |Y| :⇐⇒ ∃ ι : X → Y injective ⇐⇒ ∃ π : Y → X surjective

|X| = |Y| :⇐⇒ ∃ ω : X → Y bijective ⇐⇒ |X| ≤ |Y| and |Y| ≤ |X|

Note that the first equivalence can be proved (as a standard exercise)using equivalence relations and a choice function (axiom of choice).The second equivalence is a rather non-trivial statement called theequivalence theorem of Bernstein. However these equivalences grantthat ≤ has the properties of a partial order, i.e. reflexivity, transitivityand anti-symmetry.

  • Suppose x1, x2, . . . , xn are pairwise distinct elements. Then the set X := {x1, . . . , xn} clearly has n elements. Using cardinalities this would be expressed as

|{x1, . . . , xn}| = |1 . . . n|

In this case 1 . . . n ←→ X : i ↦ xi could be used as a bijection. Yet this is a bit cumbersome and hence we introduce another notation - if x1, x2, . . . , xn are pairwise distinct (that is xi = xj =⇒ i = j) we let

#{x1, . . . , xn} := n

In particular #∅ = 0. Hence for any finite set X we have defined the number of elements #X. And for infinite sets we simply let

#X := ∞

  • Thereby a set X is said to be finite iff there is some n ∈ N such that X ←→ (1 . . . n), that is iff #X = n. And we usually denote the set of all finite subsets of X by

Ω(X) := {A ⊆ X | #A < ∞}


Likewise X is said to be infinite, iff N can be embedded into X, formally N ↪ X or in other words |N| ≤ |X|. Note that thereby we obtain a rather astounding equivalency of the statements

(a) X is infinite
(b) |X| = |X × X|
(c) |X| = |Ω(X)|

  • If X and Y are finite, disjoint sets, then clearly #(X ∪ Y) = (#X) + (#Y). Likewise for any finite sets X and Y we have #(X × Y) = (#X) · (#Y), #P(X) = 2^#X and #F(X, Y) = (#Y)^#X. We will use these properties to extend the ordinary arithmetic of natural numbers to the arithmetic of cardinal numbers, by defining

|X| + |Y| := |X ⊔ Y|
|X| · |Y| := |X × Y|
2^|X| := |P(X)|
|Y|^|X| := |F(X, Y)|

for arbitrary sets X and Y. Note that ⊔ refers to the disjoint union of X and Y, that is X ⊔ Y := ({1} × X) ∪ ({2} × Y). This is not necessary if X and Y are disjoint; in this case we could have taken X ∪ Y. However in general we have to provide a way of separating X and Y and this is nicely performed by X ⊔ Y. It is a non-surprising but also non-trivial fact that the power set of X is truly larger than X

|X| < |P(X)|
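For finite sets all of these counting identities can be verified directly. A small sketch of our own, using the tagged disjoint union ({1} × X) ∪ ({2} × Y) exactly as above:

```python
from itertools import chain, combinations, product

X, Y = {1, 2, 3}, {"a", "b"}

# X ⊔ Y := ({1} × X) ∪ ({2} × Y): tagging keeps the two copies apart
# even when X and Y overlap.
disjoint_union = {(1, x) for x in X} | {(2, y) for y in Y}

# Power set of X, enumerated by subset size, so #P(X) = 2^#X.
power_set = list(chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1)))

# All functions X -> Y: one independent choice of value per argument,
# so #F(X, Y) = (#Y)^#X.
functions = list(product(Y, repeat=len(X)))
```
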

  • (♦) We will introduce and use several different notions of substructures and isomorphy. In order to avoid eventual misconceptions, we emphasise the kinds of structures regarded by applying subscripts to the symbol ≤ of substructures and ≅ of isomorphy. E.g. we will write R ≤r S to indicate that R is a subring of S, a ≤i R to indicate that a is an ideal of R and R ≅r S to indicate that R and S are isomorphic as rings. Note that the latter is different from R ≅m S (R and S are isomorphic as modules). And this can well make sense: if R ≤r S is a subring, then S can be regarded as an R-module as well. We will use the same subscripts for generated algebraic substructures, i.e. 〈• 〉r for rings, 〈• 〉i for ideals and 〈• 〉m for modules.


Notation

A, B, C     matrices, algebras and monoids
D, E, F     fields
G, H        groups and monoids
I, J, K     index sets
L, M, N     modules
P, Q        subsets and substructures
R, S, T     rings (all kinds of)
U, V, W     vectorspaces, multiplicatively closed sets
X, Y, Z     arbitrary sets

a, b, c     elements of rings
d, e        degree of polynomials, dimension
e           neutral element of a (group or) monoid
f, g, h     functions, polynomials and elements of algebras
i, j, k, l  integers and indices
m, n        natural numbers
p, q        residue classes
r, s        elements of further rings
s, t, u     polynomial variables
u, v, w     elements of vectorspaces
x, y, z     elements of groups and modules

α, β, γ     multi-indices
ι, κ        (canonical) monomorphisms
λ, µ        eigenvalues
ϱ, σ        (canonical) epimorphisms
ϱ, σ, τ     permutations
ϕ, ψ        homomorphisms
Φ, Ψ        isomorphisms

Ω           (fixed) finite set

a, b, c     ideals
u, v, w     other ideals
p, q        prime ideals
r, t        other prime ideals
m, n        maximal ideals
f, g, h     fraction ideals

A, B, C     (ordered) bases
E, F        (canonical = euclidean) bases


0.3 Mathematicians at a Glance

In the following we wish to give a short list of several mathematicians whose names are attributed to some particular definition or theorem in honour of their outstanding contributions (to the field of algebra). Naturally this list is far from being exhaustive, and if any person has been omitted this is solely due to the author's uninformedness. Likewise it is impossible to boil down an entire life to a few short sentences without crippling the biography. Nevertheless the author is convinced that it would be even more ignorant to make no difference between words like noetherian and factorial. This is the reason why we chose not to remain entirely silent about these avatars of mathematics.

• ?? Abel

• ?? Akizuki

• ?? André

• Emil Artin

• ?? Auslander

• ?? Bernstein

• ?? Bézout

• ?? Brauer

• Nicolas Bourbaki (??)

• ?? Buchsbaum

• ?? Cauchy

• ?? Cayley

• ?? Chevalley

• ?? Cohen

• ?? Dedekind

• ?? Eisenbud

• ?? Euclid

• Évariste Galois

• ?? Fermat

• Carl Friedrich Gauss

• ?? Gorenstein

• ?? Goto

• ?? Gröbner

• ?? Grothendieck


• ?? Hensel

• David Hilbert

• ?? Hironaka

• ?? Hochster

• ?? Hopf

• ?? Jacobi

• ?? Jacobson

• ?? Kaplansky

• ?? Kei-ichi

• ?? Kronecker

• ?? Krull

• ?? Lagrange

• ?? Lasker

• ?? Legendre

• ?? Leibniz

• ?? Macaulay

• ?? Mori

• ?? Nagata

• ?? Nakayama

• Johann von Neumann

• Isaac Newton

• Emmy Noether

• ?? Northcott

• ?? Ostrowski

• ?? Prüfer

• ?? Pythagoras

• ?? Quillen

• ?? Ratliff

• ?? Rees

• ?? Riemann

• ?? Rotthaus


• ?? Schur

• ?? Serre

• ?? Sono

• ?? Stanley

• Bernd Sturmfels

• ?? Sylvester

• ?? Watanabe

• ?? Weber

• ?? Wedderburn

• ?? Weierstrass

• André Weil

• ?? Yoneda

• ?? Zariski

• ?? Zorn


Part I

The Truth


Chapter 1

Groups and Rings

1.1 Defining Groups

The most familiar (and hence easiest to understand) objects of algebra are rings. And of these the easiest example is the integers Z. On these there are two operations: an addition + and a multiplication ·. Both of these operations have familiar properties (associativity for example). So we first study objects with a single operation only - monoids and groups - as these are a unifying concept for both addition and multiplication. However our aim solely lies in preparing the concepts of rings and modules, so the reader is asked to venture lightly over problems within this section until he has reached the later sections of this chapter.

(1.1) Definition:
Let G ≠ ∅ be any non-empty set and ◦ a binary operation on G, i.e. ◦ is a mapping of the form ◦ : G × G → G : (x, y) ↦ xy. Then the ordered pair (G, ◦) is said to be a monoid iff it satisfies the following properties

(A) ∀x, y, z ∈ G : x(yz) = (xy)z

(N) ∃ e ∈ G ∀x ∈ G : xe = x = ex

Note that this element e ∈ G, whose existence is required in (N), then already is uniquely determined (see below). It is said to be the neutral element of G. And therefore we may define: a monoid (G, ◦) is said to be a group iff any element x ∈ G has an inverse element y ∈ G, that is iff

(I) ∀x ∈ G ∃ y ∈ G : xy = e = yx

Note that in this case the inverse element y of x is uniquely determined (see below) and we hence write x−1 := y. Finally a monoid (or group) (G, ◦) is said to be commutative iff it satisfies the property

(C) ∀x, y ∈ G : xy = yx

(1.2) Remark:

• We have to append some remarks here: first of all we have employed a function ◦ : G × G → G. The image ◦(x, y) of the pair (x, y) ∈ G × G has been written in an unfamiliar way however

xy := ◦(x, y)


If you have never seen this notation before it may be somewhat startling; in this case we would like to reassure you that this actually is nothing new - just have a look at the examples further below. Yet this notation has the advantage of restricting itself to the essential. If we had stuck to the classical notation such terms would be by far more obfuscated. E.g. let us rewrite property (A) in classical terms

◦(x, ◦(y, z)) = ◦(◦(x, y), z)

• It is easy to see that the neutral element e of a monoid (G, ◦) is uniquely determined: suppose that another element f ∈ G would satisfy ∀x ∈ G : xf = x = fx; then we would have ef = e by letting x = e. But as e is a neutral element we have ef = f by (N) applied with x = f. And hence e = f are equal, i.e. e is uniquely determined. Hence in the following we will reserve the letter e for the neutral element of the monoid regarded, without specifically mentioning it. In case we apply several monoids at once, we will name the respective neutral elements explicitly.

• Property (A) is a very important one and hence it has a name of its own: associativity. A pair (G, ◦) that satisfies (A) only also is called a groupoid. We will rarely employ these objects however.

• Suppose (G, ◦) is a monoid with the (uniquely determined) neutral element e ∈ G. And suppose x ∈ G is some element of G that has an inverse element. Then this inverse is uniquely determined: suppose both y and z ∈ G satisfy xy = e = yx and xz = e = zx. Then the associativity yields y = z, as we may compute

y = ye = y(xz) = (yx)z = ez = z

• Another important consequence of the associativity is the following: consider finitely many elements x1, . . . , xn ∈ G (where (G, ◦) is a groupoid at least). Then any application of parentheses to the product x1x2 . . . xn produces the same element of G. As an example consider

x1(x2(x3x4)) = x1((x2x3)x4) = (x1(x2x3))x4 = ((x1x2)x3)x4

In every step of these equalities we have only used the associativity law (A). The remarkable fact is that any product of any n elements (here n = 4) only depends on the order of the elements, not on the bracketing. Hence it is customary to omit the bracketing altogether

x1x2 . . . xn := ((x1x2) . . . )xn ∈ G

And any other assignment of pairs (without changing the order) to the elements xi would yield the same element as x1x2 . . . xn. A formal version of this statement and its proof are given in chapter 16 of this book. The proof clearly will be done by induction on n ≥ 3; the foundation of the induction precisely is the law of associativity.


• Suppose (G, ◦) is any groupoid, x ∈ G is an element and 1 ≤ k ∈ N; then we abbreviate the k-fold product of x by xk, i.e. we let

xk := x x . . . x   (k times)

If (G, ◦) even is a monoid (with neutral element e) it is customary to define x0 := e. Thus in this case xk ∈ G is defined for all k ∈ N. Now suppose that x even is invertible (e.g. if (G, ◦) is a group); then we may even define xk ∈ G for any k ∈ Z. Suppose 1 ≤ k ∈ N, then

x−k := (x−1)k

• In a commutative groupoid (G, ◦) we may even change the order in which the elements x1, . . . , xn ∈ G are multiplied. I.e. if we are given a bijective map σ : 1 . . . n ←→ 1 . . . n on the indices 1 . . . n we get

xσ(1)xσ(2) . . . xσ(n) = x1x2 . . . xn

The reason behind this is the following: any permutation σ can be decomposed into a series of transpositions (this is intuitively clear: we can generate any order of n objects by repeatedly interchanging two of these objects). In fact any transposition can be realized by interchanging adjacent objects only. But any transposition of adjacent elements is allowed by property (C). A formal proof of this reasoning will be presented in chapter 16 again.

(1.3) Example:

• The most familiar example of a (commutative) monoid is the natural numbers under addition: (N,+). Here the neutral element is given to be e = 0. However 1 ∈ N has no inverse element (as for any a ∈ N we have a + 1 > 0) and hence (N,+) is not a group.

• The integers however are a (commutative) group (Z,+) under addition. The neutral element is zero again, e = 0, and the inverse element of a ∈ Z is −a. Thus N is contained in a group N ⊆ Z.

• Next we regard the non-zero rationals Q∗ := { a/b | 0 ≠ a, b ∈ Z }. These form a group under multiplication (Q∗, ·). The neutral element is given to be 1 = 1/1 and the inverse of a/b ∈ Q∗ is b/a.

• Consider any non-empty set X ≠ ∅. Then the set of maps from X to X, which we denote by F(X) := { σ | σ : X → X }, becomes a monoid under the composition ◦ of maps (F(X), ◦) (as the composition of functions is associative). The neutral element is the identity map e = 11X which is given to be 11X : x ↦ x.

• Consider any non-empty set X ≠ ∅ again. Then the set of bijective maps SX := { σ : X → X | σ bijective } ⊆ F(X) on X even becomes a group under the composition ◦ of maps (SX , ◦). The neutral element is the identity map e = 11X again and the inverse of σ ∈ SX is the inverse function σ−1. This will be continued in the next section.


• (♦) A special case of the above is the set of invertible (n×n)-matrices gln(E) := { A ∈ matn(E) | det(A) ≠ 0 } over a field (E,+, ·). This is a group (gln(E), ·) under the multiplication · of matrices.

(1.4) Remark:
Consider a finite groupoid (G, ◦), that is the set G = { x1, . . . , xn } is finite. Then the composition ◦ can be given by a table of the following form

◦     x1      x2      . . .  xn
x1    x1x1    x1x2    . . .  x1xn
x2    x2x1    x2x2    . . .  x2xn
...
xn    xnx1    xnx2    . . .  xnxn

Such a table is also known as the Cayley diagram of G. As an example consider a set with 4 elements K := { e, x, y, z }; then the following diagram determines a group structure on K ((K, ◦) is called the Klein 4-group).

◦  e  x  y  z
e  e  x  y  z
x  x  e  z  y
y  y  z  e  x
z  z  y  x  e
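Such a table can be checked mechanically. The following snippet (our own sanity check, not part of the text) encodes the Cayley diagram of K and verifies the group axioms (A), (N) and (I) by brute force:

```python
# Encode the Cayley table of the Klein 4-group K = {e, x, y, z}
# and verify associativity, the neutral element and inverses.

K = ['e', 'x', 'y', 'z']
table = {
    ('e', 'e'): 'e', ('e', 'x'): 'x', ('e', 'y'): 'y', ('e', 'z'): 'z',
    ('x', 'e'): 'x', ('x', 'x'): 'e', ('x', 'y'): 'z', ('x', 'z'): 'y',
    ('y', 'e'): 'y', ('y', 'x'): 'z', ('y', 'y'): 'e', ('y', 'z'): 'x',
    ('z', 'e'): 'z', ('z', 'x'): 'y', ('z', 'y'): 'x', ('z', 'z'): 'e',
}
op = lambda a, b: table[(a, b)]

# (A) associativity: a(bc) = (ab)c for all a, b, c
assert all(op(a, op(b, c)) == op(op(a, b), c) for a in K for b in K for c in K)
# (N) 'e' is the neutral element
assert all(op(a, 'e') == a == op('e', a) for a in K)
# (I) every element is its own inverse in this particular group
assert all(op(a, a) == 'e' for a in K)
```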

(1.5) Proposition: (viz. 286)
Let (G, ◦) be any group (with neutral element e), x, y ∈ G be any two elements of G and k, l ∈ Z. Then we obtain the following identities

e−1 = e
(x−1)−1 = x
(xy)−1 = y−1 x−1
xk xl = xk+l
(xk)l = xkl

In particular the inversion i : G ←→ G : x ↦ x−1 of elements is a self-inverse, bijective mapping i = i−1 on the group G. If now xy = yx do commute, then for any k ∈ Z we also obtain

xy = yx =⇒ (xy)k = xkyk


(1.6) Definition:
If (G, ◦) and (H, ◦) are any two groups, whose neutral elements are e ∈ G and f ∈ H respectively, then a map ϕ : G → H is said to be a homomorphism of groups or group-homomorphism, iff it satisfies

∀x, y ∈ G we get ϕ(xy) = ϕ(x)ϕ(y)

And in this case ϕ already links the neutral elements and maps inverse elements to the inverse, that is it satisfies ϕ(e) = f and

∀x ∈ G we get ϕ(x−1) = ϕ(x)−1

Let us introduce the set of all group-homomorphisms from G to H to be called

ghom(G,H) := { ϕ : G → H | ∀x, y ∈ G : ϕ(xy) = ϕ(x)ϕ(y) }

Prob clearly ϕ(e) = ϕ(ee) = ϕ(e)ϕ(e), but as H is a group we may multiply by ϕ(e)−1 to find f = ϕ(e). Therefore f = ϕ(e) = ϕ(xx−1) = ϕ(x)ϕ(x−1). Likewise we get f = ϕ(x−1)ϕ(x) and hence ϕ(x−1) = ϕ(x)−1.

(1.7) Proposition: (viz. 287)
Let (G, ◦) be a group with neutral element e ∈ G; then a subset P ⊆ G is said to be a subgroup of G (written as P ≤g G) iff it satisfies

e ∈ P

x, y ∈ P =⇒ xy ∈ P
x ∈ P =⇒ x−1 ∈ P

In other words P ⊆ G is a subgroup of G iff it is a group (P, ◦) under the operation ◦ inherited from G. And in this case we obtain an equivalence relation ∼ on G by letting (for any x, y ∈ G)

x ∼ y :⇐⇒ y−1x ∈ P

And for any x ∈ G the equivalence class of x is thereby given to be the coset xP := [x] = { xp | p ∈ P }. We thereby define the index [G : P ] of P in G to be the following cardinal number

G/P := G/∼
[G : P ] := |G/P |

And thereby we finally obtain the following identity of cardinals, which is called the Theorem of Lagrange (and which means G ←→ (G/P ) × P )

|G| = [G : P ] · |P |
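The theorem can be observed concretely. The sketch below is our own illustration (permutations act on {0, 1, 2} as tuples and P is the subgroup generated by a transposition): we form the cosets xP in G = S3 and check |G| = [G : P] · |P|.

```python
# Cosets of P = {e, (0 1)} in G = S3 and the Theorem of Lagrange.
from itertools import permutations

compose = lambda p, q: tuple(p[q[i]] for i in range(len(p)))  # (p ∘ q)(i) = p(q(i))

G = list(permutations(range(3)))      # S3, so #G = 6
P = [(0, 1, 2), (1, 0, 2)]            # subgroup generated by the transposition (0 1)

# each coset xP collected as a frozenset; distinct cosets partition G
cosets = {frozenset(compose(x, p) for p in P) for x in G}

assert len(G) == len(cosets) * len(P)          # |G| = [G : P] · |P|
assert all(len(c) == len(P) for c in cosets)   # every coset has exactly |P| elements
```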


(1.8) Proposition: (viz. 288)
Let (G, ◦) be any monoid with neutral element e and X ⊆ G be any subset of G. Then the intersection of all submonoids of G that contain X is a submonoid of G again

〈X 〉o := ⋂ { P ⊆ G | X ⊆ P ≤o G }

We call 〈X 〉o the monoid generated by X ⊆ G. And letting Xe := X ∪ { e } we can even give an explicit description of 〈X 〉o to be the following set

〈X 〉o = { x1x2 . . . xn | n ∈ N, x1, . . . , xn ∈ Xe }

If G even is a group, then we can regard the intersection of all subgroups of G that contain X and this is a subgroup of G again

〈X 〉g := ⋂ { P ⊆ G | X ⊆ P ≤g G }

Likewise we call 〈X 〉g the group generated by X ⊆ G. And letting X± := X ∪ { e } ∪ { x−1 | x ∈ X } we can even give an explicit description of 〈X 〉g to be the following set

〈X 〉g = { x1x2 . . . xn | n ∈ N, x1, . . . , xn ∈ X± }

(1.9) Proposition: (viz. 289)
If (G, ◦) is a group and x ∈ G is any element of G, then the above proposition tells us that the subgroup generated by x is given to be

〈x 〉g = { xk | k ∈ Z }

If now G is finite, that is n := #G ∈ N, then 〈x 〉g (being a subset) is finite too. In particular we may define the order of x to be the number of elements of 〈x 〉g. And thereby we find

ord(x) := #〈x 〉g = min{ 1 ≤ k ∈ N | xk = e }

That is, k = ord(x) is the minimal number 1 ≤ k ∈ N such that xk = e. And the theorem of Lagrange even tells us that k divides n, that is

ord(x) | #G
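A quick computational check of this (our own, with permutations of {0, 1, 2} as tuples): compute ord(x) = min{ k ≥ 1 | xᵏ = e } for every x in G = S3 and verify that it always divides #G = 6.

```python
# ord(x) for every element of S3, and the divisibility ord(x) | #G.
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations as tuples over 0..n-1."""
    return tuple(p[q[i]] for i in range(len(p)))

def order(x):
    """Smallest k >= 1 with x^k = identity."""
    e = tuple(range(len(x)))
    y, k = x, 1
    while y != e:
        y, k = compose(y, x), k + 1
    return k

G = list(permutations(range(3)))            # S3, #G = 6
for x in G:
    assert len(G) % order(x) == 0           # ord(x) divides #G
# identity has order 1, the three transpositions order 2, the two 3-cycles order 3
assert sorted(order(x) for x in G) == [1, 2, 2, 2, 3, 3]
```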


1.2 Permutations

In this section we will study the most important and most general example of groups: permutations. Permutation groups will not become important until we reach determinants in chapter 4. So a novice might well skip the entire section and return to it only later. In fact we even recommend this approach, as we deem the basics of ring theory to be easier than the basics of group theory. Also we will use some notation (namely products and the factorial) that will only be introduced in sections 1.3 and 1.5 respectively.

Later in section ?? we will introduce group actions, which are a generalisation of the permutation action. Also permutations are used to study conjugation in a group and they are occasionally useful in other areas as well. In fact they will be applied in Galois, representation and invariant theory.

(1.10) Definition: (viz. 289)
If X ≠ ∅ is any non-empty set then we define the group SX of permutations of X to be the set of all bijective maps on X, that is

SX := { σ : X → X | σ is bijective }

Thereby SX truly becomes a group (SX , ◦) under the usual composition ◦ of functions. And in case of X = 1 . . . n we will write

Sn := { σ : (1 . . . n) → (1 . . . n) | σ is bijective }

(1.11) Definition:
Fix n ∈ N with n ≥ 1 and consider the permutation group (Sn, ◦). Then, for any permutation σ ∈ Sn and any i ∈ 1 . . . n, we define

fix(σ) := { i ∈ 1 . . . n | σ(i) = i }
dom(σ) := { i ∈ 1 . . . n | σ(i) ≠ i }
cyc(σ, i) := { σk(i) | k ∈ N }
ℓ(σ, i) := #cyc(σ, i)

Thereby fix(σ) is called the fixed point set of σ and its complement dom(σ) is called the domain of σ. Also cyc(σ, i) is called the cycle of i under σ and ℓ(σ, i) is said to be the length of that cycle.

(1.12) Proposition: (viz. 289)

(i) If X is a finite set, then the group (SX , ◦) is finite again; in fact it contains precisely (#X)! elements, formally that is

#SX = (#X)!


(ii) If (G, ◦) is any group, then G can be embedded into (SG, ◦). To be precise, the following map is well-defined and injective

L : G → SG : g ↦ Lg
Lg : G → G : x ↦ gx

and a homomorphism of groups, that is: Le = 11G and for any g, h ∈ G we get Lgh = Lg Lh. Nota in this sense the permutation groups SX are the most general groups whatsoever.

(1.13) Proposition: (viz. 290)
Fix n ∈ N with n ≥ 1 and consider the permutation group (Sn, ◦). Then, for any permutations ϱ and σ ∈ Sn and any i ∈ 1 . . . n, we find the following statements

(i) As { σk | k ∈ Z } is a subgroup of the finite group Sn we find that the cycle cyc(σ, i) is finite; more precisely

ℓ(σ, i) ≤ ord(σ) ≤ n!

(ii) The domain of σ is stable under σ, that is for any a ∈ dom(σ) we find that σ(a) ∈ dom(σ) again, such that

σ(dom(σ)) = dom(σ)

In particular, if i ∈ dom(σ) is contained in the domain of σ, then σk(i) is contained in the domain of σ, for any power k ∈ N, such that

cyc(σ, i) ⊆ dom(σ)

(iii) If ϱ and σ share a common domain D ⊆ 1 . . . n and agree on D, then they already are equal. Formally that is the equivalence

D := dom(ϱ) = dom(σ) and ∀ a ∈ D : ϱ(a) = σ(a)  ⇐⇒  ϱ = σ

(iv) If the domains of ϱ and σ are disjoint, then ϱ and σ commute, formally

dom(ϱ) ∩ dom(σ) = ∅ =⇒ ϱσ = σϱ

(1.14) Definition:
Fix n ∈ N with n ≥ 1 and consider an ℓ-tuple (i1, . . . , iℓ) of natural numbers ik ∈ 1 . . . n that are pairwise distinct (that is ij = ik =⇒ j = k). Then the tuple (i1, . . . , iℓ) gives rise to a permutation ζ ∈ Sn by letting

ζ(a) :=  a      if a ∈ (1 . . . n) \ { i1, . . . , iℓ }
         ik+1   if a = ik for some k ∈ 1 . . . ℓ − 1
         i1     if a = iℓ


This permutation ζ ∈ Sn is called a cycle of length ℓ and (by slight abuse of notation, as n is not specified) we will denote this cycle as

(i1 i2 . . . iℓ) := ζ

A cycle of length 2 is said to be a transposition of Sn. And by definition it interchanges two numbers i ≠ j. That is, if τ = (i j) then τ(i) = j and τ(j) = i and τ(a) = a for any other a ∈ 1 . . . n.

(1.15) Remark: (viz. 291)

• The notation of cycles is ambiguous, as it is of no effect with which element the cycle starts, as long as the ordering of elements is not changed. That is (i1 i2 . . . iℓ) and (i2 i3 . . . iℓ i1) and so on until (iℓ i1 i2 . . . iℓ−1) all yield the same permutation. In particular we have (i j) = (j i) for any transposition.

• Any cycle ζ can be written as a composition of transpositions. To be precise: given the cycle (i1 i2 . . . iℓ) we can rewrite this one as

(i1 i2 . . . i`) = (i1 i`)(i1 i`−1) . . . (i1 i2)

• Any transposition can be written as a composition of transpositions of adjacent elements. To be precise, given any i, j ∈ 1 . . . n with i < j we can rewrite

(i j) = (i i+1)(i+1 i+2) . . . (j−1 j)(j−2 j−1) . . . (i i+1)

• If ζ = (i1 i2 . . . iℓ) ∈ Sn is a cycle of length ℓ, then for any other permutation σ ∈ Sn the conjugate σζσ−1 is another cycle of length ℓ. In fact we get

σζσ−1 = (σ(i1) σ(i2) . . . σ(i`))
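This relabelling effect is easy to verify numerically. The helpers below are our own (permutations act on 0 . . . n−1 as tuples); we conjugate the cycle ζ = (0 2 3) by some σ ∈ S5 and compare with (σ(0) σ(2) σ(3)):

```python
# Conjugation of a cycle: σ ζ σ⁻¹ = (σ(i1) σ(i2) ... σ(iℓ)).

def cycle(entries, n):
    """The permutation (i1 i2 ... iℓ) on 0..n-1, as a tuple."""
    p = list(range(n))
    for a, b in zip(entries, entries[1:] + entries[:1]):
        p[a] = b
    return tuple(p)

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

n = 5
zeta  = cycle([0, 2, 3], n)        # ζ = (0 2 3)
sigma = cycle([1, 4, 2, 0], n)     # some σ ∈ S5
conj  = compose(compose(sigma, zeta), inverse(sigma))
assert conj == cycle([sigma[0], sigma[2], sigma[3]], n)   # (σ(0) σ(2) σ(3))
```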

(1.16) Proposition: (viz. 292)
Fix n ∈ N with n ≥ 1 and consider the permutation group (Sn, ◦). If now σ ∈ Sn is any permutation with σ ≠ 11 then the following five statements are equivalent:

(a) σ is a cycle, that is there is some ℓ ≥ 2 and some pairwise distinct i1, i2, . . . , iℓ ∈ 1 . . . n (that is ij = ik =⇒ j = k) such that

σ = (i1 i2 . . . iℓ)

(b) Any two elements a, b ∈ dom(σ) in the domain of σ are linked, i.e. there is some k ∈ N such that b = σk(a).

(c) For any element i ∈ dom(σ) the domain of σ already is generated by the orbit of i under σ, that is

dom(σ) = { σk(i) | k ∈ N }


(d) There is some i ∈ 1 . . . n such that the domain of σ is the orbit of i under σ, that is

dom(σ) = { σk(i) | k ∈ N }

(e) If ℓ := ord(σ) denotes the order of σ, then there is some i ∈ 1 . . . n such that the domain of σ is given to be

dom(σ) = { σk(i) | k ∈ 0 . . . (ℓ − 1) }

And in this case the length ℓ of the cycle σ = (i1 i2 . . . iℓ) is precisely the order of σ, that is we get the following equality

ord(σ) = ℓ in (a)

(1.17) Proposition: (viz. 294)
Fix n ∈ N with n ≥ 1 and consider the permutation group (Sn, ◦). Then, for any permutation σ ∈ Sn and any i ∈ 1 . . . n, we find the following statements

(i) We obtain an equivalence relation ∼ on 1 . . . n by letting i ∼ j if and only if there is some k ∈ N such that j = σk(i). And the equivalence classes under this relation are precisely

[i] = cyc(σ, i) = { i, σ(i), . . . , σk(i)(i) }

where k(i) := ℓ(σ, i) − 1. In particular cyc(σ, i) is σ-invariant, formally that is σ(cyc(σ, i)) = cyc(σ, i). Let us now regard

ζi : 1 . . . n → 1 . . . n : a ↦ σ(a) if a ∈ cyc(σ, i), a ↦ a if a ∉ cyc(σ, i)

Then ζi is precisely the cycle induced by σ on i ∈ 1 . . . n, formally again

ζi = (i σ(i) . . . σk(i)(i))

And if i(1), . . . , i(r) is a system of representatives of dom(σ) under the relation ∼, that is dom(σ) is the disjoint union of the cycles cyc(σ, i(1)) to cyc(σ, i(r)), then σ can be written as the composition of the cycles ζi(1) to ζi(r) (in any order, as all the cycles are mutually commutative)

dom(σ) = ⋃_{j=1}^{r} cyc(σ, i(j))  =⇒  σ = ∏_{j=1}^{r} ζi(j)

(ii) Let ζ1, . . . , ζk ∈ Sn and ξ1, . . . , ξt ∈ Sn be two sets of cycles with pairwise disjoint domains, that is

i ≠ j ∈ 1 . . . k =⇒ dom(ζi) ∩ dom(ζj) = ∅
r ≠ s ∈ 1 . . . t =⇒ dom(ξr) ∩ dom(ξs) = ∅


If now the compositions ζ1 . . . ζk = ξ1 . . . ξt are equal, then we already get k = t and there is some permutation α ∈ Sk such that for any i ∈ 1 . . . k we get ζi = ξα(i)

∏_{i=1}^{k} ζi = ∏_{r=1}^{t} ξr  =⇒  ∀ i ∈ 1 . . . k : ζi = ξα(i)

(iii) For any permutation σ ∈ Sn with σ ≠ 11 there is a uniquely determined cyclic decomposition. That is, there are uniquely determined cycles ζ1, . . . , ζr ∈ Sn with pairwise disjoint domains (for any i, j ∈ 1 . . . r that is i ≠ j =⇒ dom(ζi) ∩ dom(ζj) = ∅), such that σ can be written as a composition of these cycles

σ = ∏_{i=1}^{r} ζi

(iv) Any permutation σ ∈ Sn is a product of transpositions of two adjacent numbers. That is, there are finitely many i0, . . . , is ∈ N such that for any j ∈ 1 . . . s we get |ij − ij−1| = 1 and also

σ = ∏_{j=1}^{s} (ij−1 ij)

(v) Two permutations ϱ and σ ∈ Sn share the same cyclic structure if and only if they are conjugates of each other. Formally that is the equivalence of the following two statements

(a) If we decompose ϱ = ζ1 . . . ζk and σ = ξ1 . . . ξt into cycles, ζi and ξr respectively, with pairwise disjoint domains (as in (ii)), then we get k = t and these cycles are of the same lengths. That is, there is a permutation α ∈ Sk such that

∀ i ∈ 1 . . . k : ord(ζi) = ord(ξα(i))

(b) There is some permutation π ∈ Sn such that σ = πϱπ−1.

(1.18) Example: Cyclic Decomposition:
Let us present a short example of a cyclic decomposition in 1 . . . 9. To define our permutation σ ∈ S9 we present a short table

i    | 1 2 3 4 5 6 7 8 9
σ(i) | 3 4 1 8 7 6 9 2 5

First of all it is clear that 6 is fixed under σ; hence it will not appear in our cyclic decomposition (it would yield a cycle of length 1, namely (6)). Now begin with 1 and extract its cycle. We see that 1 ↦ 3 ↦ 1, so our first cycle is (1 3). The next number not dealt with is 2, which is cycled like 2 ↦ 4 ↦ 8 ↦ 2. So the second cycle is (2 4 8). As 3 and 4 occurred in the first and second cycles respectively, we continue with 5, which is moved like 5 ↦ 7 ↦ 9 ↦ 5. Thus the third cycle is (5 7 9).


As 6 is fixed and 7 . . . 9 have already been taken care of, we have exhausted the whole domain of σ. Altogether we have seen that

σ = (1 3)(2 4 8)(5 7 9)

Note that these cycles are uniquely determined, even if there are several ways of writing them, e.g. (2 4 8) = (4 8 2) = (8 2 4). Also it is possible to commute these cycles arbitrarily without changing σ. Finally we know that any other permutation ϱ ∈ S9 is conjugated to σ if and only if it consists of 3 cycles of lengths 2, 3, 3 respectively.
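The extraction procedure of this example can be sketched as code (our own helper, with 1-based entries as in the text): walk through 1 . . . n, follow each unseen i under σ until its cycle closes, and drop fixed points.

```python
# Cyclic decomposition of a permutation given as a dict i -> σ(i) on 1..n.

def cycle_decomposition(sigma):
    """Return the cycles of length >= 2, each as a tuple, smallest entry first."""
    seen, cycles = set(), []
    for i in sorted(sigma):
        if i in seen or sigma[i] == i:   # already handled, or a fixed point
            continue
        cyc, j = [], i
        while j not in seen:             # follow i -> σ(i) -> σ²(i) -> ...
            seen.add(j)
            cyc.append(j)
            j = sigma[j]
        cycles.append(tuple(cyc))
    return cycles

# the permutation σ ∈ S9 from the table above
sigma = {1: 3, 2: 4, 3: 1, 4: 8, 5: 7, 6: 6, 7: 9, 8: 2, 9: 5}
assert cycle_decomposition(sigma) == [(1, 3), (2, 4, 8), (5, 7, 9)]
```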

(1.19) Definition:
Fix any n ∈ N with n ≥ 1. If now σ ∈ Sn is any permutation then we define its signum to be the following number

sgn(σ) := ∏_{1≤i<j≤n} (σ(j) − σ(i)) / (j − i)

It will turn out that the signum is contained in sgn(σ) ∈ Z∗ = {−1,+1}. A permutation σ with sgn(σ) = 1 is said to be even, whereas a permutation with sgn(σ) = −1 is said to be odd. The set of all even permutations will be denoted by

An := { σ ∈ Sn | sgn(σ) = 1 }

(1.20) Remark:
The product that has been used to define the signum of σ might be a little obfuscated at first glance. But in fact the indices simply run over the set ∆ := { (i, j) ∈ (1 . . . n)2 | i < j }. And as the multiplication in Q is commutative, the ordering of this set ∆ has no effect on the result of the product. In yet other words we have

sgn(σ) = ∏_{i=1}^{n−1} ∏_{j=i+1}^{n} (σ(j) − σ(i)) / (j − i)

As an example let us consider the case τ = (1 3) ∈ S4. Then the signum of the transposition τ can be computed as

sgn(τ) = (2−3)/(2−1) · (1−3)/(3−1) · (4−3)/(4−1) · (1−2)/(3−2) · (4−2)/(4−2) · (4−1)/(4−3) = −1
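The defining product can also be evaluated directly; the helper below is our own (σ given as a dict on 1 . . . n, with exact arithmetic via fractions so the result is truly ±1). It reproduces the computation for τ = (1 3) ∈ S4:

```python
# sgn(σ) = ∏_{i<j} (σ(j) - σ(i)) / (j - i), evaluated exactly.
from fractions import Fraction

def sgn(sigma):
    """Signum of a permutation given as a dict i -> σ(i) on 1..n."""
    n = len(sigma)
    s = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            s *= Fraction(sigma[j] - sigma[i], j - i)
    return int(s)

tau = {1: 3, 2: 2, 3: 1, 4: 4}          # the transposition τ = (1 3) ∈ S4
assert sgn(tau) == -1                   # transpositions are odd
identity = {i: i for i in range(1, 5)}
assert sgn(identity) == 1               # the identity is even
```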

(1.21) Proposition: (viz. 297)

(i) If σ ∈ Sn is any permutation, then the signum of σ can be evaluated by counting the number of inversions of σ. That is, the signum is given to be sgn(σ) = (−1)I(σ) where

I(σ) := #{ (i, j) ∈ (1 . . . n)2 | i < j and σ(i) > σ(j) }


(ii) Let σ ∈ Sn be a permutation and σ = ζ1 . . . ζr be its decomposition into cycles with pairwise disjoint domains. Let ℓ(i) := ord(ζi) be the length of the i-th cycle. Then the order of σ is the least common multiple of the ℓ(i) and we can easily compute the signum of σ as well

ord(σ) = lcm(ℓ(1), . . . , ℓ(r))
sgn(σ) = ∏_{i=1}^{r} (−1)^{ℓ(i)−1}

(iii) If σ ∈ Sn is decomposed into the transpositions τi, that is σ = τ1τ2 . . . τr where (for any i ∈ 1 . . . r) τi is a cycle of length 2, then the signum of σ is just

sgn(σ) = (−1)r

(iv) For any n ∈ N with n ≥ 1 the signum is a group homomorphism of the form sgn : Sn → Z∗. That is, for any ϱ, σ ∈ Sn we get

sgn(ϱσ) = sgn(ϱ) · sgn(σ)

(v) (♦) For any n ∈ N with n ≥ 2 we find that An ⊴ Sn is a normal subgroup of Sn and we can even give its number of elements explicitly

#An = n!/2
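Parts (i), (iv) and (v) can be spot-checked by brute force; the snippet below is our own (permutations as tuples over 0 . . . n−1): it computes sgn via the inversion count, confirms #An = n!/2 for small n, and checks the homomorphism property on S3.

```python
# sgn(σ) = (-1)^{I(σ)} via inversion counting, #An = n!/2, and multiplicativity.
from itertools import permutations
from math import factorial

def sign_by_inversions(p):
    """Signum of a permutation tuple, via the number of inversions I(σ)."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inversions

for n in (2, 3, 4):
    even = [p for p in permutations(range(n)) if sign_by_inversions(p) == 1]
    assert len(even) == factorial(n) // 2          # #An = n!/2

# sgn is a group homomorphism: sgn(ϱ ∘ σ) = sgn(ϱ) · sgn(σ)
compose = lambda p, q: tuple(p[q[i]] for i in range(len(p)))
for p in permutations(range(3)):
    for q in permutations(range(3)):
        assert sign_by_inversions(compose(p, q)) == \
            sign_by_inversions(p) * sign_by_inversions(q)
```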


1.3 Defining Rings

In the previous section we have introduced an atomic component - groups (and their generalisations monoids and groupoids). We have also introduced some notation that we employ whenever we are dealing with these objects. We will now glue two groupoids over the same set R together - that is, we consider a set equipped with two binary operations + and · that are compatible in some way (by distributivity). Note that we again write a + b instead of +(a, b) and likewise a · b (or ab only) for ·(a, b). These objects will be called rings and we will dedicate the entire first part of this book to the study of what structures rings may have.

(1.22) Definition:
Let R ≠ ∅ be any non-empty set and consider two binary operations on R, + called addition and · called multiplication

+ : R × R → R : (a, b) ↦ a + b
· : R × R → R : (a, b) ↦ a · b

Then the ordered triple (R,+, ·) is said to be a ringoid iff it satisfies both of the following properties (A) and (D)

(A) (R,+) is a commutative group, that is the addition is associative and commutative, admits a neutral element (called zero-element, denoted by 0) and every element of R has an additive inverse. Formally

∀ a, b, c ∈ R : a+ (b+ c) = (a+ b) + c

∀ a, b ∈ R : a+ b = b+ a

∃ 0 ∈ R ∀ a ∈ R : a+ 0 = a

∀ a ∈ R ∃n ∈ R : a+ n = 0

Note that the zero-element 0 thereby is already uniquely determined and hence the fourth property makes sense. Further, if we are given a ∈ R, then the element n ∈ R with a + n = 0 is uniquely determined, and we call it the negative of a, denoted by −a := n.

(D) The addition and multiplication on R respect the following distributivity laws, i.e. ∀ a, b, c ∈ R we have the properties

a · (b+ c) = (a · b) + (a · c)(a+ b) · c = (a · c) + (b · c)


(1.23) Definition:
In the following let (R,+, ·) be a ringoid; then we consider a couple of additional properties (C), (S) and (R), that R may or may not have

(C) The multiplication · on R is commutative, that is we have the property

∀ a, b ∈ R : a · b = b · a

(S) The multiplication · on R is associative, that is we have the property

∀ a, b, c ∈ R : a · (b · c) = (a · b) · c

(R) The multiplication · on R admits a neutral element 1, that is we have

∃ 1 ∈ R ∀ a ∈ R : 1 · a = a = a · 1

Note that the neutral element 1 of R in this case already is uniquely determined - it will be called the unit-element of R.

(F) Let us assume properties (S) and (R); then we define property (F): every non-zero element of R has an inverse element, that is

∀ 0 ≠ a ∈ R ∃ i ∈ R : a · i = 1 = i · a

Note that 1 is given by (R) and due to (S) the element i ∈ R is uniquely determined by a. We call i the inverse of a, denoted by a−1 := i.

Using these properties we define the following notions: let (R,+, ·) be a ringoid; then (R,+, ·) is said to be commutative resp. called a semi-ring, ring, skew-field or even field, iff

commutative :⇐⇒ (C)
semi-ring :⇐⇒ (S)
ring :⇐⇒ (S) and (R)
commutative ring :⇐⇒ (S), (R) and (C)
skew-field :⇐⇒ (S), (R), (F) and 0 ≠ 1
field :⇐⇒ (S), (R), (C), (F) and 0 ≠ 1

Nota in the mathematical literature the word ring may have many different meanings, depending on what topic a given text pursues. While some authors do not assume rings to have a unit (or even to be associative), others assume them to always be commutative. Hence it also is customary to speak of a non-unital ring in case of a semi-ring and of a unital ring in case of a ring.

(1.24) Remark:
Let (R,+, ·) be a ringoid; then we simplify our notation somewhat by introducing the following conventions

• The zero-element of any ringoid R is denoted by 0. If we deal with several ringoids simultaneously then we will write 0R to emphasise which ringoid the zero-element belongs to.

• Analogously the unit element of any ring R is denoted by 1 and by 1R if we wish to stress the ring R. For some specific rings we will use a different symbol than 1, e.g. 11.


• In order to save some parentheses we agree that the multiplication · is of higher priority than the addition +, e.g. ab + c abbreviates (a · b) + c.

• The additive inverse is sometimes used just like a binary operation, i.e. a − b is short for a + (−b).

• We have introduced the multiplicative inverse a^(-1) of a and we use the convention that the inversion is of higher priority than the multiplication itself, i.e. ab^(-1) is short for a · (b^(-1)).

• If a_1, …, a_n ∈ R are finitely many elements of R, then we denote the sum and product of these to be the following

∑_{i=1}^n a_i := (…((a_1 + a_2) + a_3)…) + a_n

∏_{i=1}^n a_i := (…((a_1 · a_2) · a_3)…) · a_n

• For some integer k ∈ Z we now introduce the following abbreviations

ka := ∑_{i=1}^k a for k > 0,  0 for k = 0,  ∑_{i=1}^{−k} (−a) for k < 0

a^k := ∏_{i=1}^k a for k > 0,  1 for k = 0,  ∏_{i=1}^{−k} a^(-1) for k < 0

For the definition of a^k we, of course, required R to be a ring in the case k = 0 and even that a is invertible (refer to the next section for this notion) in the case k < 0.

(1.25) Remark:

• Consider any ringoid (R,+, ·), then the addition + on R is associative by definition. Hence the result of the sum a_1 + … + a_n ∈ R does not depend on the way we applied parentheses to it (cf. (1.2)). Therefore we may equally well omit the bracketing in the summation, writing

a_1 + … + a_n := ∑_{i=1}^n a_i

The same is true for the multiplication · in a semi-ring (R,+, ·). So by the same reasoning we omit the brackets in products, writing

a_1 … a_n := ∏_{i=1}^n a_i


• Yet by definition the addition + of a ringoid (R,+, ·) also is commutative. Thus the sum a_1 + … + a_n ∈ R does not even depend on the ordering of the a_i (refer to (1.2) again). Now take any finite set Ω and regard a mapping a : Ω → R. Then we may even define the sum over the unordered set Ω. To do this fix any bijection of the form σ : {1 … n} ←→ Ω (in particular n := #Ω). Then we may define

∑_{i∈Ω} a(i) := 0 if Ω = ∅, and ∑_{i=1}^n a(σ(i)) if Ω ≠ ∅

The same is true for the multiplication · in a commutative semi-ring (R,+, ·). So given a : Ω → R and σ as above we define (note that the case Ω = ∅ even requires R to be a commutative ring)

∏_{i∈Ω} a(i) := 1 if Ω = ∅, and ∏_{i=1}^n a(σ(i)) if Ω ≠ ∅

• If I is infinite but the set Ω := { i ∈ I | a(i) ≠ 0 } is finite, we still write an infinite sum over all i ∈ I, that actually is defined by a finite sum

∑_{i∈I} a(i) := ∑_{i∈Ω} a(i)

If R is a commutative ring and analogously Ω := { i ∈ I | a(i) ≠ 1 } is finite, then the same is true for the infinite product

∏_{i∈I} a(i) := ∏_{i∈Ω} a(i)


1.4 Examples

(1.26) Example:

• The most well-known example of a commutative ring are the integers (Z,+, ·). And they are a most beautiful ring indeed - e.g. Z does not contain zero-divisors (that is ab = 0 implies a = 0 or b = 0). And we only wish to remark here that Z is a UFD (that is any a ∈ Z admits a unique decomposition into primes) - this will be formulated and proved as the fundamental theorem of arithmetic.

• Now regard the set 2Z := { 2a | a ∈ Z } of even integers. It is easy to see that the sum and product of even numbers again is even. Hence (2Z,+, ·) is a (commutative) semi-ring under the operations inherited from Z. Yet it does not contain a unit element 1 and hence does not qualify to be a ring.

• Next we will regard a somewhat strange ring, the so-called zero-ring Z. As a set it is defined to be Z := { 0 } (with 0 ∈ Z if you wish). As it solely contains one point 0 it is clear how the binary operations have to be defined: 0 + 0 := 0 and 0 · 0 := 0. It is easy to check that (Z,+, ·) thereby becomes a commutative ring in which 1 = 0. We will soon see that Z even is the only ring in which 1 = 0 holds true. In this sense this ring is a bit pathological.

• Now consider an arbitrary (commutative) ring (R,+, ·) and a non-empty set I ≠ ∅. Then the set R^I of functions from I to R can be turned into another (commutative) ring (R^I,+, ·) under the pointwise operations of functions

R^I := { f | f : I → R }

f + g : I → R : i ↦ f(i) + g(i)

f · g : I → R : i ↦ f(i) · g(i)

• Let X be any set and denote its power set by P(X) := { A | A ⊆ X }. And for any subsets A, B ⊆ X we define the symmetric difference as A∆B := (A ∪ B) \ (A ∩ B) = (A \ B) ∪ (B \ A) ⊆ X. Thereby (P(X),∆,∩) becomes a commutative ring with zero-element ∅ and unit element X. And if A ⊆ X is any subset, then the additive inverse of A is just −A = A again.

• The next ring we study are the rationals (Q,+, ·). It is well-known that Q even is a field - if a/b ≠ 0 then a ≠ 0 and hence a/b is invertible with (a/b)^(-1) = b/a. We will even present a construction of Q from Z in a separate example in this section.


• (♦) Another well-known field are the reals (R,+, ·). They may be introduced axiomatically (as a positively ordered field satisfying the supremum-axiom), but we would like to sketch an algebraic construction of R. Let us denote the sets of Cauchy-sequences and zero-sequences over the rationals Q respectively

C := { (q_n) ⊆ Q | ∀ k ∈ N ∃ n(k) ∈ N ∀ m, n ≥ n(k) : |q_n − q_m| < 1/k }

z := { (q_n) ⊆ Q | ∀ k ∈ N ∃ n(k) ∈ N ∀ n ≥ n(k) : |q_n| < 1/k }

Then C becomes a commutative ring under the pointwise operations of mappings, that is (p_n) + (q_n) := (p_n + q_n) and (p_n)(q_n) := (p_n q_n). And z ◁ C is a maximal ideal of C (cf. section 2.1). Now the reals can be introduced as the quotient (cf. section 1.6) of C modulo z

R := C/z

As z has been maximal, R becomes a field. Thereby Q can be embedded into R, as Q → R : q ↦ (q). And we obtain a positive order on R by letting (p_n) + z ≤ (q_n) + z :⇐⇒ ∃ m ∈ N ∀ n ≥ m : p_n ≤ q_n. Note that this completion of Q (that is Cauchy-sequences modulo zero-sequences) can be generalized to arbitrary metric spaces. Its effect is that the resulting space becomes complete (that is Cauchy sequences are already convergent). In our case this also guarantees the supremum-axiom. Hence this construction truly realizes the reals.

• Most of the previous examples have been quite large (considering the number of elements involved). We would now like to present an example of a finite field consisting of precisely four elements F = { 0, 1, a, b }. As F is finite, we may give the addition and multiplication in tables

+ | 0 1 a b
--+--------
0 | 0 1 a b
1 | 1 0 b a
a | a b 0 1
b | b a 1 0

· | 0 1 a b
--+--------
0 | 0 0 0 0
1 | 0 1 a b
a | 0 a b 1
b | 0 b 1 a
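As a sanity check of our own (not part of the text), one can transcribe the two tables and verify the field axioms by brute force:

```python
# Brute-force verification that the two tables define a field F = {0, 1, a, b}.

F = "01ab"

ADD = {'0': "01ab", '1': "10ba", 'a': "ab01", 'b': "ba10"}  # rows of the + table
MUL = {'0': "0000", '1': "01ab", 'a': "0ab1", 'b': "0b1a"}  # rows of the · table

def add(x, y):
    return ADD[x][F.index(y)]

def mul(x, y):
    return MUL[x][F.index(y)]

for x in F:
    for y in F:
        assert add(x, y) == add(y, x) and mul(x, y) == mul(y, x)   # commutativity
        for z in F:
            assert add(add(x, y), z) == add(x, add(y, z))          # associativity of +
            assert mul(mul(x, y), z) == mul(x, mul(y, z))          # associativity of ·
            assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))  # distributivity

# every non-zero element has a multiplicative inverse, so F is a field
for x in "1ab":
    assert any(mul(x, y) == '1' for y in "1ab")
```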


(1.27) Example:
Having introduced Z, Q and the reals R we would like to turn our attention to another important case - the residue rings of Z. First note that the integers Z allow a division with remainder (cf. section 2.6). That is if we let 2 ≤ n ∈ N and a ∈ Z then there are uniquely determined numbers q ∈ Z and r ∈ { 0, …, n−1 } such that

a = qn + r

Thereby r is called the remainder of a in the integer division of a by n. Formally one denotes (which is common in programming languages)

a div n := q

a mod n := r

And by virtue of this operation "mod" we may now define the residue ring (Z_n,+, ·) of Z modulo n. First let Z_n := { 0, 1, …, n−1 } as a set. And if a, b ∈ Z_n, then we define the operations

a + b := (a + b) mod n

a · b := (a · b) mod n

Thereby (Z_n,+, ·) will in fact be a commutative ring. And Z_n will be a field if and only if n is a prime number. We will soon give a more natural construction of Z_n as the quotient Z/nZ, as this will be far better suited to derive the properties of this ring.
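A minimal sketch of Z_n in code, following the "mod" definition above (the helper names are our own); the brute-force is_field check illustrates that Z_n is a field exactly for prime n:

```python
# The residue ring Z_n: elements are 0, ..., n-1, operations are taken mod n.

def zn_add(a, b, n):
    return (a + b) % n

def zn_mul(a, b, n):
    return (a * b) % n

def is_field(n):
    """Z_n is a field iff every non-zero residue has a multiplicative inverse."""
    return all(any(zn_mul(a, b, n) == 1 for b in range(1, n))
               for a in range(1, n))

# Z_n is a field exactly when n is prime:
assert [n for n in range(2, 14) if is_field(n)] == [2, 3, 5, 7, 11, 13]
```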

(1.28) Example:
Let (R,+, ·) be an integral domain, that is R ≠ 0 is a non-zero commutative ring such that for any two elements a, b ∈ R we have the implication

ab = 0 =⇒ a = 0 or b = 0

Then we take the set X := R × (R \ { 0 }) and obtain an equivalence relation ∼ on X by letting (where (a, u) and (b, v) ∈ X)

(a, u) ∼ (b, v) :⇐⇒ av = bu

If now (a, u) ∈ X then we denote its equivalence class by a/u, and let us denote the quotient set of X modulo ∼ by Q. Formally that is

a/u := { (b, v) ∈ X | av = bu }

Q := { a/u | a ∈ R, u ∈ R \ { 0 } }

Thereby Q becomes a commutative ring (with zero-element 0/1 and unit element 1/1) under the following (well-defined) addition and multiplication

a/u + b/v := (av + bu)/(uv)

a/u · b/v := (ab)/(uv)


In fact (Q,+, ·) even is a field, called the quotient field of R. This is due to the fact that the inverse of a/u ≠ 0/1 is given to be (a/u)^(-1) = u/a. This construction will be vastly generalized in section 2.9. Note that this is precisely the way to obtain the rationals Q by starting with R = Z.
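For R = Z the construction yields the rationals, which Python's fractions.Fraction realizes: the arithmetic below matches a/u + b/v = (av + bu)/uv and a/u · b/v = ab/uv, with the equivalence (a, u) ∼ (b, v) ⇐⇒ av = bu handled by cancellation (this check is ours, not the book's):

```python
from fractions import Fraction

# Quotient-field arithmetic for R = Z, i.e. the rationals Q.
a_u, b_v = Fraction(1, 2), Fraction(1, 3)
assert a_u + b_v == Fraction(1 * 3 + 1 * 2, 2 * 3)   # a/u + b/v = (av + bu)/uv
assert a_u * b_v == Fraction(1 * 1, 2 * 3)           # a/u · b/v = ab/uv
assert Fraction(2, 4) == Fraction(1, 2)              # (2, 4) ~ (1, 2) since 2·2 = 4·1
assert Fraction(3, 5) ** -1 == Fraction(5, 3)        # inverse of a/u is u/a
```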

(1.29) Example:
Consider an arbitrary ring (R,+, ·) and 1 ≤ n ∈ N, then an (n × n)-matrix over R is a mapping of the form A : {1 … n} × {1 … n} → R. And we denote the set of all such by mat_n R := { A | A : {1 … n} × {1 … n} → R }. Usually a matrix is written as an array of the following form (where a_{i,j} := A(i, j))

A = (a_{i,j}) =
( a_{1,1} … a_{1,n} )
(    ⋮    ⋱    ⋮    )
( a_{n,1} … a_{n,n} )

And using this notation we may turn mat_n R into a (non-commutative even if R is commutative) ring by defining the addition and multiplication

(a_{i,j}) + (b_{i,j}) := (a_{i,j} + b_{i,j})

(a_{i,j}) · (b_{i,j}) := ( ∑_{s=1}^n a_{i,s} b_{s,j} )

It is immediately clear from the construction that the zero-element resp. the unit-element are given by the zero-matrix and unit-matrix respectively

0 =
( 0 … 0 )
( ⋮ ⋱ ⋮ )
( 0 … 0 )

𝟙 =
( 1   0 )
(  ⋱   )
( 0   1 )
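A quick illustration of our own that mat_n R is non-commutative even over the commutative ring Z; mat_mul implements the row-by-column product defined above (the helper name is made up):

```python
# Matrix multiplication (a_{i,j})·(b_{i,j}) := (sum_s a_{i,s} b_{s,j}) on nested lists.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][s] * B[s][j] for s in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
assert mat_mul(A, B) == [[1, 0], [0, 0]]
assert mat_mul(B, A) == [[0, 0], [0, 1]]   # AB != BA, so mat_2(Z) is not commutative
```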

(1.30) Example:
Let now (R,+, ·) be any commutative ring, then we define the (commutative) ring of formal power series over R, as the following set

R[[t]] := { f | f : N → R }

And if f ∈ R[[t]] then we will write f[k] instead of f(k), that is f : k ↦ f[k]. Further we denote f as (note that this is purely notational and no true sum)

∑_{k=0}^∞ f[k] t^k := f


And thereby we may define the sum and product of two formal power series f, g ∈ R[[t]], by pointwise summation f + g : k ↦ f[k] + g[k] and a convolution product f · g : k ↦ f[0]g[k] + f[1]g[k−1] + … + f[k]g[0]. In terms of the above notation that is

f + g := ∑_{k=0}^∞ ( f[k] + g[k] ) t^k

f · g := ∑_{k=0}^∞ ( ∑_{i+j=k} f[i] g[j] ) t^k

An elementary computation (cf. section 7.3) shows that (R[[t]],+, ·) truly becomes a commutative ring. The zero-element of this ring is given by 0 : k ↦ 0 and the unit element is 1 = t^0 : k ↦ δ(k, 0). The element t : k ↦ δ(k, 1) plays another central role. An easy computation shows that for any n ∈ N we get t^n : k ↦ δ(k, n). If now 0 ≠ f ∈ R[[t]] is a non-zero formal power series we define the degree of f to be

deg(f) := sup { k ∈ N | f[k] ≠ 0 } ∈ N ∪ { ∞ }

That is deg(f) < ∞ just states that the set { k ∈ N | f[k] ≠ 0 } is a finite subset of N. And thereby we define the polynomial ring over R to be

R[t] := { f ∈ R[[t]] | deg(f) < ∞ }

By definition R[t] ⊆ R[[t]] is a subset, but one easily verifies that R[t] even is another commutative ring under the addition and multiplication inherited from R[[t]]. Further note that every polynomial f ∈ R[t] now truly is a finite sum of the form

f = ∑_{k=0}^{deg(f)} f[k] t^k
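The convolution product, restricted to polynomials stored as finite coefficient lists, can be sketched as follows (poly_mul is our own helper name, not from the text):

```python
# Polynomials as coefficient lists f = [f[0], f[1], ...]; the product is the
# convolution f·g : k -> sum over i+j=k of f[i]g[j] from the definition of R[[t]].

def poly_mul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

# (1 + t)·(1 - t) = 1 - t^2
assert poly_mul([1, 1], [1, -1]) == [1, 0, -1]
```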

(1.31) Example: (♦)
The next example is primarily concerned with ideals, not with rings, and should hence be postponed until you want to study an example of these. Now fix any (commutative) base ring (R,+, ·) and a non-empty index set I ≠ ∅. We have already introduced the ring S := R^I = F(I, R) of functions from I to R under the pointwise operations. For any subset A ⊆ I we now obtain an ideal of S by letting

v(A) := { f ∈ S | ∀ a ∈ A : f(a) = 0 }

And for any two subsets A, B ⊆ I it is easy to prove the following identities

v(A) v(B) = v(A) ∩ v(B) = v(A ∪B)

v(A) + v(B) = v(A ∩B)


Clearly (if I contains at least two distinct points) S contains zero-divisors. And if I is infinite then S is not a noetherian ring (we obtain a strictly ascending chain of ideals by letting a_k := v(I \ { a_1, …, a_k }) for a sequence of points a_i ∈ I). In this sense S is a really bad ring. On the other hand this ring can easily be dealt with. Hence we recommend looking at this ring when searching for counterexamples in general ring theory. Finally if A ⊆ I is a finite set then the radical of the ideal v(A) can also be given explicitly

√v(A) = { f ∈ S | ∀ a ∈ A : f(a) ∈ nil R }

Proof: in the first equality v(A) ∩ v(B) = v(A ∪ B) is clear and the containment v(A) v(B) ⊆ v(A) ∩ v(B) is generally true. Thus suppose h ∈ v(A ∪ B) and define

f(a) := 0 if a ∈ A, 1 if a ∉ A    and    g(a) := 0 if a ∈ B, h(a) if a ∉ B

then f ∈ v(A) and g ∈ v(B) are clear and also h = fg. Thus we have proved v(A ∪ B) ⊆ v(A) v(B) and thereby finished the first set of identities. In the second equality v(A) + v(B) ⊆ v(A ∩ B) is clear. And if we are conversely given any h ∈ v(A ∩ B) then let us define

f(a) := 0 if a ∈ A, h(a) if a ∉ A    and    g(a) := h(a) if a ∈ A \ B, 0 if a ∉ A \ B

thereby it is clear that f ∈ v(A) and g ∈ v(B) and an easy distinction of cases truly yields h = f + g. Now consider some f ∈ S with f^k ∈ v(A). That is for any a ∈ A we get f(a)^k = f^k(a) = 0. In other words f(a) ∈ nil R for any a ∈ A. Conversely suppose that A is finite and that for any a ∈ A we get f(a) ∈ nil R. That is for any a ∈ A there is some k(a) ∈ N such that f^{k(a)}(a) = 0. By taking k := max { k(a) | a ∈ A }, we find f^k ∈ v(A).

(1.32) Example:
The complex numbers (C,+, ·) also form a field (that even has a lot of properties that make it better-behaved than the reals). We want to present three ways of introducing C now (note that all are isomorphic):

(1) First of all define C := R² to be the real plane. For any two complex numbers z = (a, b) and w = (c, d) we define the operations

(a, b) + (c, d) := (a + c, b + d)

(a, b) · (c, d) := (ac − bd, ad + bc)

The advantage of this construction is that it is purely elementary. The disadvantage is that one has to check that (C,+, ·) truly becomes a field under these operations (which we leave as an exercise). We only present the inverse of (0, 0) ≠ z = (a, b) ∈ C. Define the complex conjugate of z to be z̄ := (a, −b) and let ν(z) := z z̄ = a² + b² ∈ R, then z^(-1) = (a ν(z)^(-1), −b ν(z)^(-1)). Let us now denote 1 = (1, 0) and i = (0, 1) ∈ C, then it is clear that 1 is the unit element of C and that i² = −1. Further any z = (a, b) ∈ C can be written uniquely as z = a1 + bi. It is customary to write z = a + ib for this however.


(2) (♦) The next construction will imitate the one in (1) - yet it realizes (a, b) as a real (2 × 2)-matrix. This has the advantage that addition and multiplication are well-known for matrices

C := { [ a, b ; −b, a ] | a, b ∈ R }

(writing [ a, b ; −b, a ] for the matrix with rows (a, b) and (−b, a)). It is straightforward to see that C thereby becomes a field under the addition and multiplication of matrices, that coincides with the one introduced in (1) (check this, it's neat). Note that thereby the determinant plays the role of ν(z) = det(z) and the transposition of matrices is the complex conjugation z̄ = z*.

(3) (♦) The third method implements the idea that i is introduced in such a way that i² = −1. We simply force a solution of t² + 1 = 0 into C

C := R[t]/(t² + 1)R[t]

Comparing this construction with the one in (1) we have to identify (a, b) with a + bt + (t² + 1)R[t]. In particular i = t + (t² + 1)R[t]. Then it again is easy to see that addition and multiplication coincide under this identification. The advantage of this construction is that it focusses on the idea of finding a solution to t² + 1 = 0 (namely i), and that C immediately is a field (as t² + 1 is prime, R[t] is a PID and hence (t² + 1)R[t] is maximal). The disadvantage is that it already requires the algebraic machinery, which remains to be introduced here.
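Construction (1) can be sanity-checked against Python's built-in complex numbers; cmul and cinv below are our own helper names implementing the pair multiplication and the inverse via the conjugate:

```python
# Complex numbers as pairs (a, b) with the operations from construction (1).

def cmul(z, w):
    """(a, b) · (c, d) := (ac - bd, ad + bc)."""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

def cinv(z):
    """Inverse via the conjugate: z^(-1) = (a/nu, -b/nu) with nu(z) = a^2 + b^2."""
    a, b = z
    nu = a * a + b * b
    return (a / nu, -b / nu)

assert cmul((0, 1), (0, 1)) == (-1, 0)               # i^2 = -1
z, w = (3, 4), (1, -2)
assert complex(*cmul(z, w)) == complex(*z) * complex(*w)
assert abs(complex(*cmul(z, cinv(z))) - 1) < 1e-12   # z · z^(-1) = 1
```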

(1.33) Example: (♦)
Now fix any square-free 0 ≠ d ∈ Z (d being square-free means that there is no prime number p ∈ Z such that p² divides d). Then we define the following subset of the (reals or) complex numbers

Z[√d] := { a + b√d | a, b ∈ Z }

Note that Z[√d] is contained in the reals if and only if 0 < d, and in the case of d < 0 we have √d = i√(−d) ∈ C. In any case Z[√d] becomes a commutative ring under the addition and multiplication inherited from C

(a + b√d) + (e + f√d) = (a + e) + (b + f)√d

(a + b√d) · (e + f√d) = (ae + dbf) + (af + be)√d

It will be most interesting to see how the algebraic properties of these rings vary with d. E.g. for d ∈ { −2, −1, 2, 3 } Z[√d] will be a Euclidean domain under the Euclidean function ν(a + b√d) := |a² − db²|. Yet Z[√d] will not even be a UFD for d ≤ −3.


(1.34) Example:
We have just introduced the complex numbers as a two-dimensional space over the reals. The next (and last sensibly possible, cf. the theorem of Frobenius) step in the hierarchy leads us to the quaternions H. These are defined to be a four-dimensional space H := R⁴ over the reals. And analogous to the above we specifically denote four elements

1 := (1, 0, 0, 0)

i := (0, 1, 0, 0)

j := (0, 0, 1, 0)

k := (0, 0, 0, 1)

That is any element z = (a, b, c, d) ∈ H can be written uniquely in the form z = a + ib + jc + kd. The addition of elements of H is pointwise and the multiplication is then given by linear expansion of the following multiplication table for these basis elements

· | 1  i  j  k
--+-----------
1 | 1  i  j  k
i | i −1  k −j
j | j −k −1  i
k | k  j −i −1

For z = a + ib + jc + kd and w = p + iq + jr + ks ∈ H these operations are

z + w = (a + p) + i(b + q) + j(c + r) + k(d + s)

zw = (ap − bq − cr − ds) + i(aq + bp + cs − dr) + j(ar − bs + cp + dq) + k(as + br − cq + dp)

Note that we may again define z̄ := a − ib − jc − kd and thereby obtain ν(z) = z z̄ = a² + b² + c² + d² ∈ R such that any non-zero 0 ≠ z ∈ H has an inverse z^(-1) = ν(z)^(-1) z̄ = (a ν(z)^(-1), −b ν(z)^(-1), −c ν(z)^(-1), −d ν(z)^(-1)). Yet H clearly is not commutative, as ij = k = −ji. Thus the quaternions form a skew-field, but not a field. Using the complex numbers C we may also realize the quaternions as (2 × 2)-matrices over C. Given a + ib + jc + kd ∈ H we pick up the complex numbers z = a + ib and w = c + id ∈ C and identify a + ib + jc + kd with a (2 × 2)-matrix of the following form

H ≅ { [ z, w ; −w̄, z̄ ] | z, w ∈ C }

(writing [ z, w ; −w̄, z̄ ] for the matrix with rows (z, w) and (−w̄, z̄)).

(1.35) Example: (♦)
Let (R,+, ·) be an arbitrary ring, we will later employ the opposite ring (R,+, ·)^op of R. This is defined to be R as a set, under the same addition + but with a multiplication that reverses the order of elements. Formally

(R,+, ·)^op := (R,+, ∘) where a ∘ b := b · a

Note that thereby (R,+, ·)^op is a ring again with the same properties as (R,+, ·) before. And if (R,+, ·) has been commutative, then the opposite ring of R is just R itself (even as a set), i.e. (R,+, ·)^op = (R,+, ·).


(1.36) Example:
Sometimes it is helpful to introduce infinity as a number. In this example we want to present how this can be done formally. The advantage of this construction is the formal ease by which infinity can be handled, the disadvantage is that the extensions lose many important algebraic properties.

(i) Arithmetic in N̄ := N ∪ { ∞ }: the set N of natural numbers is well-known. We now pick up a new symbol ∞, which we call infinity, and extend the natural order on N by making ∞ a maximal element:

∀ k ∈ N : k < ∞

Also we can extend the usual addition + and multiplication · on N by defining the addition and multiplication with any a ∈ N̄ to be

a + ∞ = ∞ + a := ∞

a · ∞ = ∞ · a := ∞ if a ≠ 0, and 0 if a = 0

Thereby N̄ doesn't lose many properties of N: still (N̄,+) and (N̄, ·) are commutative monoids, and we also retain the distributivity law between addition and multiplication (it is straightforward to check all this), i.e. for any a, b and c ∈ N̄ we get:

a · (b + c) = a · b + a · c

(a + b) · c = a · c + b · c

(ii) (♦) Arithmetic in R̄ := R ∪ { ∞, −∞ }: the field R of real numbers has been introduced earlier in this section and is probably well-known to the reader already. Again we pick up a new symbol ∞, which we call infinity, and extend the natural order on R by making ∞ a maximal and −∞ a minimal element:

∀ x ∈ R : −∞ < x and x < ∞

Again we extend the usual addition + and multiplication · by defining the addition and multiplication with ∞ resp. with −∞

∀ a ∈ R ∪ { ∞ } : a + ∞ = ∞ + a := ∞

∀ a ∈ R ∪ { −∞ } : a + (−∞) = (−∞) + a := −∞

∀ a ∈ R \ { 0 } : a · ∞ = ∞ · a := ∞ if a > 0, and −∞ if a < 0

∀ a ∈ R \ { 0 } : a · (−∞) = (−∞) · a := −∞ if a > 0, and ∞ if a < 0


Note that we did neither define ∞ + (−∞) nor did we define 0 · ∞. Hence the set R̄ is no algebraic structure, as the operations + and · are not fully defined! In fact any such definition would only result in contradictions: Suppose we had ∞ − ∞ = 0, as could have been expected; then since 1 · ∞ = ∞ = 2 · ∞ we had ∞ = (2 − 1) · ∞ = ∞ − ∞ = 0. Hence ∞ = 0 wouldn't be a new element. In conclusion we just cannot define ∞ + (−∞) or 0 · ∞; these terms have to be evaded - that is they may not occur within a computation. This is the one and only reason why computations with infinity have to be handled with great care.

The reason behind this is an analytic one: we would like to write (n) → ∞, as the sequence (n) ⊆ R does not converge in R but increases strictly monotonously. Yet the difference of two definitely divergent sequences can be just about anything: for any a ∈ R we get (n + a) − (n) = (a) → a, whereas (n²) − (n) → ∞. Thus the result of ∞ − ∞ is completely arbitrary, as anything might occur.


1.5 First Concepts

Some Combinatorics
In the following proposition we want to establish some basic computational rules that are valid in arbitrary rings. But in order to do this we first have to recall some elementary facts from combinatorics: First consider a set containing 1 ≤ k ∈ N elements. If we want to put a linear order to this set, we have to choose some first element. To do this there are k possibilities. We commence by choosing the second element - now having (k − 1) possibilities, and so on. Thus we end up with a number of linear orders on this set that equals k!, the factorial of k, that is defined to be

k! := #{ σ : {1 … k} → {1 … k} | σ bijective } = k · (k − 1) · · · 2 · 1

If k = 0 we specifically define 0! := 1. Next we consider a set containing n ∈ N elements. If we want to select a subset I of precisely k ∈ { 0, …, n } elements we have to start by choosing a first element (n possibilities). In the next step we may only choose from (n − 1) elements and so on. But as a subset does not depend on the order of its elements we still have to divide this by k! - the number of linear orders on I. Altogether we end up with

(n choose k) := #{ I ⊆ {1 … n} | #I = k } = (n/k) · ((n−1)/(k−1)) · · · ((n−k+1)/1) = n! / (k! (n−k)!)

Note that this definition also works out in the cases n = 0 and k = 0 (in which the binomial coefficient (n choose k) equals one). That is we get (n choose 0) = 1 = (n choose n). Furthermore the binomial coefficients satisfy a famous recursion formula

(n+1 choose k) = (n choose k) + (n choose k−1)

Finally consider some multi-index α = (α_1, …, α_k) ∈ N^k. Then it will be useful to introduce the following notations (where n := |α|)

|α| := α_1 + … + α_k

α! := (α_1!) · · · (α_k!)

(n choose α) := n! / α!


(1.37) Proposition: (viz. 306)

• Let (R,+, ·) be any semi-ring and a, b ∈ R be arbitrary elements of R. Then the following statements hold true

0 = −0

a0 = 0 = 0a

(−a)b = −(ab) = a(−b)

(−a)(−b) = ab

• If now a_1, …, a_m ∈ R and b_1, …, b_n ∈ R are finitely many, arbitrary elements of R, then we also obtain the general rule of distributivity

( ∑_{i=1}^m a_i ) ( ∑_{j=1}^n b_j ) = ∑_{i=1}^m ∑_{j=1}^n a_i b_j

And if J(1), …, J(n) are finite index sets (where 1 ≤ n ∈ N) and a(i, j) ∈ R for any i ∈ {1 … n}, j ∈ J(i), then we let J := J(1) × … × J(n) (and j = (j_1, …, j_n) ∈ J) and thereby obtain

∏_{i=1}^n ∑_{j_i ∈ J(i)} a(i, j_i) = ∑_{j ∈ J} ∏_{i=1}^n a(i, j_i)

• Now suppose (R,+, ·) is a ring, n ∈ N and a, b ∈ R are elements that mutually commute (that is ab = ba), then we get the binomial rule

(a + b)^n = ∑_{k=0}^n (n choose k) a^k b^(n−k)

And if (R,+, ·) even is a commutative ring, a_1, …, a_k ∈ R are finitely many elements and n ∈ N, then we even get the polynomial rule

(a_1 + … + a_k)^n = ∑_{|α|=n} (n choose α) a^α

where the above sum runs over all multi-indices α ∈ N^k that satisfy |α| = α_1 + … + α_k = n. And a^α abbreviates a^α := (a_1)^(α_1) · · · (a_k)^(α_k).

(1.38) Remark:
Thus there are many computational rules that hold true for arbitrary rings. However there also are some exceptions to integer (or even real) arithmetic. E.g. we are not allowed to deduce a = 0 or b = 0 from ab = 0. As a counterexample regard a = 2 + 6Z and b = 3 + 6Z in Z_6. A special case of this is the following: we may not deduce a = 0 from a = −a. As a counterexample regard a = 1 + 2Z in Z_2.


(1.39) Remark:
An exceptional role is played by the zero-ring R = { 0 }: it clearly is a commutative ring (under the trivial operations) that would even satisfy condition (F). But it is the only ring in which 0 = 1. More formally we've got the equivalence

R = { 0 } ⇐⇒ 0 = 1

[as =⇒ is clear and in the converse direction regard a = 1a = 0a = 0]. This equality 0 = 1 will lead to some peculiarities however. This is precisely why we required 0 ≠ 1 for skew-fields and fields. And several theorems will require R to be not the zero-ring, which we will abbreviate by R ≠ 0.

(1.40) Remark: (viz. 310)
As a neat little application of the binomial rule let us present a generalisation of the recursion formula for the binomial coefficients: regarding the expression (x + y)^n = (x + y)^k (x + y)^(n−k) in Z[x, y] and comparing coefficients, one easily verifies that for any k, n ∈ N and any r ∈ { 0, …, k } we get

∑_{i=0}^r (k choose i) (n−k choose r−i) = (n choose r)

(1.41) Definition: (viz. 310)
Let (R,+, ·) be any semi-ring and b ∈ R be any element of R. Then we define the set zd R of zero-divisors, the set nzd R of non-zero-divisors, the nil-radical nil R and the annihilator ann(R, b) of b to be

zd R := { a ∈ R | ∃ 0 ≠ b ∈ R : ab = 0 }

nzd R := { a ∈ R | ∀ b ∈ R : ab = 0 =⇒ b = 0 }

nil R := { a ∈ R | ∃ k ∈ N : a^k = 0 }

ann(R, b) := { a ∈ R | ab = 0 }

An element a ∈ nil R contained in the nil-radical is also said to be nilpotent. If now R even is a ring we define the set R* of units (or invertible elements) and the set R• of relevant elements of R to be

R* := { a ∈ R | ∃ b ∈ R : ab = 1 = ba }

R• := R \ ( { 0 } ∪ R* )

And (R,+, ·) is said to be integral, iff it does not contain any non-zero zero-divisors, that is iff it satisfies one of the following equivalent statements

(a) zd R = { 0 }

(b) for any a, b and c ∈ R such that a 6= 0 we get ab = ac =⇒ b = c

(b’) for any a, b and c ∈ R such that a 6= 0 we get ba = ca =⇒ b = c

(c) for any a, b ∈ R such that a 6= 0 we get ab = a =⇒ b = 1

(c’) for any a, b ∈ R such that a 6= 0 we get ba = a =⇒ b = 1

48 1 Groups and Rings

Page 49: Abstract Algebra - Nasehpour · Abstract Algebra Rings, Modules, Polynomials, ... linear algebra on in nite dimensional vector-spaces, ... Thus we have chosen an unusual approach:

Now (R,+, ·) is said to be an integral domain, iff it is a commutative ring that also is integral, i.e. a commutative ring with zd R = { 0 }.

(1.42) Remark:

• Consider any semi-ring (R,+, ·), two elements a, b ∈ R and a non-zero-divisor n ∈ nzd R. Then we obtain the equivalence

a = b ⇐⇒ na = nb

Proof: "=⇒" is clear, and if na = nb then n(a − b) = 0, which implies a − b = 0 (and hence a = b), since n is a non-zero-divisor.

• Let (R,+, ·) be a ring, a ∈ R be any element and n ∈ nzd R be a non-zero-divisor. Then we obtain the implication

na = 1 =⇒ an = 1

Proof: since n(an) = (na)n = n and hence n(an − 1) = 0. But as n is a non-zero-divisor this implies an − 1 = 0 and hence an = 1.

• Consider a commutative ring (R,+, ·) and a_1, …, a_n ∈ R finitely many elements. Now let u := a_1 … a_n ∈ R, then we obtain the implication

u ∈ R* =⇒ a_1, …, a_n ∈ R*

Proof: choose any j ∈ {1 … n} and let â_j := ∏_{i≠j} a_i, then by construction we get 1 = u^(-1)(a_1 … a_n) = (u^(-1) â_j) a_j. And hence we see a_j ∈ R*.

• If (R,+, ·) is a skew-field, then R already is integral (i.e. zd R = { 0 }) and allows division: that is for any a, b ∈ R with a ≠ 0 we get

∃! u, v ∈ R : ua = b and av = b

Proof: consider a ≠ 0; since R is a skew-field there is some i ∈ R such that ai = 1 = ia. Now suppose ab = 0, then b = (ia)b = i(ab) = 0 and hence a ∈ nzd R. This proves zd R = { 0 }. Now consider a, b ∈ R with a ≠ 0. Again choose i ∈ R such that ai = 1 = ia and let u := bi and v := ib. Then ua = (bi)a = b(ia) = b and av = a(ib) = (ai)b = b. Now suppose u′, v′ ∈ R with ua = b = u′a and av = b = av′. Then a(v − v′) = 0 but as a ≠ 0 we have seen that this implies v − v′ = 0 and hence v = v′. Likewise (u − u′)a = 0 but as a ≠ 0 this can only be if u − u′ = 0 and hence u = u′.

The sets of zero-divisors zd R and of relevant elements R• rarely carry any noteworthy algebraic structure. Yet the units R* form a group, the non-zero-divisors nzd R are multiplicatively closed, the nil-radical nil R is an (even perfect) ideal of R and the annihilator ann(R, b) of some element b ∈ R is a submodule. Though we have not introduced these structures yet, we would like to present a formal proposition (and proof) of these facts already.


(1.43) Proposition: (viz. 309) (♦)

(i) Let (R,+, ·) be any semi-ring, then R is the disjoint union of its zero-divisors and non-zero-divisors, formally that is

nzd R = R \ zd R

(ii) In any non-zero (R ≠ 0) ring (R,+, ·) the nil-radical is contained in the set of zero-divisors. And the zero-divisors are likewise contained in the non-units of R. That is we've got the inclusions

nil R ⊆ zd R ⊆ R \ R*

(iii) Let (R,+, ·) be any semi-ring, then the set nzd R ⊆ R of non-zero-divisors of R is multiplicatively closed, that is we get

1 ∈ nzd R

a, b ∈ nzd R =⇒ ab ∈ nzd R

(iv) Consider any ring (R,+, ·); then (R∗, ·) is a group (where · denotes the multiplication of R), called the multiplicative group of R.

(v) If (R,+, ·) is any ring, then R is a skew-field if and only if the multiplicative group of R consists precisely of the non-zero elements

R skew-field ⇐⇒ R∗ = R \ {0}

(vi) If (R,+, ·) is a commutative ring, then the nil-radical is an ideal of R

nilR i R

(vii) Let (R,+, ·) be any ring and b ∈ R an element of R. Then the annulator ann(R, b) is a left-ideal (i.e. submodule) of R

ann(R, b) ≤m R

(viii) Let (R,+, ·) be a commutative ring and u ∈ R∗ be a unit of R. Then u + nil R ⊆ R∗. To be precise, if a ∈ R with a^n = 0, then we get

(u + a)^{-1} = ∑_{k=0}^{n-1} (-1)^k a^k u^{-k-1}
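As a hedged numeric illustration (the choice of ring Z/16Z and of the elements u and a below is ours, not the book's), the inversion formula can be checked directly:

```python
# u + a for a unit u and a nilpotent a is again a unit, with the inverse
# given by the finite alternating series of (1.43)(viii).
n, u, a = 16, 3, 4          # in Z/16Z: a^2 = 16 ≡ 0, so a is nilpotent
assert pow(a, 2, n) == 0

u_inv = pow(u, -1, n)       # modular inverse of u (Python 3.8+)

# (u + a)^{-1} = sum_k (-1)^k a^k u^{-k-1}; two terms suffice since a^2 = 0
inv = sum((-1) ** k * pow(a, k, n) * pow(u_inv, k + 1, n) for k in range(2)) % n
assert (u + a) * inv % n == 1
```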


(1.44) Example:
We present a somewhat advanced example that builds upon some results from linear algebra. Let us regard the ring R := mat2(Z) of (2×2)-matrices over the integers. We will need the notions of determinant and trace

det ( a b ; c d ) := ad − bc ∈ Z

tr ( a b ; c d ) := a + d ∈ Z

( a b ; c d )♯ := ( d −b ; −c a )

Then we can explicitly determine which matrices are invertible (that is, units), which are zero-divisors and which are nilpotent. We find

R∗ = { A ∈ R | det A = ±1 }
zd R = { A ∈ R | det A = 0 }
nil R = { A ∈ R | det A = tr A = 0 }

(♦) Prob first note that an easy computation yields AA♯ = (det A) 𝟙 = A♯A. Hence if A is invertible, then A^{-1} = (det A)^{-1} A♯. Thus A is invertible if and only if det A is a unit (in Z), but as the units of Z are given to be Z∗ = {±1}, this already is the first identity. For the second identity we begin with det A = 0. Then AA♯ = (det A) 𝟙 = 0 and hence A is a zero-divisor. Conversely suppose that A ∈ zd R is a zero-divisor, then AB = 0 for some B ≠ 0 and hence (det A)B = A♯AB = 0. But as B is non-zero this implies det A = 0. Finally if det A = 0 then an easy computation shows A² = (tr A)A, thus if also tr A = 0 then A² = 0 and hence A ∈ nil R. And if conversely A ∈ nil R ⊆ zd R, then det A = 0 and hence (by induction on n) A^n = (tr A)^{n-1} A. As A is nilpotent there is some 1 ≤ n ∈ N such that 0 = A^n = (tr A)^{n-1} A and hence tr A ∈ nil Z = {0}.
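The identities used in this proof can also be checked by brute force; the following sketch (our illustration, not part of the book) sweeps all 2×2 integer matrices with entries in {−2, …, 2}:

```python
from itertools import product

def matmul(A, B):
    """Multiply 2x2 matrices given as ((a, b), (c, d))."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def det(A): return A[0][0]*A[1][1] - A[0][1]*A[1][0]
def tr(A):  return A[0][0] + A[1][1]

def adj(A):
    """The adjugate A♯ = (d, -b; -c, a)."""
    (a, b), (c, d) = A
    return ((d, -b), (-c, a))

def scale(k, A):
    return tuple(tuple(k * x for x in row) for row in A)

I = ((1, 0), (0, 1))

for e in product(range(-2, 3), repeat=4):
    A = ((e[0], e[1]), (e[2], e[3]))
    # the key identity behind the whole example: A·A♯ = (det A)·𝟙 = A♯·A
    assert matmul(A, adj(A)) == scale(det(A), I) == matmul(adj(A), A)
    # and for singular A the Cayley–Hamilton shortcut A² = (tr A)·A
    if det(A) == 0:
        assert matmul(A, A) == scale(tr(A), A)
```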


1.6 Ideals

(1.45) Definition:

• Let (R,+, ·) be a semi-ring and P ⊆ R a subset. Then P is said to be a sub-semi-ring of R (abbreviated by P ≤s R), iff

(1) 0 ∈ P

(2) a, b ∈ P =⇒ a + b ∈ P

(3) a ∈ P =⇒ −a ∈ P

(4) a, b ∈ P =⇒ ab ∈ P

• And if (R,+, ·) is a ring having the unit element 1, then a subset P ⊆ R is called a sub-ring of R (abbreviated by P ≤r R), iff

(S) P ≤s R

(5) 1 ∈ P

• Finally if (R,+, ·) even is a (skew)field, then a subset P ⊆ R is said to be a sub-(skew)field of R (abbreviated P ≤f R), iff

(R) P ≤r R

(6) 0 ≠ a ∈ P =⇒ a^{-1} ∈ P

• Let (R,+, ·) be a semi-ring again; then a subset a ⊆ R is said to be a left-ideal (or submodule) of R (abbreviated by a ≤m R), iff

(1) 0 ∈ a

(2) a, b ∈ a =⇒ a+ b ∈ a

(3) a ∈ a =⇒ −a ∈ a

(4) a ∈ a, b ∈ R =⇒ ba ∈ a

And a ⊆ R is said to be an ideal of R (abbreviated by a i R), iff

(M) a ≤m R

(5) a ∈ a, b ∈ R =⇒ ab ∈ a

• We will sometimes use the set of all subrings (of a ring R), the set of all sub(skew)fields (of a (skew)field R), resp. the set of all (left-)ideals (of a semi-ring R) - these will be denoted by

subr R := { P ⊆ R | P ≤r R }
subf R := { P ⊆ R | P ≤f R }

subm R := { a ⊆ R | a ≤m R }
ideal R := { a ⊆ R | a i R }


(1.46) Remark:

• Consider any semi-ring (R,+, ·) and a sub-semi-ring P ≤s R of R. Then we would like to emphasise that the notion of a sub-semi-ring was defined in such a way that the addition + and multiplication · of R induce the like operations on P

+|_P : P × P → P : (a, b) ↦ a + b

·|_P : P × P → P : (a, b) ↦ ab

And thereby (P, +|_P, ·|_P) becomes a semi-ring again (clearly all the properties of such are inherited from R). And in complete analogy we find that subrings of rings are rings again and sub(skew)fields of (skew)fields are (skew)fields again, under the induced operations.

Nota as +|_P and ·|_P are just restrictions (to P × P) of the respective functions + and · on R, we will not distinguish between +|_P and + respectively between ·|_P and ·. That is, we will speak of the semi-ring (P, +, ·), where + and · here are understood to be the restricted operations +|_P and ·|_P respectively.

• Beware: it is possible that a sub-semi-ring P ≤s R is a ring (P,+, ·) under the induced operations, but no subring of R. E.g. consider R := Z × Z under the pointwise operations and P := Z × {0} ⊆ R. Then P ≤s R is a sub-semi-ring. But it is no subring, as the unit element (1, 1) ∈ R of R is not contained in P: (1, 1) ∉ P. Nevertheless P has a unit-element, namely (1, 0) ∈ P, and hence is a ring.

• If (R,+, ·) is a ring, then property (3) of left-ideals is redundant; it already follows from property (4) by letting b := −1.

• Let (R,+, ·) be a semi-ring; then trivially any left-ideal a of R already is a sub-semi-ring. Formally put, for any subset a ⊆ R we get

a ≤m R =⇒ a ≤s R

• Consider a commutative semi-ring (R,+, ·); then the property (4) of left-ideals already implies property (5) of ideals, due to commutativity. That is, in a commutative semi-ring R we find the following equivalence for any subset a ⊆ R

a ≤m R ⇐⇒ a i R

• Now let (R,+, ·) be any ring and let a ≤m R be a left-ideal of R. Then we clearly obtain the following equivalence

a = R ⇐⇒ 1 ∈ a

Prob ” =⇒ ” is clear, as 1 ∈ R; and if conversely 1 ∈ a, then due to property (4) we have a = a1 ∈ a for any a ∈ R. Hence R ⊆ a ⊆ R.


• Consider any semi-ring (R,+, ·) and some element a ∈ R. Then we obtain a left-ideal Ra in R (called the principal ideal of a), by letting

Ra := { ba | b ∈ R } ≤m R

Prob clearly 0 = 0a ∈ Ra, and if ba and ca ∈ Ra, then we see that ba + ca = (b + c)a ∈ Ra and −(ba) = (−b)a ∈ Ra. Finally we have c(ba) = (cb)a ∈ Ra for any c ∈ R.

• In a commutative semi-ring (R,+, ·) (where a ∈ R again) we clearly get Ra = aR, where the latter is defined to be

aR := { ab | b ∈ R } i R

And as R is commutative, Ra = aR already is an ideal of R. And it is customary to regard aR instead of Ra in this case. We will later see (cf. section 2.6) that Z is a PID - that is, every ideal of Z is of the form aZ for some a ∈ Z. Formally that is

ideal Z = { aZ | a ∈ Z }

• In any semi-ring (R,+, ·) we have the trivial ideals {0} and R i R. If R is a skew-field, then these even are the only ideals of R.

R skew-field =⇒ subm R = { {0}, R }

Prob let a ≤m R be any left-ideal of R; if there is some 0 ≠ a ∈ a, then a^{-1} ∈ R and hence 1 = a^{-1}a ∈ a. And from this we get a = R.

• Now consider some non-zero (R ≠ 0) commutative ring (R,+, ·). Then we even get: R is a field if and only if the only ideals of R are the trivial ones. That is, equivalent are

R field ⇐⇒ ideal R = { {0}, R }

Prob ” =⇒ ” has already been shown above; for ” ⇐= ” regard any 0 ≠ a ∈ R, then aR i R is an ideal of R. But as aR ≠ 0 this only leaves aR = R, and hence there is some i ∈ R such that ai = 1, which means i = a^{-1}. Altogether R is a field.
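As a small sanity check (our illustration, not the book's): in Z/nZ the principal ideal of a is { a·k mod n | k }, and since every ideal of Z/nZ is principal, the field criterion can be tested directly for small n:

```python
def is_field(n):
    """Z/nZ is a field iff every non-zero a generates the whole ring,
    i.e. the principal ideal a·Z/nZ already is all of Z/nZ."""
    return all(len({a * k % n for k in range(n)}) == n for a in range(1, n))

# prime moduli give fields (only the trivial ideals {0} and R exist) ...
assert is_field(5) and is_field(7)
# ... while e.g. 2 + 6Z generates the proper non-zero ideal {0, 2, 4}
assert not is_field(6) and not is_field(4)
```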

• We could have introduced a dual notion of a left-ideal - a so-called right-ideal. This is a subset a ⊆ R such that the properties (1), (2), (3) and (5) of (left-)ideals hold true. That is, we have replaced (4) by (5). However this would lead to a completely analogous theory. Also refer to section 3.1 for a few more comments on this.

(1.47) Lemma: (viz. 318)
For the time being let ? abbreviate any one of the words semi-ring, ring, skew-field or field and let (R,+, ·) be a ?. Further consider an arbitrary family (where i ∈ I ≠ ∅) P_i ⊆ R of sub-?s of R. Then the intersection of the P_i is a sub-? of R again

⋂_{i∈I} P_i ⊆ R is a sub-?


Likewise let us abbreviate by ? the word left-ideal or the word ideal. Now suppose (R,+, ·) is a semi-ring and consider an arbitrary family (where i ∈ I ≠ ∅) a_i ⊆ R of ?s of R. Then the intersection of the a_i is a ? of R again

⋂_{i∈I} a_i ⊆ R is a ?

The proof of this lemma is an easy, straightforward verification of the properties of a (sub-)? in all the separate cases. But though this is most easy, it has far-reaching consequences. Whenever we are given a subset X ⊆ R, we may consider the smallest ? of R containing X: just take the intersection over all ?s containing X (this is possible since at least R is a ? with X ⊆ R). The following definition formalizes this idea. And it is not much of a surprise that this smallest ? can then be described explicitly by taking all the elements of X and performing all the operations allowed for a ?.

(1.48) Definition:
Let now (R,+, ·) be a semi-ring and consider an arbitrary subset X ⊆ R of R. Then the above lemma allows us to introduce the sub-semi-ring, left-ideal, resp. ideal generated by X to be the intersection of all such containing X

〈X 〉s := ⋂ { P ⊆ R | X ⊆ P ≤s R }

〈X 〉m := ⋂ { a ⊆ R | X ⊆ a ≤m R }

〈X 〉i := ⋂ { a ⊆ R | X ⊆ a i R }

Likewise if R even is a ring (or (skew)field), we may define the subring (or sub-(skew)field respectively) to be the intersection of all such containing X

〈X 〉r := ⋂ { P ⊆ R | X ⊆ P ≤r R }

〈X 〉f := ⋂ { P ⊆ R | X ⊆ P ≤f R }

Nota in case that there may be any doubt concerning the semi-ring X is contained in (e.g. X ⊆ R ⊆ S), we emphasise the semi-ring R used in this construction by writing 〈X ⊆ R 〉i or 〈X 〉i i R or even R〈X 〉i.

(1.49) Remark:
In case X = ∅ it is clear that {0} is an ideal containing X. And as {0} is contained in any other ideal (even sub-semi-ring) of R we find

〈∅ 〉s = 〈∅ 〉m = 〈∅ 〉i = {0}

The case of the subring resp. sub(skew)field generated by ∅ usually is less trivial, however. Let us regard the case R = Q for example. As any subring has to contain 1 and allow sums and negatives, we find that Z = 〈∅ 〉r ⊆ Q. And as a subfield even allows to take inverses, we find Q = 〈∅ 〉f ⊆ Q. However in C we likewise only get Q = 〈∅ 〉f ⊆ C.


Hence the intersection of all subrings resp. all sub(skew)fields deserves a special name; it is called the prime-ring resp. prime-(skew)field of R

〈∅ 〉r = ⋂ { P ⊆ R | P ≤r R }

〈∅ 〉f = ⋂ { P ⊆ R | P ≤f R }

(1.50) Proposition: (viz. 319)
Consider a ring (R,+, ·) and a non-empty subset ∅ ≠ X ⊆ R. Then we can give an explicit description of the (left-)ideal generated by X

〈X 〉m = { ∑_{i=1}^n a_i x_i | 1 ≤ n ∈ N, x_i ∈ X, a_i ∈ R }

〈X 〉i = { ∑_{i=1}^n a_i x_i b_i | 1 ≤ n ∈ N, x_i ∈ X, a_i, b_i ∈ R }

Suppose R is any semi-ring and denote ±X := { x | x ∈ X } ∪ { −x | x ∈ X }; then we can also give an explicit description of the sub-semi-ring of R generated by X (and if R is a ring, also of the subring of R generated by X)

〈X 〉s = { ∑_{i=1}^m ∏_{j=1}^n x_{i,j} | 1 ≤ m, n ∈ N, x_{i,j} ∈ ±X }

〈X 〉r = 〈 X ∪ {1} 〉s

Finally suppose that R even is a field; referring to the generated subring we can also give an explicit description of the subfield of R generated by X

〈X 〉f = { ab^{-1} | a, b ∈ 〈X 〉r, b ≠ 0 }

Ideals have been introduced into the theory of rings, as computations with ordinary elements of rings may have very strange results. Thus instead of looking at a ∈ R one may turn to aR i R (in a commutative ring). It turns out that it is possible to define the sum and product of ideals as well, and that this even has nicer properties than the ordinary sum and product of elements. Hence the name - aR is an ideal element of R. Nowadays ideals have become an indispensable tool in ring theory, as they turn out to be kernels of ring-homomorphisms.


(1.51) Proposition: (viz. 320)
Consider any semi-ring (R,+, ·) and arbitrary ideals a, b and c i R. Then these induce further ideals of R by letting

a ∩ b := { a ∈ R | a ∈ a and a ∈ b }

a + b := { a + b | a ∈ a, b ∈ b }

a b := { ∑_{i=1}^n a_i b_i | 1 ≤ n ∈ N, a_i ∈ a, b_i ∈ b }

Note however that a ∪ b need not be an ideal of R. And thereby we have the following chains of inclusions of ideals of R

a b ⊆ a ∩ b ⊆ a, b ⊆ a + b

(1.52) Proposition: (viz. 320)
Let (R,+, ·) be any semi-ring; then the compositions ∩, + and · of ideals are associative, that is if a, b and c i R are ideals of R, then we get

(a ∩ b) ∩ c = a ∩ (b ∩ c)

(a + b) + c = a + (b + c)

(a b) c = a (b c)

Further ∩ and + are commutative, and if R is commutative then so is the multiplication · of ideals, formally again (with a and b i R)

a ∩ b = b ∩ a

a + b = b + a

a b = b a

Recall that the last equality need not hold true unless R is commutative! The addition + and multiplication · of ideals even satisfy the distributivity laws, that is for any ideals a, b and c i R we get

a (b + c) = (a b) + (a c)

(a + b) c = (a c) + (b c)

(1.53) Remark:
It is clear that for any ideal a i R we get a ∩ R = a. Thus (ideal R, ∩) is a commutative monoid with neutral element R. Likewise we get a + 0 = a for any ideal a i R and hence (ideal R, +) is a commutative monoid with neutral element 0. Finally a R = a = R a for any ideal a i R and hence (ideal R, ·) is a monoid with neutral element R, as well. And if R is commutative, so is (ideal R, ·). In this sense it is understood that a^0 := R and a^i := a . . . a (i times) for 1 ≤ i ∈ N.


(1.54) Proposition: (viz. 321)
Now consider a ring (R,+, ·) and two arbitrary subsets X and Y ⊆ R. Let a = 〈X 〉i and b = 〈Y 〉i be the ideals generated by these. Then the sum a + b is generated by the union X ∪ Y, formally that is

a + b = 〈X ∪ Y 〉i

And if R is commutative, then the product a b is generated by the pointwise product X Y := { xy | x ∈ X, y ∈ Y } of these sets, formally again

a b = 〈X Y 〉i

(1.55) Remark: (viz. 321)

(i) Due to (1.54) it often makes sense to regard sums a + b or products a b of ideals. And in (1.61) we will also see the notion b/a of a quotient of ideals. Thus one might also be led to think that there should be something like a difference of ideals. A good candidate would be the following: consider the ideals a ⊆ b i R in some semi-ring (R,+, ·). Then we might take 〈 b \ a 〉i to be the difference of b minus a. Yet this yields nothing new, since

a ⊂ b =⇒ b = 〈b \ a 〉i

(ii) Let (R,+, ·) be a ring again and a_1, . . . , a_n i R be a finite family of ideals of R. Then using (1.54) and induction on n it is clear that

∑_{i=1}^n a_i := { ∑_{i=1}^n a_i | a_i ∈ a_i } = 〈 ⋃_{i=1}^n a_i 〉i

This property inspires us to define the sum of an arbitrary family (i ∈ I) of ideals a_i i R of R. As it is not clear what we should substitute for the finite sum of elements, we instead define

∑_{i∈I} a_i := 〈 ⋃_{i∈I} a_i 〉i

(iii) And thereby it turns out that the infinite sum consists of arbitrarily large but finite sums over elements of the a_i, formally that is

∑_{i∈I} a_i = { ∑_{i∈I} a_i | a_i ∈ a_i, #{ i ∈ I | a_i ≠ 0 } < ∞ }

            = { ∑_{i∈Ω} a_i | a_i ∈ a_i, Ω ⊆ I, #Ω < ∞ }

            = { ∑_{k=1}^n a_k | n ∈ N, i(k) ∈ I, a_k ∈ a_{i(k)} }


(iv) Let now b i R be another ideal of R; then the distributivity rule of (1.52) generalizes to the case of infinite sums, that is

( ∑_{i∈I} a_i ) b = ∑_{i∈I} (a_i b)

(1.56) Proposition: (viz. 322)
Let (R,+, ·) be a commutative ring and consider the ideals a, b, a_1, . . . , a_k and b_1, . . . , b_k i R of R (where 1 ≤ k ∈ N). Then we obtain the statements

(i) a and b are said to be coprime iff they satisfy one of the following three equivalent conditions

(a) a + b = R

(b) (1 + a) ∩ b ≠ ∅

(c) ∃ a ∈ a, ∃ b ∈ b such that a+ b = 1

(ii) Suppose that a and b are coprime and consider any i, j ∈ N; then the powers a^i and b^j are coprime as well, formally that is

a + b = R =⇒ a^i + b^j = R

(iii) Suppose that a and b_i (for any i ∈ 1 . . . k) are coprime; then a and b_1 . . . b_k are coprime as well, formally this is

( ∀ i ∈ 1 . . . k : a + b_i = R ) =⇒ a + (b_1 . . . b_k) = R

(iv) Suppose that the a_i are pairwise coprime; then the product of the a_i equals their intersection, formally that is

( ∀ i ≠ j ∈ 1 . . . k : a_i + a_j = R ) =⇒ a_1 ∩ · · · ∩ a_k = a_1 . . . a_k

(1.57) Definition: (viz. 323)
Let (R,+, ·) be any semi-ring and a ≤m R be a left-ideal of R; then a induces an equivalence relation ∼ on R by virtue of

a ∼ b :⇐⇒ a− b ∈ a

where a, b ∈ R. And the equivalence classes under this relation are called cosets of a; for any b ∈ R this is given to be

b + a := [b] = { b + a | a ∈ a }

Finally we denote the quotient set of R modulo a (meaning the relation ∼)

R/a := R/∼ = { a + a | a ∈ R }


If a i R even is an ideal (e.g. when R is commutative), then R/a can even be turned into a semi-ring (R/a,+, ·) again under the following operations

(a+ a) + (b+ a) := (a+ b) + a

(a+ a) · (b+ a) := (a · b) + a

Thereby (R/a,+, ·) is also called the quotient ring or residue ring of R modulo a. Clearly, if R is commutative, then so is R/a. And if R is a ring, so is R/a, where the unit element is given to be the coset 1 + a.

(1.58) Remark:
It is common that beginners in the field of algebra encounter problems with this notion. But on the other hand the construction of taking the quotient ring is one of the three most important tools of algebra (the others being localizing and taking tensor products). Hence we would like to append a few remarks:

We have defined R/a to be the quotient set of R under the equivalence relation a − b ∈ a. And the equivalence classes have been the cosets b + a. Hence two cosets are equal if and only if their representatives are equivalent; formally for any a, b ∈ R this is

a+ a = b+ a ⇐⇒ a− b ∈ a

And if a i R is an ideal of R, we even were able to turn this quotient set R/a into a ring again, under the operations above. Now don't be confused: we sometimes also employ the pointwise sum A + B := { a + b | a ∈ A, b ∈ B } and product A · B := { ab | a ∈ A, b ∈ B } of two subsets A, B ⊆ R. But this is a completely different story - the operations on cosets b + a have been defined by referring to the representatives b.

(1.59) Example: (viz. 325)
Let 1 ≤ n ∈ N be a positive integer and consider the ideal nZ i Z. Then we wish to present an example of the above remark: if a and b ∈ Z are any two integers, then the following three statements are equivalent

(a) a+ nZ = b+ nZ

(b) a mod n = b mod n

(c) there is some k ∈ Z such that b = a+ kn

(1.60) Remark:
Intuitively speaking, the residue ring R/a is the ring R where all the information contained in a has been erased. Thus R/a only retains the information that has been stored in R but not in a. Hence it is no wonder that R/a is "smaller" than R (|R/a| ≤ |R| to be precise). However R/a is not a subring of R! Let us study some special cases to enlighten things:

• In the case a = R the residue ring R/R contains precisely one element, namely 0 + R. Hence R/R = { 0 + R } is the zero-ring. This matches the intuition: all information has been forgotten, and what remains carries the trivial structure only.


• The other extreme case is a = 0. In this case a induces the trivial relation a ∼ b ⇐⇒ a = b. Hence the residue class of a is just a + 0 = {a}. Therefore we have a natural identification

R ←→ R/0 : a ↦ a

Again this matches the intuition: 0 contains no information, hence R/0 retains all the information of R. In fact we have just rewritten any a ∈ R as a ∈ R/0.

• Now consider any commutative ring (R,+, ·) and build the polynomial ring R[t] upon it. Then the ideal a = tR[t] allows us to regain R from R[t]. Formally there is a natural identification

R ←→ R[t]/tR[t] : a ↦ at^0 + tR[t]

Hence one should think that tR[t] contains the information we have added by going to the polynomial ring R[t]. Forgetting it, we return to R. Further it is nice to note that the same is true for the formal power series R[[t]] instead of R[t]. The same amount of information that has been added is lost.

Prob it is clear that R → R[t] : a ↦ at^0 is injective, and for a ≠ b ∈ R we even get at^0 − bt^0 = (a − b)t^0 ∉ tR[t]. This means that even a ↦ at^0 + tR[t] is injective. And if we are given an arbitrary polynomial f = a_n t^n + · · · + a_1 t + a_0 t^0, then f + tR[t] = a_0 t^0 + tR[t], which also is the surjectivity. Note that this all is immediate from the first isomorphism theorem: R[t] → R : f ↦ f[0] has kernel tR[t] and image R.

• Likewise (for any 1 ≤ n ∈ N) we obtain a natural identification between the ring Zn (as defined in (1.27)) and the residue ring Z/nZ via the isomorphism

Z/nZ ∼→ Zn : a + nZ ↦ a mod n

This example teaches that the properties of the residue ring (here Z/nZ) may well differ from those of the ring (here Z). E.g. Z is an integral domain, whereas Z/4Z is not ((2 + 4Z)(2 + 4Z) = 0 + 4Z). On the other hand Z/2Z is a field, whereas Z is not.

Prob first of all Φ(a + nZ) = a mod n is well-defined and bijective, according to (1.59). The inverse map is just the canonical projection %(r) = r + nZ. For the homomorphism property of Φ let us write (by division with remainder) a = pn + r and b = qn + s, where r = a mod n and s = b mod n. Then a + b = (p + q)n + (r + s) and ab = (pqn + ps + qr)n + rs. Hence we can compute, using (1.59) again

Φ((a + nZ) + (b + nZ)) = (a + b) mod n = (r + s) mod n

Φ((a + nZ) · (b + nZ)) = (ab) mod n = (rs) mod n


But by definition of Zn we have the compositions r + s = (r + s) mod n and rs = (rs) mod n; hence Φ truly transfers the structure of Z/nZ onto Zn. In particular Zn truly is a commutative ring again.

(1.61) Lemma: (viz. 325) Correspondence Theorem
Consider any semi-ring (R,+, ·) and an ideal a i R; then we obtain a 1-to-1 correspondence between the ideals of the residue ring R/a and the ideals of R containing a by virtue of (where b/a := { b + a | b ∈ b })

ideal R/a ←→ { b i R | a ⊆ b }
u ↦ { b ∈ R | b + a ∈ u }
b/a ↤ b

And this correspondence even is compatible with intersections, sums and products. That is, if we consider any two ideals b and c i R with a ⊆ b ∩ c

b/a ∩ c/a = (b ∩ c)/a

b/a + c/a = (b + c)/a

(b/a)(c/a) = (b c)/a

Finally the correspondence also is compatible with the generation of ideals. That is, consider an arbitrary subset X ⊆ R and denote its set of residue classes by X/a := { x + a | x ∈ X }. Then we obtain the identity

( 〈X 〉i + a )/a = 〈 X/a 〉i

(1.62) Remark:
The following graphic is supposed to illustrate the situation: going from R to R/a all the ideals below a are deleted, while those ideals b i R with a ⊆ b are preserved. Those ideals that were not comparable to a are lost uncontrollably.

[Figure: the ideal lattices of R (from 0 up to R) and of R/a; the ideals b with a ⊆ b survive as b/a, the ideals b ⊆ a are erased, and the other, incomparable ideals are lost.]


(1.63) Remark:
Let (R,+, ·) be a semi-ring and a, b i R be two ideals, such that a ⊆ b. As above let us denote the ideal of R/a induced by b by

b/a := { b + a | b ∈ b } i R/a

Then it is easy to see that for any x ∈ R we even get the following equivalence

x ∈ b ⇐⇒ x+ a ∈ b/a

Prob if x ∈ b then x + a ∈ b/a by definition. And if conversely x + a ∈ b/a, then there is some b ∈ b such that x + a = b + a. That is x − b ∈ a ⊆ b and hence x + b = b + b = 0 + b again. But the latter already means x ∈ b.

(1.64) Remark:
We will soon introduce the notion of a homomorphism; an example of such a thing is the following map that sends b ∈ R to its equivalence class b + a

% : R ↠ R/a : b ↦ b + a

It is clear that this mapping is surjective. And it also is clear that for any ideal b i R with a ⊆ b the image of b under % is given to be

%(b) = { b + a | b ∈ b } = b/a

Hence the above correspondence of ideals is just given by % acting on ideals (i.e. on certain subsets of R). That is b ↦ %(b) and u ↦ %^{-1}(u).

(1.65) Example:
Let us give a well-known, neat little application of the theory: the usual way of denoting integers a ∈ Z is using the decimal representation, e.g. 273 = 2 · 100 + 7 · 10 + 3 · 1. Generally we write a = a_n . . . a_1 a_0 iff a_k ∈ 0 . . . 9 and

a = ∑_{k=0}^n a_k · 10^k

As 2 divides 10 we get a + 2Z = a_0 + 2Z, and hence it is clear that 2 divides a if and only if 2 divides a_0 (the same holds true if we replace 2 by 5)

2 | a ⇐⇒ 2 | a0

Also it is customary to define the cross-sum of a to be the sum of the digits a_k in the decimal representation of a, formally that is

φ(a) := ∑_{k=0}^n a_k ∈ N


E.g. the cross-sum of 273 is 2 + 7 + 3 = 12. Now, as 10 = 3 · 3 + 1, we have 10 + 3Z = 1 + 3Z, and thereby 10^k + 3Z = (10 + 3Z)^k = (1 + 3Z)^k = 1 + 3Z. This enables us to compute

a + 3Z = ∑_{k=0}^n a_k · 10^k + 3Z = ∑_{k=0}^n a_k · (10^k + 3Z)

       = ∑_{k=0}^n a_k · (1 + 3Z) = ∑_{k=0}^n a_k + 3Z = φ(a) + 3Z

That is a + 3Z = φ(a) + 3Z, and in particular we have 3 | a if and only if φ(a) + 3Z = a + 3Z = 0 + 3Z, which is equivalent to 3 | φ(a) again. That is, we can check whether a is divisible by 3 just by checking whether the cross-sum φ(a) of a is divisible by 3. E.g. φ(273) = 12 is divisible by 3 and hence 273 is divisible by 3 as well - in fact 273 = 3 · 91. We can even iterate the cross-summation, e.g. φ(φ(273)) = 3, which is easier to divide by 3. (Again, as 10 + 9Z = 1 + 9Z, the same reasoning holds true if we replace 3 by 9.)

3 | a ⇐⇒ 3 | φ(a)
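The divisibility tests of this example are easily verified by machine (our illustration, not part of the book):

```python
def cross_sum(a):
    """φ(a): the sum of the decimal digits of a."""
    return sum(int(digit) for digit in str(a))

assert cross_sum(273) == 12          # 2 + 7 + 3
assert 273 % 3 == 0 and 273 == 3 * 91

# a ≡ φ(a) (mod 3) and (mod 9), hence the divisibility tests agree
for a in range(10_000):
    assert a % 3 == cross_sum(a) % 3
    assert a % 9 == cross_sum(a) % 9
    assert (a % 3 == 0) == (cross_sum(a) % 3 == 0)
```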

(1.66) Remark: (♦)
Let now (R,+, ·) be a commutative ring; then the above correspondence of ideals interlocks maximal ideals, prime ideals and radical ideals:

ideal R/a ←→ { b ∈ ideal R | a ⊆ b }
   ∪                  ∪
srad R/a ←→ { b ∈ srad R | a ⊆ b }
   ∪                  ∪
spec R/a ←→ { p ∈ spec R | a ⊆ p }
   ∪                  ∪
smax R/a ←→ { m ∈ smax R | a ⊆ m }

That is, the ideal b i R with a ⊆ b is radical if and only if the corresponding ideal b/a i R/a is radical, too. Likewise a ⊆ b i R is prime (maximal) iff b/a i R/a is prime (maximal). In fact we even have the following equality of radical ideals for any b i R with a ⊆ b

√(b/a) = (√b)/a

(1.67) Example: (♦)
Consider the ring (Z,+, ·); as Z is a Euclidean domain all its ideals are of the form aZ for some a ∈ Z (confer to section 2.6 for a proof). We now fix some ideal a := aZ; then aZ ⊆ bZ is equivalent to the fact that b divides a (see section 2.5 for details). Hence by the correspondence theorem the ideals of Za = Z/aZ are precisely given to be

ideal ( Z/aZ ) = { bZ/aZ | b divides a }
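This correspondence can be made concrete for a = 12 (our illustration, not part of the book): every ideal of Z/12Z is principal, so collecting the sets d·Z/12Z for d = 0, …, 11 yields exactly one ideal per divisor of 12:

```python
n = 12

def ideal_generated_by(d):
    """The principal ideal dZ/nZ = { d*k mod n | k } inside Z/nZ."""
    return frozenset(d * k % n for k in range(n))

ideals = {ideal_generated_by(d) for d in range(n)}
divisors = [d for d in range(1, n + 1) if n % d == 0]

# exactly one ideal per divisor b of 12, namely bZ/12Z
assert len(ideals) == len(divisors) == 6
assert ideal_generated_by(3) == frozenset({0, 3, 6, 9})
```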


1.7 Homomorphisms

(1.68) Definition:
Let (R,+, ·) and (S,+, ·) be arbitrary semi-rings; a mapping ϕ : R → S is said to be a homomorphism of semi-rings (or shortly semi-ring-homomorphism) iff for any a, b ∈ R it satisfies the following two properties

(1) ϕ(a+ b) = ϕ(a) + ϕ(b)

(2) ϕ(a · b) = ϕ(a) · ϕ(b)

And if R and S even are rings having the unit elements 1_R and 1_S respectively, then ϕ is said to be a homomorphism of rings (or shortly ring-homomorphism) iff for any a, b ∈ R it even satisfies the properties

(1) ϕ(a+ b) = ϕ(a) + ϕ(b)

(2) ϕ(a · b) = ϕ(a) · ϕ(b)

(3) ϕ(1R) = 1S

And we denote the set of all semi-ring-homomorphisms respectively of all ring-homomorphisms from R to S (that is a subset of F(R, S)) by

shom(R, S) := { ϕ : R → S | (1) and (2) }
rhom(R, S) := { ϕ : R → S | (1), (2) and (3) }

And if ϕ : R → S is a homomorphism of semi-rings, then we define its image im ϕ ⊆ S and kernel kn ϕ ⊆ R to be the following subsets

im(ϕ) := ϕ(R) = { ϕ(a) | a ∈ R }
kn(ϕ) := ϕ^{-1}(0_S) = { a ∈ R | ϕ(a) = 0 }

(1.69) Remark:
In the literature it is customary to only write hom(R, S) instead of what we denoted by shom(R, S) or rhom(R, S). And precisely which of these two sets is meant is determined by the context only, not by the notation. In this book we try to increase clarity by adding the letter "s" or "r". From a systematical point of view it might be appealing to write homs(R, S) for shom(R, S) (and likewise homr(R, S) for rhom(R, S)), as we have already used these indices with substructures. Yet we will require the position of the index later on (when it comes to modules and algebras). Hence we chose to place precisely this index in front of "hom" instead.


(1.70) Example:

• The easiest example of a homomorphism of semi-rings is the so-called zero-homomorphism, which we denote by 0 and which is given to be

0 : R → S : a ↦ 0_S

• Let (S,+, ·) be any semi-ring and R ≤s S be a sub-semi-ring of S. Then the inclusion R ⊆ S gives rise to the inclusion homomorphism

ι : R → S : a ↦ a

Clearly this is a homomorphism of semi-rings. And if S even is a ring and R ≤r S is a subring of S, then ι is a homomorphism of rings. In the special case R = S we call 𝟙 := ι the identity map of R.

• Now let (R,+, ·) be any (semi-)ring and consider some ideal a i R. Then we have already introduced the quotient (semi-)ring R/a in (1.57). And this gives rise to the following homomorphism of (semi-)rings (which is called the canonical epimorphism)

% : R ↠ R/a : b ↦ b + a

Note that the definitions of the addition and multiplication on R/a are precisely the properties of % being a homomorphism. And also by definition the kernel of this map precisely is kn(%) = a. Of course % is surjective, and hence its image is the entire ring im(%) = R/a.

• Consider any (semi-)ring (R,+, ·) again and let a, b i R be two ideals of R such that a ⊆ b. Then we obtain a well-defined, surjective homomorphism of (semi-)rings by (further note that the kernel of this map is given to be the quotient ideal b/a)

R/a ↠ R/b : a + a ↦ a + b

Prob if a + a = b + a then a − b ∈ a ⊆ b, whence we get a + b = b + b again. This already is the well-definedness, and the surjectivity and homomorphism properties are clear. Further b + b = 0 + b iff b ∈ b, which again is equivalent to b + a ∈ b/a. And this also is kn(•) = b/a.

• Finally we would like to present an example of a homomorphism π of semi-rings that is no homomorphism of rings. The easiest way to achieve this is by regarding the ring R = Z². Then we may take

π : Z² → Z² : (a, b) ↦ (a, 0)
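That π preserves sums and products but not the unit can be verified mechanically (our illustration, not part of the book):

```python
import itertools

# Z² = Z × Z under pointwise addition and multiplication; unit is (1, 1)
def add(x, y): return (x[0] + y[0], x[1] + y[1])
def mul(x, y): return (x[0] * y[0], x[1] * y[1])
def pi(x):     return (x[0], 0)

pairs = list(itertools.product(range(-3, 4), repeat=2))
for x, y in itertools.product(pairs, repeat=2):
    assert pi(add(x, y)) == add(pi(x), pi(y))   # property (1)
    assert pi(mul(x, y)) == mul(pi(x), pi(y))   # property (2)

assert pi((1, 1)) != (1, 1)                     # property (3) fails
```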


(1.71) Proposition: (viz. 334)

(i) Let (R,+, ·) and (S,+, ·) be semi-rings and ϕ : R→ S be a homomorphism of semi-rings again. Then ϕ already satisfies (for any a ∈ R)

ϕ(0R) = 0S

ϕ(−a) = −ϕ(a)

(ii) And if (R,+, ·) and (S,+, ·) are rings, u ∈ R∗ is an invertible element of R and ϕ : R → S is a homomorphism of rings, then ϕ(u) ∈ S∗ is an invertible element of S, too, with inverse

ϕ(u−1) = ϕ(u)−1

(iii) Let (R,+, ·), (S,+, ·) and (T,+, ·) be (semi-)rings, ϕ : R → S and ψ : S → T be homomorphisms of (semi-)rings. Then the composition ψ after ϕ is a homomorphism of (semi-)rings again

ψϕ : R→ T : a 7→ ψ(ϕ(a))

(iv) Let (R,+, ·) and (S,+, ·) be (semi-)rings and Φ : R→ S be a bijective homomorphism of (semi-)rings. Then the inverse map Φ−1 : S → R is a homomorphism of (semi-)rings, too.

(v) Let (R,+, ·) and (S,+, ·) be (semi-)rings and ϕ : R → S be a homomorphism of (semi-)rings, then the image of ϕ is a sub-(semi-)ring of S and its kernel is an ideal of R

im(ϕ) ≤s S

kn(ϕ) i R

(vi) More generally let (R,+, ·) and (S,+, ·) be any two (semi-)rings containing the sub-(semi-)rings P ≤s R and Q ≤s S respectively. If now ϕ : R→ S is a homomorphism of (semi-)rings, then ϕ(P ) and ϕ−1(Q) are sub-(semi-)rings again

ϕ(P ) ≤s S

ϕ−1(Q) ≤s R

(vii) Analogously let (R,+, ·) and (S,+, ·) be any two semi-rings containing the (left-)ideals a ≤m R and b ≤m S respectively. If now ϕ : R→ S is a homomorphism of semi-rings then ϕ−1(b) is a (left-)ideal of R again. And if ϕ even is surjective, then ϕ(a) is a (left-)ideal of S

ϕ−1(b) ≤m R

ϕ surjective =⇒ ϕ(a) ≤m S


(viii) Let (R,+, ·) be a ring again and (S,+, ·) even be a skew field. Then any non-zero homomorphism of semi-rings from R to S already is a homomorphism of rings (i.e. satisfies property (3)). Formally

S skew-field =⇒ shom(R,S) = rhom(R,S) ∪ { 0 }

(ix) Now let (R,+, ·) be a skew-field and (S,+, ·) be an arbitrary semi-ring, then any nonzero homomorphism ϕ : R→ S of semi-rings is injective

R skew-field =⇒ ϕ injective or ϕ = 0

(1.72) Proposition: (viz. 336)
Let (R,+, ·) be an arbitrary semi-ring and a i R be any ideal in R. Then we regard the canonical epimorphism from R to R/a, that is

% : R→ R/a : r 7→ r + a

If now b i R is any other ideal in R then the image of b under % is given to be (a + b)/a and the preimage of this set is given to be a + b

%(b) = (a + b)/a

%−1(%(b)) = a + b

(1.73) Definition:
Consider any two (semi-)rings (R,+, ·) and (S,+, ·) and a homomorphism ϕ : R→ S of (semi-)rings. Then ϕ is said to be a mono-, epi- or isomorphism of (semi-)rings, iff it is injective, surjective or bijective respectively. And a homomorphism of the form ϕ : R → R is said to be an endomorphism of R. And ϕ is said to be an automorphism of R iff it is a bijective endomorphism of R. Altogether we have defined the notions

ϕ is called        iff ϕ is
monomorphism       injective
epimorphism        surjective
isomorphism        bijective
endomorphism       R = S
automorphism       R = S and bijective

And if (R,+, ·) is any ring, then we denote the set of all ring automorphisms on R by (note that this is a group under the composition of mappings)

raut(R) := { Φ : R→ R | Φ bijective homomorphism of rings }

In the case that Φ : R→ S is an isomorphism of semi-rings we will abbreviate this by writing Φ : R ∼=s S. And if Φ : R → S even is an isomorphism of rings we write Φ : R ∼=r S. And thereby R and S are said to be isomorphic if there is an isomorphism Φ between them. And in this case we write

R ∼=s S :⇐⇒ ∃Φ : Φ : R ∼=s S

R ∼=r S :⇐⇒ ∃Φ : Φ : R ∼=r S


(1.74) Proposition: (viz. 336)

(i) Let (R,+, ·) and (S,+, ·) be semi-rings and ϕ : R→ S be a homomorphism of semi-rings. Then ϕ is injective, resp. surjective iff

ϕ injective ⇐⇒ kn(ϕ) = { 0 }

ϕ surjective ⇐⇒ im(ϕ) = S

(ii) Let (R,+, ·) and (S,+, ·) be (semi-)rings and Φ : R → S be a homomorphism of (semi-)rings, then the following statements are equivalent (and in this case we already get α = β = Ψ = Φ−1)

(a) Φ is bijective

(b) there is some homomorphism Ψ : S → R of (semi-)rings such that ΨΦ = 11R and ΦΨ = 11S

(c) there are homomorphisms α : S → R and β : S → R of (semi-)rings such that αΦ = 11R and Φβ = 11S

(iii) Let (R,+, ·) and (S,+, ·) be any two semi-rings and Φ : R ∼=s S be an isomorphism between them. Then R is a ring if and only if S is a ring. And in this case Φ : R ∼=r S already is an isomorphism of rings.

(iv) The relation ∼=r has the properties of an equivalence relation on the class of all rings (and the same is true for ∼=s on the class of all semi-rings). To be precise, for all rings (R,+, ·), (S,+, ·) and (T,+, ·) we obtain the following statements

11R : R ∼=r R

Φ : R ∼=r S =⇒ Φ−1 : S ∼=r R

Φ : R ∼=r S and Ψ : S ∼=r T =⇒ ΨΦ : R ∼=r T

(1.75) Remark:
Intuitively speaking two (semi-)rings (R,+, ·) and (S,+, ·) are isomorphic if and only if they carry precisely the same structure. A bit lax: the elements may differ by the names they are given, but nothing more. Therefore any algebraic property satisfied by R also holds true for S and vice versa. We give a few examples of this fact below, but this is really true for any algebraic property. Thus from an algebraic point of view, isomorphic (semi-)rings are just two copies of the same thing (just like two 1 Euro coins). And the difference between equality and isomorphy is just a set-theoretical one. We verify this intuitive view by presenting a few examples - let (R,+, ·) and (S,+, ·) be any two semi-rings that are isomorphic under Φ : R ∼=s S. Then we get

• In the above proposition we have already shown R is a ring if and only if S is a ring. And in this case already Φ is an isomorphism of rings.


• R is commutative if and only if S is commutative. Prob suppose that R is commutative and consider any two elements x, y ∈ S. Then let a := Φ−1(x) and b := Φ−1(y) ∈ R and compute xy = Φ(a)Φ(b) = Φ(ab) = Φ(ba) = Φ(b)Φ(a) = yx. Hence S is commutative, too. And the converse implication follows with Φ instead of Φ−1.

• R is an integral domain if and only if S is an integral domain. Prob suppose that S is an integral domain and consider any two elements a, b ∈ R. If we had ab = 0 then 0 = Φ(0) = Φ(ab) = Φ(a)Φ(b). But as S is an integral domain this means Φ(a) = 0 or Φ(b) = 0. And as Φ is injective this implies a = 0 or b = 0. Hence R is an integral domain, too. And the converse implication follows with Φ−1 instead of Φ.

• R is a skew-field if and only if S is a skew-field. In fact for any two rings R and S we even get the stronger correspondence

Φ(R∗) = S∗

Prob if a ∈ R∗ is a unit of R then Φ(a−1) = Φ(a)−1 and hence Φ(a) ∈ S∗ is a unit, too. And if conversely x ∈ S∗ is a unit of S, then let a := Φ−1(x) and b := Φ−1(x−1). Thereby ab = Φ−1(x)Φ−1(x−1) = Φ−1(xx−1) = Φ−1(1S) = 1R and likewise ba = 1R. Hence we have a−1 = b such that a ∈ R∗ and x ∈ Φ(R∗) is the image of a unit of R.

(1.76) Remark: (♦)
In the chapters to come we will introduce the notion of a category (in our case this would be the collection of all semi-rings or of all rings respectively). And in any category there is the notion of monomorphisms, epimorphisms and isomorphisms. That is let (R,+, ·) and (S,+, ·) be two (semi-)rings. Then ϕ : R→ S will be called a monomorphism in the category of (semi-)rings, iff it is a homomorphism of (semi-)rings, such that for any (semi-)ring (Q,+, ·) and any two homomorphisms α, β : Q → R of (semi-)rings we would have the implication

ϕα = ϕβ =⇒ α = β

Likewise ϕ : R→ S will be called an epimorphism in the category of (semi-)rings, iff it is a homomorphism of (semi-)rings, such that for any (semi-)ring (T,+, ·) and any two homomorphisms α, β : S → T of (semi-)rings we would have the implication

αϕ = βϕ =⇒ α = β

Finally Φ : R→ S is said to be an isomorphism in the category of (semi-)rings, iff it is a homomorphism of (semi-)rings and there is another homomorphism Ψ : S → R of (semi-)rings that is inverse to Φ, formally

∃Ψ : S → R : ΨΦ = 11R and ΦΨ = 11S


It is immediately clear that an injective homomorphism of (semi-)rings thereby is a monomorphism in the categorical sense. And likewise a surjective homomorphism of (semi-)rings is an epimorphism in the categorical sense. The converse implications need not be true however. Yet we have seen in proposition (1.74.(ii)) that the two notions of isomorphy are equivalent.

(1.77) Theorem: (viz. 337) Isomorphism Theorems

(i) Let (R,+, ·) and (S,+, ·) be any two rings, a i R be an ideal of R and ϕ : R → S be a homomorphism of rings. If now a ⊆ kn(ϕ) then we obtain a well-defined homomorphism of rings by virtue of

ϕ : R/a→ S : b+ a 7→ ϕ(b)

Note if R and S are semi-rings and ϕ is a homomorphism of such, then ϕ still is a well-defined homomorphism of semi-rings.

(ii) First Isomorphism Theorem
Let (R,+, ·) and (S,+, ·) be any two rings and ϕ : R → S be a homomorphism of rings. Then the kernel of ϕ is an ideal kn(ϕ) i R of R and its image im(ϕ) ≤r S is a subring of S. And finally we obtain the following isomorphy of rings

R/kn(ϕ) ∼=r im(ϕ) : a+ kn(ϕ) 7→ ϕ(a)

Note if R and S are semi-rings and ϕ is a homomorphism of such, then we still get kn(ϕ) i R, im(ϕ) ≤s S and R/kn(ϕ) ∼=s im(ϕ).
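As an illustration of (ii) (a self-made example, not from the text): the map ϕ : Z/12 → Z/12 : x 7→ 4x is a homomorphism of semi-rings, since 4 · 4 ≡ 4 (mod 12). The following sketch computes its kernel and image and checks that the induced map a + kn(ϕ) 7→ ϕ(a) is a well-defined bijection onto the image:

```python
n = 12

def phi(x):
    # multiplication by 4 on Z/12; a semi-ring homomorphism as 16 = 4 mod 12
    return (4 * x) % n

# phi preserves addition and multiplication
for x in range(n):
    for y in range(n):
        assert phi((x + y) % n) == (phi(x) + phi(y)) % n
        assert phi((x * y) % n) == (phi(x) * phi(y)) % n

kernel = {x for x in range(n) if phi(x) == 0}   # expect {0, 3, 6, 9}
image = {phi(x) for x in range(n)}              # expect {0, 4, 8}

# residue classes x + kn(phi), i.e. the elements of R/kn(phi)
classes = {frozenset((x + k) % n for k in kernel) for x in range(n)}

# the induced map a + kn(phi) -> phi(a) is constant on classes (well-defined)
induced = {cls: {phi(a) for a in cls} for cls in classes}
assert all(len(vals) == 1 for vals in induced.values())

# and it is a bijection of R/kn(phi) onto im(phi)
assert {vals.pop() for vals in induced.values()} == image
assert len(classes) == len(image)
```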

(iii) Second Isomorphism Theorem
Let (S,+, ·) be a ring, R ≤r S be a subring of S and b i S be an ideal of S. Then b ∩ R i R is an ideal of R, the set b + R := { b+ a | b ∈ b, a ∈ R } ≤r S is a subring of S and b i b + R is an ideal of b+R. And thereby we obtain the following isomorphy of rings

R/(b ∩R) ∼=r (b +R)/b : a+ (b ∩R) 7→ a+ b

Note if S is a semi-ring and R ≤s S is a sub-semi-ring of S, then still b+R ≤s S, b i b+R and we retain the isomorphy R/(b∩R) ∼=s (b+R)/b.

(iv) Third Isomorphism Theorem
Let (R,+, ·) be any ring containing the ideals a ⊆ b i R, then b/a := { b+ a | b ∈ b } i R/a is an ideal of the quotient ring R/a and we obtain the following isomorphy of rings

(R/a)/(b/a) ∼=r R/b : (a+ a) + b/a 7→ a+ b

Note if R is a semi-ring then everything remains the same, except that the above isomorphy is an isomorphy of semi-rings (and not of rings).
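A concrete instance of (iv) (again a self-made check): with R = Z, a = 12Z and b = 4Z the theorem predicts (Z/12Z)/(4Z/12Z) ∼=r Z/4Z, which the following lines confirm element by element:

```python
n, m = 12, 4                          # a = 12Z is contained in b = 4Z

b_over_a = {x for x in range(n) if x % m == 0}   # b/a = {0, 4, 8} inside Z/12

# the residue classes of (R/a)/(b/a)
classes = {frozenset((x + k) % n for k in b_over_a) for x in range(n)}

# the map (x + a) + b/a  ->  x + b is constant on each class ...
mapping = {}
for cls in classes:
    images = {x % m for x in cls}
    assert len(images) == 1
    mapping[cls] = images.pop()

# ... and a bijection onto Z/4
assert sorted(mapping.values()) == [0, 1, 2, 3]
```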


1.8 Ordered Rings

In this section we will regard commutative rings that are subjected to a (as we call it) positive order. However it is completely independent of the other sections within this book. Its results will neither be used nor continued elsewhere. But as it is a nice generalisation of the classical examples of rings - foremost Z, as these carry a natural order - we chose to include it nevertheless. Enjoy.

(1.78) Definition:
Let (R,+, ·) be a commutative ring, then a subset C ⊆ R is said to be a positive cone in R, iff it satisfies the following three properties

(C1) 0, 1 ∈ C

(C2) a, b ∈ C =⇒ a+ b ∈ C

(C3) a, b ∈ C =⇒ a · b ∈ C

A positive cone C ⊆ R is said to be proper iff it satisfies the following fourth property (where −C := { −a | a ∈ C })

(CP) C ∩ (−C) = { 0 }

And a positive cone C ⊆ R is said to be total iff it satisfies another property

(CT) C ∪ (−C) = R

(1.79) Proposition: (viz. 340)

(i) Let (R,+, ·) be a commutative ring and (Ci) be a family (i ∈ I) of positive cones Ci ⊆ R of R, then their intersection is a positive cone in R again

⋂i∈I Ci ⊆ R is a positive cone

(ii) Due to (i) we may define the positive cone generated by any subset A ⊆ R to be the smallest positive cone containing A, that is

cone(A) := ⋂ { C ⊆ R | A ⊆ C, C is a positive cone }

(iii) Let both (R,+, ·) and (S,+, ·) be commutative rings and ϕ : R → S be a ring homomorphism, then we get the following implications:

C ⊆ R positive cone =⇒ ϕ(C) ⊆ S positive cone

D ⊆ S positive cone =⇒ ϕ−1(D) ⊆ R positive cone

(1.80) Example: (viz. 340)

(i) For any commutative ring (R,+, ·) the whole set R ⊆ R is a cone in R. In fact any subring S ⊆ R is a cone in R. The difference between these two notions is that a subring S has to be a subgroup of the additive group (R,+) of the ring, whereas a positive cone C ⊆ R only is a submonoid of (R,+). In fact subrings yield rather uninteresting positive cones.


(ii) For any commutative ring (R,+, ·) and any 1 ≤ k ∈ N we can interpret k as an element of R as kR := 1R + · · · + 1R (k-times) and likewise for negative numbers: (−k)R := −(kR). Then we can regard the ring homomorphism ζ : Z → R : k 7→ kR. The image of N under this homomorphism is a positive cone in R

P := { kR | k ∈ N }

In fact P is the smallest positive cone of R at all [that is P = cone(∅)] and hence also called the prime cone of R. The homomorphism has another interesting property: the kernel of ζ is an ideal in Z and hence there is a unique c ∈ N such that kn(ζ) = cZ. This c is called the characteristic of R and denoted by

char(R) := min { 1 ≤ k ∈ N | kR = 0 }

(iii) A special case of the above is N ⊆ Z, which obviously is a proper, total, positive cone. Note however that N ⊆ Q is a proper, positive cone, that is not total.

(iv) Another well-known example is R+ ⊆ C, which is another proper, positive, but not total cone.

(v) Generally let (R,+, ·) be a commutative ring under a fixed positive order ≤ (see below), then we obtain a proper, positive cone by letting

R+ := { a ∈ R | a ≥ 0 }

(vi) If (R,+, ·) is any commutative ring, then we obtain a positive cone ∆ on R (not necessarily proper or total) by letting

∆ := { a1² + · · ·+ an² | n ∈ N, ak ∈ R }

(vii) Let now (R,+, ·) be an integral domain under a fixed positive order ≤ (see below) and consider the polynomial ring R[x] over R. If 0 6= f ∈ R[x] is any non-zero polynomial, then there are uniquely determined k ≤ n ∈ N such that f is of the form:

f(x) = an xⁿ + · · ·+ ak xᵏ

with an 6= 0 and ak 6= 0. Thereby an is called the leading coefficient of f(x) whereas ak is said to be the minor coefficient of f(x) which we denote by

lc (f) := an

mc (f) := ak


And particularly for f = 0 we let lc (0) := 0 and mc (0) := 0. It is well known that both of these maps are multiplicative and therefore we obtain two proper, positive cones, by letting:

R[x]+ := { f ∈ R[x] | lc (f) ≥ 0 }

R[x]− := { f ∈ R[x] | mc (f) ≥ 0 }

Note that the order on R[x] induced by the cone R[x]+ is non-archimedean as for any 2 ≤ n ∈ N we get x < 1/n. This is not so with the cone R[x]− however.

(1.81) Definition:
Let (R,+, ·) be a commutative ring and consider a partial order ≤ on R. That is ≤ is a subset of R×R satisfying the following three properties:

(O1) ∀ a ∈ R we have: a ≤ a

(O2) ∀ a, b, c ∈ R we have: if a ≤ b and b ≤ c then a ≤ c

(O3) ∀ a, b ∈ R we have: if a ≤ b and b ≤ a then a = b

Thereby we already used the customary notation a ≤ b for (a, b) ∈ ≤. And as usual we write a < b to denote that a ≤ b but a 6= b. Now ≤ is said to be a positive order, iff for any a, b ∈ R we get

(P1) 0 ≤ 1

(P2) a ≤ b ⇐⇒ 0 ≤ b− a

(P3) 0 ≤ a, b =⇒ 0 ≤ a+ b

(P4) 0 ≤ a, b =⇒ 0 ≤ ab

A partial order is said to be total iff it further satisfies the following property

(T) ∀ a, b ∈ R we have a ≤ b or b ≤ a

As usual, in this case, we may define the minimum a ∧ b and maximum a ∨ b of a, b ∈ R to be a ∧ b := a if a ≤ b and a ∧ b := b if b ≤ a. Likewise a ∨ b := b if a ≤ b and a ∨ b := a if b ≤ a. And we can further define the sign sgn(a) ∈ R and absolute value |a| ∈ R of an element a ∈ R

sgn(a) := 1 if a > 0, 0 if a = 0, −1 if a < 0

|a| := a if 0 ≤ a, −a if a ≤ 0


(1.82) Remark:
If ≤ is a positive order on the commutative ring (R,+, ·) then it already satisfies the following, stronger properties for any a, b and h ∈ R

(P5) a < b ⇐⇒ 0 < b− a

(P6) a ≤ 0 ⇐⇒ 0 ≤ −a

(P7) a ≤ b =⇒ a+ h ≤ b+ h

(P8) a ≤ b, 0 ≤ h =⇒ ah ≤ bh

(P9) a ≤ b, h ≤ 0 =⇒ ah ≥ bh

Prob (P5) a < b means a ≤ b but a 6= b. By property (P2) we get 0 ≤ b− a. Suppose 0 = b − a, then a = b, which leaves 0 < b − a only. (P6) this is property (P2) applied to b = 0. (P7) if a ≤ b then 0 ≤ b− a = (b+ h)− (a+ h) and hence - by (P2) again - a+ h ≤ b+ h. (P8) if a ≤ b then 0 ≤ b− a and as also 0 ≤ h property (P4) yields 0 ≤ (b − a)h = bh − ah. Now by (P2) again ah ≤ bh. (P9) if h ≤ 0 then by (P6) we have 0 ≤ −h and hence by (P8) a(−h) ≤ b(−h) and hence −ah ≤ −bh. By adding ah + bh and using (P7) this yields bh ≤ ah.

(1.83) Proposition: (viz. 342)
Let (R,+, ·) be a commutative ring and ≤ be a total, positive order on R. Then for any a, b ∈ R we get the following statements

(i) 0 ≤ |a|

(ii) a ≤ |a|

(iii) |a|2 = a2

(iv) |a| = sgn(a)a

Now suppose that R even is an integral domain, then we get the additional properties for any a, b ∈ R

(v) |ab| = |a||b|

(vi) |a+ b| ≤ |a|+ |b|

(vii) ||a| − |b|| ≤ |a− b|

(viii) sgn(ab) = sgn(a)sgn(b)

(ix) if a, b ≥ 0 then a ≤ b ⇐⇒ a2 ≤ b2
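All nine statements are easy to spot-check in the totally ordered integral domain Z; the following snippet (my own, not part of the book) does so on a small sample:

```python
def sgn(a):
    return 1 if a > 0 else (-1 if a < 0 else 0)

def absval(a):
    return a if 0 <= a else -a

sample = range(-5, 6)
for a in sample:
    assert 0 <= absval(a)                       # (i)
    assert a <= absval(a)                       # (ii)
    assert absval(a) ** 2 == a ** 2             # (iii)
    assert absval(a) == sgn(a) * a              # (iv)
    for b in sample:
        assert absval(a * b) == absval(a) * absval(b)           # (v)
        assert absval(a + b) <= absval(a) + absval(b)           # (vi)
        assert absval(absval(a) - absval(b)) <= absval(a - b)   # (vii)
        assert sgn(a * b) == sgn(a) * sgn(b)                    # (viii)
        if a >= 0 and b >= 0:
            assert (a <= b) == (a ** 2 <= b ** 2)               # (ix)
```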

(1.84) Remark:

(i) Due to property (P2) it suffices to designate the positive elements of R, that is those a ∈ R with 0 ≤ a. This fully defines the order ≤. Hence we denote the set of positive elements of R by R+, formally

R+ := { a ∈ R | a ≥ 0 }

And as we have seen in the example (1.80) above, the set R+ is a proper positive cone of R.


(ii) If ≤ is a total positive order on the commutative ring R, then the positive cone R+ ⊆ R satisfies another property

∀ a ∈ R : a2 ∈ R+

Prob if a ≥ 0 then a2 = a · a ≥ 0 is trivial by property (P4). Otherwise (≤ is total) if a ≤ 0 then by (P6) we get 0 ≤ −a and hence 0 ≤ (−a)(−a) = a2 by (P4) again. Thus a2 ≥ 0 is true for any a ∈ R.

(iii) If R 6= 0 is a non-zero ring that allows a positive order ≤, then R already is of characteristic 0, that is the following map is injective

Z→ R : k 7→ k 1R

Prob suppose there was some n ∈ Z such that n1R = 0, even though n 6= 0. Then we may take n ≥ 1 to be minimal (i.e. n generates the kernel nZ = kn(k 7→ k1R)). As 0R ≤ 1R we find 1R ≤ 2R and so on until (n − 1)R ≤ nR. Hence 0R ≤ 1R ≤ nR ≤ 0R, which yields 0R = 1R, a contradiction to R 6= 0. Hence the kernel of k 7→ k1R is zero.

(iv) (♦) If R is a commutative ring, ≤ is a positive order on R and M is an R-module, then there is the notion of a cone C in M : that is a subset C ⊆ M is said to be a cone in M , iff

(1) 0 ∈ C
(2) for all x, y ∈ C we get x+ y ∈ C
(3) for all x ∈ C and all a ≥ 0 we get ax ∈ C

If we consider R itself as an R-module then a positive cone C ⊆ R in the ring R need not be a cone in the R-module R, however. As an example let us regard R = R under the usual order ≤, then C = N ⊆ R is a positive cone, but not a cone (in the sense here).

(1.85) Example: (viz. 341)

(i) If (R,+, ·) is a commutative ring and C ⊆ R is a proper positive cone in R, then we obtain a positive order on R by letting:

a ≤ b :⇐⇒ b− a ∈ C

(ii) Let R := Z2 be the 2-dimensional lattice under the pointwise operations, that is (a, b) + (c, d) := (a+ c, b+ d) and (a, b)(c, d) := (ac, bd). We install the following partial order on R:

(a, b) ≤ (c, d) :⇐⇒ a ≤ c and b ≤ d

Then ≤ is a positive order on R even though R is not an integral domain.

(iii) The natural order 0 < 1 < 2 < 3 on R = Z4 is not positive, as we have 2 ≤ 3 but 2 + 1 = 3 > 0 = 3 + 1.
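The failure in (iii) can also be checked exhaustively (an ad-hoc verification of mine): property (P7), which every positive order satisfies by remark (1.82), fails for the natural order on Z/4:

```python
n = 4  # work in Z/4 with the natural order 0 < 1 < 2 < 3

# collect all triples violating (P7): a <= b but not a + h <= b + h
violations = [(a, b, h)
              for a in range(n) for b in range(n) for h in range(n)
              if a <= b and not (a + h) % n <= (b + h) % n]

# the counterexample from the text: 2 <= 3 but 2 + 1 = 3 > 0 = 3 + 1
assert (2, 3, 1) in violations
```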


(1.86) Proposition: (viz. 341)
Let (R,+, ·) be a commutative ring, then we obtain a 1-to-1 correspondence between the proper positive cones C in R and the positive orders ≤ on R, by virtue of

{ C ⊆ R | C proper positive cone } ←→ { ≤ ⊆ R×R | ≤ positive order }

C 7→ (a ≤ b :⇐⇒ b− a ∈ C)

R+ = { a ∈ R | a ≥ 0 } ←[ ≤

(1.87) Proposition: (viz. 343) (♦)
Let (R,+, ·) be a commutative ring, ≤ be a positive order on R and let N denote the set of positive non-zero divisors of R, that is

N := { n ∈ R | n ≥ 0 and ∀ a ∈ R : an = 0 =⇒ a = 0 }

Then N is multiplicatively closed and we denote the localisation of R in N by Q := N−1R. Thereby we obtain a positive order on Q, by letting

a/m ≤ b/n :⇐⇒ an ≤ bm


1.9 Products

(1.88) Definition:
Let ∅ 6= I be an arbitrary set and for any i ∈ I let (Ri,+, ·) be a semi-ring. Then we define the (exterior) direct product of the Ri to be just the cartesian product of the sets Ri

∏i∈I Ri := { a : I → ⋃i∈I Ri | ∀ i ∈ I : a(i) ∈ Ri }

Let us abbreviate this set by ΠR for a moment. Then we may turn this set into a semi-ring (ΠR,+, ·) by defining the following addition + and multiplication · for elements a and b ∈ ΠR

a+ b : I → ⋃i∈I Ri : i 7→ a(i) + b(i)

a · b : I → ⋃i∈I Ri : i 7→ a(i) · b(i)

Note that thereby the addition a(i) + b(i) and multiplication a(i)b(i) are those of Ri. Further note that it is customary to write (ai) instead of a ∈ ΠR, where the ai ∈ Ri are given to be ai := a(i). And using this notation the above definitions of + and · in ΠR take the more elegant form

(ai) + (bi) := (ai + bi)

(ai) · (bi) := (ai · bi)

For any i ∈ I let us now denote the zero-element of the semi-ring Ri by 0i. Then we define the (exterior) direct sum of the Ri to be the subset of those a = (ai) ∈ ΠR for which only finitely many ai are non-zero. Formally

⊕i∈I Ri := { a ∈ ∏i∈I Ri | #{ i ∈ I | a(i) 6= 0i } < ∞ }

Let us denote this set by ΣR for the moment, then ΣR ≤s ΠR is a sub-semi-ring. In particular we will always understand ΠR and ΣR as semi-rings under these compositions, without explicitly mentioning it.

(1.89) Example:

• Probably the most important case is where only two (semi-)rings (R,+, ·) and (S,+, ·) are involved. In this case the direct product and direct sum coincide, to be the cartesian product

R⊕ S = R× S = { (a, b) | a ∈ R, b ∈ S }


And if we write out the operations on R ⊕ S explicitly (recall that these were defined to be the pointwise operations) these simply read as (where a, b ∈ R and r, s ∈ S)

(a, r) + (b, s) = (a+ b, r + s)

(a, r)(b, s) = (ab, rs)

Thereby it is clear that the zero-element of R ⊕ S is given to be 0 = (0, 0). And if both R and S are rings then the unit element of R ⊕ S is given to be 1 = (1, 1). It also is clear that R ⊕ S is commutative iff both R and S are such. The remarks below just generalize these observations and give hints how this can be proved.

• Another familiar case is the following: consider a (semi-)ring (R,+, ·) and any non-empty index set I 6= ∅. In the examples of section 1.4 we have already studied the (semi-)ring

RI := ∏i∈I R = { f | f : I → R }

In the previous example as well as in this definition here we have taken the pointwise operations. That is for two elements f , g : I → R we have defined the operations

f + g : I → R : i 7→ f(i) + g(i)

fg : I → R : i 7→ f(i) · g(i)

And it is clear that the zero-element is given to be the constant function 0 : I → R : i 7→ 0. And if R is a ring then we also have a unit element given to be the constant function 1 : I → R : i 7→ 1.

• We continue with the example above. Instead of the direct product we may also take the direct sum of R over I. That is we may regard

R⊕I := ⊕i∈I R = { f : I → R | # supp(f) < ∞ }

where supp(f) := { i ∈ I | f(i) 6= 0 }. It is clear R⊕I ⊆ RI truly is a subset of the direct product. But be careful: as in any non-zero ring R 6= 0 we have 1 6= 0, we find that for infinite index sets I the unit element 1 : I → R : i 7→ 1 is not contained in R⊕I. That is R⊕I need not be a ring, even if R is such.

(1.90) Remark:
As above let ∅ 6= I be an arbitrary index set and for any i ∈ I let (Ri,+, ·) be a (semi-)ring having the zero element 0i (and unit element 1i if any). Then let us denote the direct product of the Ri by ΠR and the direct sum of the Ri by ΣR. Then we would like to remark the following

• It is clear that (ΠR,+, ·) truly becomes a (semi-)ring under the operations above. And if we denote the zero-element of any Ri by 0i then the zero-element of ΠR is given to be 0 = (0i).


Prob one easily checks all the properties required for a semi-ring. As an example we prove the associativity of the addition: let a = (ai), b = (bi) and c = (ci) ∈ ΠR, then ((a + b) + c)i = (a + b)i + ci = (ai + bi) + ci = ai + (bi + ci) = ai + (b+ c)i = (a+ (b+ c))i.

• Clearly ΠR is commutative if and only if Ri is commutative for any i ∈ I. And ΠR is a ring if and only if Ri is a ring for any i ∈ I. In the latter case let us denote the unit element of Ri by 1i. Then the unit element of ΠR is given to be 1 = (1i).

Prob if any Ri is commutative, then for any a = (ai), b = (bi) ∈ ΠR we get (ab)i = aibi = biai = (ba)i which is the commutativity of ΠR. And if ΠR is commutative, then any Ri is commutative, by the same argument. If any Ri is a ring then for any a = (ai) ∈ ΠR we get (1a)i = 1iai = ai and (a1)i = ai1i = ai. And if ΠR is a ring, then any Ri is a ring, by the same argument.

• By definition ΣR ⊆ ΠR is a subset. But ΣR even is a sub-semi-ring ΣR ≤s ΠR of ΠR. Yet ΣR need not be a subring of ΠR (even if any Ri is a ring), as (1i) need not be contained in ΣR.

Prob consider any a = (ai) and b = (bi) ∈ ΣR and let us denote the set S(a) := { i ∈ I | ai 6= 0i }. E.g. we get S(0) = ∅ and hence 0 ∈ ΣR. Further it is clear that S(a + b) ⊆ S(a) ∪ S(b), S(−a) = S(a) and S(ab) ⊆ S(a) ∩ S(b). In particular all these sets are finite again and hence a+ b, −a and ab ∈ ΣR, which had to be shown.

• Note that we get ΣR = ΠR if and only if I is a finite set. And in the case of finitely many semi-rings R1, . . . , Rn we also use the slightly lax notation (note that thereby R1 × · · · ×Rn = R1 ⊕ · · · ⊕Rn)

R1 × · · · ×Rn := ∏i=1..n Ri

R1 ⊕ · · · ⊕Rn := ⊕i=1..n Ri

• It is clear that the product of (semi-)rings is associative and commutative up to isomorphy. That is if we consider the (semi-)rings (R,+, ·), (S,+, ·) and (T,+, ·), then we obtain the following isomorphies of (semi-)rings

R⊕ S ∼=s S ⊕R : (a, b) 7→ (b, a)

(R⊕ S)⊕ T ∼=s R⊕ (S ⊕ T ) : ((a, b), c) 7→ (a, (b, c))

Prob it is immediately clear that the above mappings are homomorphisms of (semi-)rings by definition of the operations on the product. And likewise it is clear (this is an elementary property of cartesian products) that these mappings also are bijective.

• Note that ΠR usually does not inherit any nice properties from the Ri (except being commutative or a ring). E.g. Z is an integral domain, but Z2 = Z×Z contains the zero-divisors (1, 0) and (0, 1).
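The zero-divisor claim for Z2 = Z × Z takes only a couple of lines to verify (own snippet, not from the text):

```python
def mul(x, y):
    # componentwise multiplication on Z x Z
    return (x[0] * y[0], x[1] * y[1])

x, y = (1, 0), (0, 1)
assert x != (0, 0) and y != (0, 0)
assert mul(x, y) == (0, 0)   # two non-zero elements with product zero
```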


• For any j ∈ I let us denote the canonical projection from ΠR (or ΣR respectively) by πj . Likewise let ιj denote the canonical injection of Rj into ΠR (or ΣR respectively). That is we let

πj : ΣR → Rj : (ai) 7→ aj

ιj : Rj → ΣR : aj 7→ ( i 7→ aj if i = j, 0i if i 6= j )

Then it is clear that πj and ιj are homomorphisms of semi-rings satisfying πjιj = 11j . In particular πj is surjective and ιj is injective. And we will often interpret Rj as a sub-semi-ring of ΠR (or ΣR respectively) by using the isomorphism ιj : Rj ∼=s im(ιj).

(1.91) Proposition: (viz. 338) (♦)
Let (R,+, ·) and (S,+, ·) be any commutative rings and consider the ideals a i R and b i S. Then we obtain an ideal a⊕ b of R⊕ S by letting

a⊕ b := { (a, b) | a ∈ a, b ∈ b }

Conversely let us denote the canonical projections of R ⊕ S to R and S respectively by % : R⊕ S → R : (a, b) 7→ a and σ : R⊕ S → S : (a, b) 7→ b. If now u i R⊕ S is an ideal then we get the identity

u = %(u)⊕ σ(u)

And thereby we obtain an explicit description of all ideals (respectively of all radical, prime or maximal ideals) of R⊕ S, to be the following

idealR⊕ S = { a⊕ b | a ∈ idealR, b ∈ idealS }

sradR⊕ S = { a⊕ b | a ∈ sradR, b ∈ sradS }

specR⊕ S = { p⊕ S | p ∈ specR } ∪ { R⊕ q | q ∈ specS }

smaxR⊕ S = { m⊕ S | m ∈ smaxR } ∪ { R⊕ n | n ∈ smaxS }

(1.92) Theorem: (viz. 343)

(i) Consider any commutative ring (R,+, ·) and two elements e, f ∈ R. Further let us denote the respective principal ideals by a := eR and b := fR. If now e and f satisfy e + f = 1 and ef = 0 then we get a + b = R and a ∩ b = { 0 }, formally that is

e+ f = 1, ef = 0 =⇒ a + b = R, a ∩ b = { 0 }

(ii) Now let (R,+, ·) be any (not necessarily commutative) ring and a, b i R be two ideals of R satisfying a+ b = R and a ∩ b = { 0 }. Then we obtain the following isomorphy of rings

R ∼=r R/a⊕R/b : x 7→ (x+ a, x+ b)


(iii) Chinese Remainder Theorem
Let (R,+, ·) be any commutative ring and consider finitely many ideals ai i R (where i ∈ 1 . . . n) of R, that are pairwise coprime. That is for any i 6= j ∈ 1 . . . n we assume

ai + aj = R

Let us now denote the intersection of all the ai by a := a1 ∩ · · · ∩ an. Then we obtain the following isomorphy of rings

R/a ∼=r ⊕i=1..n R/ai : x+ a 7→ (x+ a1, . . . , x+ an)

(1.93) Remark:

• Combining (i) and (ii) we immediately see that, whenever we are given two elements e, f ∈ R in a commutative ring R, such that e + f = 1 and ef = 0, then this induces a decomposition of R into two rings

R ∼=r R/eR⊕R/fR : x 7→ (x+ eR, x+ fR)

• Clearly (ii) is just a special case of the Chinese remainder theorem (iii). Nevertheless we gave a separate formulation (and proof) of the statement to be easily accessible. In fact (ii) talks about decomposing R into smaller rings, whereas (iii) is about solving equations:

• Suppose we are given the elements ai ∈ R (where i ∈ 1 . . . n). Then the surjectivity of the isomorphism in (iii) guarantees that there is a solution x ∈ R to the following system of congruencies

x+ a1 = a1 + a1

...

x+ an = an + an

Now let x be any solution of this system, then due to the isomorphy in (iii) we also find that the set of all solutions is given to be x+ a.

• (♦) Let us consider an example of the above in the integers R := Z. Let a1 := 8Z and a2 := 15Z. As 8 and 15 are relatively prime we truly find a1 + a2 = Z. And also a = a1 ∩ a2 = 120Z, since the least common multiple of 8 and 15 is 120. It is elementary to verify the following two systems of congruencies

105 + 8Z = 1 + 8Z        16 + 8Z = 0 + 8Z
105 + 15Z = 0 + 15Z      16 + 15Z = 1 + 15Z

Hence if we are given any a1, a2 ∈ Z then we let x := 105a1 + 16a2 ∈ Z to solve the general system of congruences

x+ 8Z = a1 + 8Z

x+ 15Z = a2 + 15Z
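The recipe above is easy to check by machine. A small Python sketch of this worked example (the names `crt_solution` and the moduli 8, 15 are taken from the text; nothing else is assumed):

```python
# The worked example above in R = Z with a1 = 8Z and a2 = 15Z: the
# element x = 105*a1 + 16*a2 solves both congruences simultaneously,
# and the full solution set is the coset x + 120Z.
def crt_solution(r1, r2):
    # 105 = 1 mod 8, 105 = 0 mod 15;  16 = 0 mod 8, 16 = 1 mod 15
    return (105 * r1 + 16 * r2) % 120

x = crt_solution(3, 7)
assert x % 8 == 3 and x % 15 == 7
print(x)   # 67
```

Since the map in (iii) is an isomorphism, every other solution differs from 67 by a multiple of 120.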


Chapter 2

Commutative Rings

2.1 Maximal Ideals

(2.1) Remark: (viz. 302)

(i) Let us first recall the definition of a partial order on some non-empty set X ≠ ∅. This is a relation ≤ on X [formally that is a subset of the form ≤ ⊆ X × X] such that for any x, y and z ∈ X we get

x = y =⇒ x ≤ y
x ≤ y, y ≤ z =⇒ x ≤ z
x ≤ y, y ≤ x =⇒ x = y

[where we already wrote x ≤ y for the formal statement (x, y) ∈ ≤]. In this case we also call (X,≤) a partially ordered set, and it is customary to use the notation (for any x, y ∈ X)

x < y :⇐⇒ x ≤ y and x ≠ y

(ii) And ≤ is said to be a linear order on X (respectively (X,≤) is called a linearly ordered set), iff ≤ is a partial order such that any two elements of X are comparable, formally

∀ x, y ∈ X : x ≤ y or y ≤ x

And in this case we may define the maximum x ∨ y and minimum x ∧ y of any two elements x, y ∈ X to be the following elements of X

x ∧ y := { x if x ≤ y        x ∨ y := { y if x ≤ y
          { y if y ≤ x                 { x if y ≤ x

(iii) Example: if S is any set (even S = ∅ is allowed), then we may take the power set X := P(S) = {A | A ⊆ S} of S. Then the inclusion relation ⊆, which is defined by (for any two A, B ∈ X)

A ⊆ B :⇐⇒ (∀x : x ∈ A =⇒ x ∈ B)


is a partial order on X. However the inclusion ⊆ almost never is a total order. E.g. regard S := {0, 1}. Then the elements {0} and {1} ∈ X of X are not comparable. In what follows we will usually consider a set X ⊆ idealR ⊆ P(R) of ideals of a commutative ring R under the inclusion relation ⊆.

(iv) Now let (X,≤) be a partially ordered set and A ⊆ X be a subset. Then we define the sets A_* of minimal and A^* of maximal elements of A to be the following

A_* := {a_* ∈ A | ∀ a ∈ A : a ≤ a_* =⇒ a = a_*}
A^* := {a^* ∈ A | ∀ a ∈ A : a^* ≤ a =⇒ a = a^*}

And an element a_* ∈ A_* is said to be a minimal element of A. Likewise a^* ∈ A^* is said to be a maximal element of A. Note that in general it may happen that A has several minimal (or maximal) elements or even none at all. Example: N_* = {0} and N^* = ∅ under the usual order ≤ on N.

(v) If (X,≤) is a linearly ordered set and A ⊆ X is a subset, then maximal and minimal elements of A are uniquely determined. Formally that is

a, b ∈ A_* =⇒ a = b

a, b ∈ A^* =⇒ a = b

(vi) Let again (X,≤) be a linearly ordered set and A = {a1, . . . , an} ⊆ X be a finite subset of X. Then there is a (uniquely) determined minimal element a_* (resp. maximal element a^*) of A. And these are given to be

A^* = {a^*} where a^* = ((a1 ∨ a2) ∨ . . . ) ∨ an

A_* = {a_*} where a_* = ((a1 ∧ a2) ∧ . . . ) ∧ an
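The fold in (vi) can be made concrete in a small Python sketch (the set A and the names `join`/`meet` are illustrative choices, not from the text), using the linearly ordered set (Z, ≤):

```python
from functools import reduce

# (vi) in miniature: in a linearly ordered set the maximum of a finite
# subset is the fold ((a1 v a2) v ...) v an of the pairwise join.
join = lambda x, y: y if x <= y else x      # x v y  (maximum of two)
meet = lambda x, y: x if x <= y else y      # x ^ y  (minimum of two)

A = [5, 2, 9, 4]
a_sup = reduce(join, A)                     # the unique maximal element
a_inf = reduce(meet, A)                     # the unique minimal element
assert (a_sup, a_inf) == (max(A), min(A)) == (9, 2)
```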

(vii) Once again let (X,≤) be any partially ordered set and A ⊆ X be a subset of X. Then it is clear that ≤ induces another partial order ≤A on A by restricting

≤A := (≤) ∩ (A×A)

And we will write ≤ for ≤A again, that is we write (A,≤) instead of the more correct form (A,≤A). Now a subset C ⊆ X is said to be a chain in X iff (C,≤) is linearly ordered. Equivalently that is iff any two elements of C are comparable, formally

∀x, y ∈ C : x ≤ y or y ≤ x

(viii) Lemma of Zorn
Let (X,≤) be a partially ordered set (in particular X ≠ ∅). Further suppose that any chain C of X has an upper bound u in X, formally

∀C ⊆ X : C chain =⇒ ∃u ∈ X ∀ c ∈ C : c ≤ u


Then X already contains a (not necessarily unique) maximal element

∃ x^* ∈ X ∀ x ∈ X : x^* ≤ x =⇒ x = x^*

Nota: though it may come as a surprise, the lemma of Zorn surely is the single most powerful tool in mathematics! We will see its impact on several occasions: e.g. the existence of maximal ideals or the existence of bases (in vector-spaces). However we do ask the reader to refer to the literature (on set-theory) for a proof. In fact the lemma of Zorn is just one of the equivalent reformulations of the axiom of choice. Hence you might as well regard the lemma of Zorn as a set-theoretic axiom.

(ix) In most cases we will use a quite simple form of the lemma of Zorn: consider an arbitrary collection of sets Z. Now verify that

(1) Z ≠ ∅ is non-empty

(2) if C ⊆ Z is a chain in Z (that is C ≠ ∅ and for any A, B ∈ C we get A ⊆ B or B ⊆ A) then we get ⋃C ∈ Z again

Then (by the lemma of Zorn) Z already contains a (not necessarily unique) ⊆-maximal element Z ∈ Z^*. That is for any A ∈ Z we get

Z ⊆ A =⇒ A = Z

(x) Of course the lemma of Zorn can also be applied to find minimal elements. In analogy to the above we obtain the following special case: consider an arbitrary collection of sets Z. Now verify that

(1) Z ≠ ∅ is non-empty

(2) if C ⊆ Z is a chain in Z (that is C ≠ ∅ and for any A, B ∈ C we get A ⊆ B or B ⊆ A) then we get ⋂C ∈ Z again

Then (by the lemma of Zorn) Z already contains a (not necessarily unique) ⊆-minimal element Z ∈ Z_*. That is for any A ∈ Z we get

A ⊆ Z =⇒ A = Z

(2.2) Definition:
Let (R,+,·) be a commutative ring and ∅ ≠ A ⊆ idealR be a non-empty family of ideals of R. Then we recall the definition of maximal a^* ∈ A^* and minimal a_* ∈ A_* elements (concerning the order ⊆) of A:

A^* := {a^* ∈ A | ∀ a ∈ A : a^* ⊆ a =⇒ a = a^*}
A_* := {a_* ∈ A | ∀ a ∈ A : a ⊆ a_* =⇒ a = a_*}


(2.3) Definition:
Let (R,+,·) be a commutative ring, then m is said to be a maximal ideal of R, iff the following three statements hold true

(1) m ⊴i R is an ideal

(2) m ≠ R is proper

(3) m is a maximal element of idealR \ {R}. That is for any ideal a ⊴i R

m ⊆ a =⇒ a = m or a = R

The set smaxR of all maximal ideals is called the maximal spectrum of R and the intersection of all maximal ideals is called the Jacobson radical of R

smaxR := {m ⊆ R | (1), (2) and (3)}        jacR := ⋂ smaxR

Nota: in the light of this definition the maximal spectrum is nothing but the set of maximal elements of the set of non-full ideals: smaxR = (idealR \ {R})^*.

(2.4) Lemma: (viz. 360)

(i) Let (R,+,·) be a commutative ring and ∅ ≠ A ⊆ idealR be a chain of ideals (that is for any a, b ∈ A we have a ⊆ b or b ⊆ a). Then the union of all the a ∈ A is an ideal of R, formally

∅ ≠ A ⊆ idealR chain =⇒ ⋃A ⊴i R ideal

(ii) Let R ≠ a ⊴i R be a non-full ideal in the commutative ring (R,+,·), then there is a maximal ideal of R containing a, formally

R ≠ a ⊴i R =⇒ ∃ m ∈ smaxR : a ⊆ m

(iii) In a commutative ring (R,+,·) the group of units R∗ of R is precisely the complement of the union of all maximal ideals of R, formally

R∗ = R \ ⋃ smaxR

(iv) The zero-ring is the one and only ring without maximal ideals, formally

R = 0 ⇐⇒ smaxR = ∅
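Statement (iii) can be watched directly in a small finite ring. A Python sketch for Zn with n = 12; the description of smax(Zn) as the ideals pZn for the prime divisors p of n is a standard fact assumed here, not proved in this section:

```python
from math import gcd

# Check (iii) of (2.4) in Z12: the units of Z12 are exactly the
# elements lying outside every maximal ideal 2Z12 and 3Z12.
n = 12
prime_divisors = [p for p in range(2, n + 1)
                  if n % p == 0 and all(p % q for q in range(2, p))]
union_of_maximals = {(p * k) % n for p in prime_divisors for k in range(n)}
units = {x for x in range(n) if gcd(x, n) == 1}

assert units == set(range(n)) - union_of_maximals   # R* = R \ U smaxR
```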


(2.5) Proposition: (viz. 372)
Let (R,+,·) be a commutative ring and m ⊴i R be an ideal of R. Then the following four statements are equivalent

(a) m is a maximal ideal

(b) the quotient ring R/m is a field

(c) the quotient ring R/m is simple - i.e. it only contains the trivial ideals

a ⊴i R/m =⇒ a = 0 + m or a = R/m

(d) m is coprime to any principal ideal that is not already contained in m, formally that is: for any a ∈ R we get the implication

a ∉ m =⇒ m + aR = R

(2.6) Proposition: (viz. 372)
Let (R,+,·) be any commutative ring, then we can reformulate the Jacobson radical of R: for any j ∈ R the following two statements are equivalent

(a) j ∈ jacR

(b) ∀ a ∈ R we get 1− aj ∈ R∗

And if a ⊴i R is an ideal of R, then a is contained in the Jacobson radical iff 1 + a is a subgroup of the multiplicative group. That is, equivalent are

(a) a ⊆ jacR

(b) 1 + a ⊆ R∗

(c) 1 + a ≤g R∗
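The first equivalence (a) ⇔ (b) is small enough to verify exhaustively in a finite ring. A Python sketch in Z8, where the only maximal ideal is 2Z8 (a standard fact, assumed here), so jac(Z8) = {0, 2, 4, 6}:

```python
from math import gcd

# Exhaustive check of (2.6) in Z8:  j lies in jacR  iff  1 - a*j is a
# unit of Z8 for every a.
n = 8
jac = {0, 2, 4, 6}                       # = 2Z8, the Jacobson radical
for j in range(n):
    cond_b = all(gcd((1 - a * j) % n, n) == 1 for a in range(n))
    assert (j in jac) == cond_b
```

The check works because j even forces 1 − aj odd (hence a unit of Z8), while j odd gives the non-unit 1 − j at a = 1.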

(2.7) Proposition: (viz. 373)
Let (R,+,·) be a commutative ring and let m1, . . . , mk ∈ smaxR be finitely many pairwise distinct maximal ideals of R. Then we obtain the following statements

∀ i ≠ j ∈ 1 . . . k : mi + mj = R

m1 ∩ · · · ∩ mk = m1 . . . mk

m1 ⊃ m1m2 ⊃ . . . ⊃ m1m2 . . . mk


2.2 Prime Ideals

(2.8) Definition:
Let (R,+,·) be a commutative ring, then p is said to be a prime ideal of R, iff the following three statements hold true

(1) p ⊴i R is an ideal

(2) p ≠ R is proper

(3) ∀ a, b ∈ R we get ab ∈ p =⇒ a ∈ p or b ∈ p

And the set specR of all prime ideals of R is called the spectrum of R, formally

specR := {p ⊆ R | (1), (2) and (3)}

An ideal p_* ⊴i R is said to be a minimal prime ideal of R, iff it is a minimal element of specR. That is p_* ∈ specR and for any prime ideal p ∈ specR we get the implication p ⊆ p_* =⇒ p_* = p. Let us finally denote the set of all minimal prime ideals of R by

sminR := (specR)_*

(2.9) Proposition: (viz. 373)
Let (R,+,·) be a commutative ring and p ⊴i R be an ideal of R. Then the following four statements are equivalent

(a) p is a prime ideal

(b) the quotient ring R/p is a non-zero (R/p ≠ 0) integral domain

(c) the complement U := R \ p is multiplicatively closed, this is to say that

(1) 1 ∈ U
(2) u, v ∈ U =⇒ uv ∈ U

(d) p ≠ R is non-full and for any two ideals a, b ⊴i R we get the implication

ab ⊆ p =⇒ a ⊆ p or b ⊆ p
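Condition (3) of the definition can be tested mechanically in a finite ring. A Python sketch in Z12, whose proper ideals are dZ12 for the divisors d of 12 (the helper names `ideal` and `is_prime_ideal` are illustrative, not from the text):

```python
# Recover the prime ideals of Z12 directly from condition (3) of (2.8).
n = 12

def ideal(d):                      # the principal ideal dZ12
    return {(d * k) % n for k in range(n)}

def is_prime_ideal(p):
    proper = p != set(range(n))    # condition (2): p is proper
    return proper and all(a in p or b in p
                          for a in range(n) for b in range(n)
                          if (a * b) % n in p)

primes = [d for d in (2, 3, 4, 6) if is_prime_ideal(ideal(d))]
assert primes == [2, 3]   # 2Z12 and 3Z12 are prime; 4Z12, 6Z12 are not
```

For instance 4Z12 fails because 2 · 2 = 4 ∈ 4Z12 while 2 ∉ 4Z12, exactly the kind of witness condition (3) forbids.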

(2.10) Remark:
At this point we can reap an important harvest of the theory: maximal ideals are prime. Using the machinery of algebra this can be seen in a most beautiful way: if m is a maximal ideal, then R/m is a field by (2.5). But fields are (by definition non-zero) integral domains. Hence m already is prime by (2.9). Of course it is also possible to give a direct proof - an example of such will be presented for (2.19).


(2.11) Proposition: (viz. 374)
Let (R,+,·) be a commutative ring and p ⊴i R be a prime ideal of R, then

(i) If a1, . . . , ak ∈ R are finitely many elements then by induction it is clear that we obtain the following implication

a1 . . . ak ∈ p =⇒ ∃ i ∈ 1 . . . k : ai ∈ p

(ii) Likewise let a1, . . . , ak ⊴i R be finitely many ideals of R, then by induction we also find the following implication

a1 ∩ . . . ∩ ak ⊆ p =⇒ ∃ i ∈ 1 . . . k : ai ⊆ p

(iii) prime avoidance
Consider some ideals b1, . . . , bn ⊴i R where 2 ≤ n ∈ N such that b3, . . . , bn ∈ specR are prime. If now a ⊴i R is another ideal, then we obtain the following implication

a ⊆ b1 ∪ . . . ∪ bn =⇒ ∃ i ∈ 1 . . . n : a ⊆ bi

(2.12) Example:
In a less formalistic way the prime avoidance lemma can be put like this: if an ideal a is contained in the union of finitely many ideals bi of which at most two are non-prime, then it already is contained in one of these. In spite of the odd assumption the statement is rather sharp. In fact it may happen that a is contained in the union of three strictly smaller ideals (which of course are all non-prime in this case). Let us give an example of this: fix the field E := Z2 and the ideal o := 〈s, t〉i^2 = 〈s^2, st, t^2〉i ⊴i E[s, t]. Then the ring in consideration is

R := E[s, t]/o

Note that any residue class of f ∈ E[s, t] can be represented in the form f + o = f[0, 0] + f[1, 0]s + f[0, 1]t + o. That is R contains precisely 8 elements, namely a + bs + ct + o where a, b and c ∈ E = Z2.

a := 〈s + o, t + o〉i
= {f + o | f ∈ E[s, t], f[0, 0] = 0} = {0 + o, s + o, t + o, s + t + o}


On the other hand let us also take the following ideals b1 := (s + o)R, b2 := (t + o)R and b3 := (s + t + o)R. Now an easy computation (e.g. for b1 we get (s + o)(a + bs + ct + o) = as + o) yields

b1 = {0 + o, s + o}    b2 = {0 + o, t + o}    b3 = {0 + o, s + t + o}

And hence it is immediately clear that any bi ⊂ a is a strict subset, but the union of these ideals precisely is the ideal a again

a = b1 ∪ b2 ∪ b3
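This counterexample is finite, so it can be verified by complete enumeration. A Python sketch encoding the 8 elements of R = Z2[s,t]/(s,t)^2 as triples (a, b, c) ↔ a + bs + ct (an encoding chosen here for illustration):

```python
from itertools import product

def mul(x, y):
    # Multiplication in R = Z2[s,t]/(s,t)^2, where s^2 = st = t^2 = 0.
    (a1, b1, c1), (a2, b2, c2) = x, y
    return ((a1 * a2) % 2, (a1 * b2 + a2 * b1) % 2, (a1 * c2 + a2 * c1) % 2)

R = list(product((0, 1), repeat=3))          # all 8 elements of the ring

def principal(g):                            # the principal ideal gR
    return {mul(g, x) for x in R}

a  = {(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1)}   # the ideal <s, t>/o
b1 = principal((0, 1, 0))                            # sR
b2 = principal((0, 0, 1))                            # tR
b3 = principal((0, 1, 1))                            # (s+t)R

assert all(b < a for b in (b1, b2, b3))      # each bi is a strict subset
assert b1 | b2 | b3 == a                     # yet their union is all of a
```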

(2.13) Lemma: (viz. 375)
Let (R,+,·) and (S,+,·) be any two commutative rings and denote their prime spectra by X := spec(R) and Y := spec(S) respectively. Now consider some ring-homomorphism ϕ : R → S. Then ϕ induces a well-defined map on the spectra of S and R, by virtue of

spec (ϕ) : Y → X : q 7→ ϕ−1(q)

For any element a ∈ R let us denote Xa := {p ∈ X | a ∉ p} and for any ideal a ⊴i R let us denote V(a) := {p ∈ X | a ⊆ p}. If now a ∈ R, a ⊴i R and b ⊴i S, then spec(ϕ) satisfies the following properties

(specϕ)−1(Xa) = Yϕ(a)

(specϕ)(V(b)) ⊆ V(ϕ−1(b))

(specϕ)−1(V(a)) = V(〈ϕ(a)〉i)

(2.14) Proposition: (viz. 375)

(i) Let (R,+,·) be a commutative ring and P ⊆ specR be a chain of prime ideals of R (that is for any p, q ∈ P we have p ⊆ q or q ⊆ p). Then the intersection of all the p ∈ P is a prime ideal of R, formally

∅ ≠ P ⊆ specR chain =⇒ ⋂P ∈ specR prime

(ii) Let ∅ ≠ P ⊆ specR be a non-empty set of prime ideals of the commutative ring (R,+,·). And assume that P is closed under ⊆, i.e.

∀ p ∈ specR ∀ q ∈ P : p ⊆ q =⇒ p ∈ P

Then P already contains a minimal element, formally that is P_* ≠ ∅.


(iii) Let (R,+,·) be a commutative ring and consider an arbitrary ideal a ⊴i R and a prime ideal q ⊴i R. We now give a short list of possible conditions we may impose and the respective assumptions that have to be supposed for a and q in this case

    assumption    condition(p)
    R ≠ 0         none
    nothing       p ⊆ q
    a ≠ R         a ⊆ p
    a ⊆ q         a ⊆ p ⊆ q

Then there is a prime ideal p of R that is minimal among all prime ideals satisfying the condition imposed. Formally that is

∅ ≠ {p ∈ specR | condition(p)}_*

(iv) Let (R,+,·) be a commutative ring, a ⊴i R be any ideal and q ⊴i R a prime ideal with a ⊆ q. Then there is a prime ideal p_* minimal over a that is contained in q. Formally that is

∃ p_* ∈ {p ∈ specR | a ⊆ p}_* with a ⊆ p_* ⊆ q

(v) Let (R,+,·) be a commutative ring, a ⊴i R be an ideal of R and U ⊆ R be a multiplicatively closed set (that is 1 ∈ U and u, v ∈ U implies uv ∈ U). If now a ∩ U = ∅ then the set of all ideals b ⊴i R satisfying a ⊆ b ⊆ R \ U contains maximal elements. And any such maximal element is a prime ideal, formally

∅ ≠ {b ⊴i R | a ⊆ b ⊆ R \ U}^* ⊆ specR

(2.15) Corollary: (viz. 470)

(i) Let (R,+,·) be a non-zero (R ≠ 0) commutative ring, then the set of zero-divisors of R is the union of a certain set of prime ideals of R

zdR = ⋃ {p ∈ specR | p ⊆ zdR}

(ii) In a commutative ring (R,+,·) the minimal prime ideals are already contained in the zero-divisors of R, formally that is

p ∈ sminR =⇒ p ⊆ zdR


2.3 Radical Ideals

(2.16) Definition:
Let (R,+,·) be a commutative ring, a ⊴i R be an ideal of R and b ∈ R be an element. Then we define the radical √a and the fraction a : b of a to be

√a := {a ∈ R | ∃ k ∈ N : a^k ∈ a}

a : b := {a ∈ R | ab ∈ a}

Now a is said to be a radical ideal (sometimes this is also called a perfect ideal in the literature), iff a equals its radical, that is iff we have

(1) a ⊴i R

(2) a = √a

And thereby we define the radical spectrum sradR of R to be the set of all radical ideals of R, formally that is

sradR := {a ⊴i R | a = √a}
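The definition of √a is directly computable in R = Z. A small Python sketch for a = 72Z, where √a = 6Z since 72 = 2^3 · 3^2 and 6 = 2 · 3 is the product of the distinct prime divisors (the helper `in_radical` and the search bound are illustrative choices):

```python
# Brute-force look at the radical in R = Z: an element x lies in
# sqrt(nZ) iff some power x^k is divisible by n.  Here n = 72.
n = 72

def in_radical(x, bound=10):
    # small exponent bound suffices: x^k gains prime factors quickly
    return any(x ** k % n == 0 for k in range(1, bound))

members = [x for x in range(60) if in_radical(x)]
assert members == [x for x in range(60) if x % 6 == 0]   # sqrt(72Z) = 6Z
print(members)   # [0, 6, 12, 18, 24, 30, 36, 42, 48, 54]
```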

(2.17) Proposition: (viz. 377)
Let (R,+,·) be a commutative ring, a ⊴i R be an ideal of R and b ∈ R be an element. Then we obtain the following statements

(i) The fraction a : b is an ideal of R again containing a, formally that is

a ⊆ a : b ⊴i R

(ii) The radical √a even is a radical ideal of R containing a, formally again

a ⊆ √a ∈ sradR

(iii) Let (R,+,·) be a commutative ring, a, b ⊴i R ideals and b ∈ R, then

a ⊆ b =⇒ a : b ⊆ b : b

a ⊆ b =⇒ √a ⊆ √b

(iv) Let (R,+,·) be a commutative ring, a ⊴i R an ideal and b ∈ R, then

a : b = R ⇐⇒ b ∈ a

Thus if p ⊴i R even is a prime ideal then we have precisely two cases

p : b = { R if b ∈ p
        { p if b ∉ p

(v) The intersection of a collection A ≠ ∅ of radical ideals is a radical ideal

∅ ≠ A ⊆ sradR =⇒ ⋂A ∈ sradR


(vi) Let again (R,+,·) be a commutative ring, ai ⊴i R be an arbitrary family of ideals (i ∈ I) and b ∈ R. Then we find the equality

⋂i∈I (ai : b) = (⋂i∈I ai) : b

(2.18) Proposition: (viz. 378)
Let (R,+,·) be a commutative ring and a ⊴i R be an ideal of R. Then the following four statements are equivalent

(a) √a ⊆ a

(b) a = √a is a radical ideal

(c) ∀ a ∈ R we get: ∃ k ∈ N : a^k ∈ a =⇒ a ∈ a

(d) the quotient ring R/a is reduced, i.e. it contains no non-zero nilpotents

nil(R/a) = 0 + a

(2.19) Corollary: (viz. 378)
Let (R,+,·) be a commutative ring and m, p and a ⊴i R be ideals of R respectively. Then we have gained the following table of equivalences

m maximal ⇐⇒ R/m field

p prime ⇐⇒ R/p non-zero integral domain

a radical ⇐⇒ R/a reduced

where a ring S is said to be reduced, iff s^k = 0 (for some k ∈ N) implies s = 0. And subsuming the properties of the respective quotient rings we thereby find the inclusions

smaxR ⊆ specR ⊆ sradR


(2.20) Proposition: (viz. 378)

(i) The radical of a is the intersection of all prime ideals containing a, i.e.

√a = ⋂ {p ∈ specR | a ⊆ p} = ⋂ {p ∈ specR | a ⊆ p}_*

(ii) In particular the nil-radical is just the intersection of all prime ideals

nilR = √0 = ⋂ specR = ⋂ sminR

(iii) In a commutative ring (R,+,·) the map a ↦ √a is a projection (i.e. idempotent). That is for any ideal a ⊴i R we find

√(√a) = √a

(iv) Let (R,+,·) be a commutative ring, a ⊴i R be an arbitrary ideal and p ⊴i R be a prime ideal. If now k ∈ N, then we get the implication

a^k ⊆ p ⊆ √a =⇒ p = √a

(v) Let (R,+,·) be a commutative ring, a, b ⊴i R be ideals, then we get

√(ab) = √(a ∩ b) = √a ∩ √b

(vi) And thereby - if (R,+,·) is a commutative ring, 1 ≤ k ∈ N and a ⊴i R is an ideal, then we obtain the equality

√(a^k) = √a

(vii) Let a ⊴i R be a finitely generated ideal in the commutative ring (R,+,·). And let b ⊴i R be another ideal of R. Then we get

a ⊆ √b =⇒ ∃ k ∈ N : a^k ⊆ b

(viii) Let again (R,+,·) be a commutative ring and ai ⊴i R be an arbitrary family of ideals (i ∈ I), then we obtain the following inclusions

∑i∈I √ai ⊆ √(∑i∈I ai)

√(⋂i∈I ai) ⊆ ⋂i∈I √ai


(2.21) Remark:
Due to (2.17.(v)) the intersection of prime ideals yields a radical ideal. And in (2.19) we have just seen that maximal and prime ideals already are radical. Now recall that the Jacobson radical jacR has been defined to be the intersection of all maximal ideals. And we have just seen in (2.20.(ii)) above that the nil-radical is the intersection of all (minimal) prime ideals. This means that both are radical ideals (and hence the name)

jacR = ⋂ smaxR ∈ sradR

nilR = ⋂ specR ∈ sradR

(2.22) Example:
Note however that taking sums and radicals of (even finitely many) ideals does not commute. That is there is no analogous statement to (2.20.(v)) concerning sums a + b. As an example consider R := E[s, t], the polynomial ring in two variables, where (E,+,·) is an arbitrary field. Now pick

p := (s^2 − t)R and q := tR

It is clear that p and q are prime ideals (as the quotients are isomorphic to the integral domain E[s] under the isomorphisms induced by f(s, t) ↦ f(s, s^2) and f(s, t) ↦ f(s, 0) respectively). However the sum p + q of both is not even a radical ideal, in fact we get

p + q = s^2R + tR ⊂ sR + tR = √(p + q)

Prob: We first prove p + q = s^2R + tR - thus consider any h = (s^2 − t)f + tg ∈ p + q, then h can be rewritten as h = s^2f + t(g − f) ∈ s^2R + tR. But conversely t ∈ q ⊆ p + q and s^2 = (s^2 − t) + t ∈ p + q. In particular s is contained in the radical of p + q and hence p + q ⊆ sR + tR ⊆ √(p + q). As sR + tR is prime (even maximal) this implies equality with the radical ideal. So finally we wish to prove s ∉ p + q (and in particular that p + q is no radical ideal). Thus suppose s = s^2f + tg for some polynomials f, g ∈ R. Then (as s is prime and does not divide t) s divides g, say g = sh for some h ∈ R. And hence we get s = s^2f + sth. Eliminating s we find 1 = sf + th, an equation that cannot hold true (as 1 is of degree 0 and sf + th of degree at least 1). Thus we got s ∉ s^2R + tR.

(2.23) Example: (viz. 381)
Be aware that radicals may be very large when compared with their original ideals. E.g. it may happen that a = aR is a principal ideal, but √a is not finitely generated. In fact in the chapter of proofs we give an example where there even is no n ∈ N such that (√a)^n ⊆ a, as well.


(2.24) Remark:
Combining (2.17.(iii)) and (2.20.(viii)) it is clear that for an arbitrary collection of ideals ai ⊴i R (where i ∈ I) in a commutative ring (R,+,·) we obtain the following equalities

√(∑i∈I √ai) = √(∑i∈I ai)

√(⋂i∈I √ai) = √(⋂i∈I ai)

(2.25) Example:
The inclusions in (2.20.(viii)) need not be equalities! As an example let us consider the ring of integers R = Z. And let us take I = N and ai = p^iZ for some prime number p ∈ Z. As any non-zero element of Z is divisible by finite powers p^i only, the intersection of the ai is 0. On the other hand √ai = pZ due to (2.20.(vi)) and since pZ is a prime (and hence radical) ideal of Z. Thereby we found the counterexample

√(⋂i∈N ai) = √0 = 0 ⊂ pZ = ⋂i∈N pZ = ⋂i∈N √ai

(2.26) Proposition: (viz. 381)
Let now ϕ : R → S be a ring-homomorphism between the commutative rings (R,+,·) and (S,+,·). And consider the ideals a ⊴i R and b ⊴i S. Then let us denote the transferred ideals of a and b by

b ∩ R := ϕ−1(b) ⊴i R

aS := 〈ϕ(a)〉i ⊴i S

Note that in case ϕ is surjective (but not generally), we get aS = ϕ(a). Using these notions we obtain the following statements

(i) √(b ∩ R) = √b ∩ R

(ii) (√a)S ⊆ √(aS) = √((√a)S)

(iii) Now let us abbreviate by ? any of the words maximal ideal, prime ideal or radical ideal. Then we find the equivalence

b is a ? ⇐⇒ b ∩ R is a ?

And if ϕ : R ↠ S even is surjective then we get another equivalence

a + kn(ϕ) is a ? ⇐⇒ aS is a ?


2.4 Noetherian Rings

In this section we will introduce and study noetherian and artinian rings. Both types of rings are defined by a chain condition on ideals - ascending in the case of noetherian and descending in the case of artinian rings. So one might expect a completely dual theory for these two objects. Yet this is not the case! In fact it turns out that the artinian property is much stronger: any artinian ring already is noetherian, but the converse is false. And while examples of noetherian rings are abundant, there are few artinian rings. On the other hand both kinds of rings have several similar properties: e.g. both are very well-behaved under taking quotients or localisations.

Throughout the course of this book we will see that noetherian rings have lots of beautiful properties. E.g. they allow the Lasker-Noether decomposition of ideals. However in non-noetherian rings peculiar things may happen. Together with the easy properties of inheritance this makes noetherian rings one of the predominant objects of commutative algebra and algebraic geometry. Artinian rings are of lesser importance, yet they arise naturally in number theory and the theory of algebraic curves.

(2.27) Definition: (viz. 413)
Let (R,+,·) be a commutative ring, then R is said to be noetherian iff it satisfies one of the following four equivalent conditions

(a) R satisfies the ascending chain condition (ACC) of ideals - that is any family of ideals ak ⊴i R of R (where k ∈ N) such that

a0 ⊆ a1 ⊆ . . . ⊆ ak ⊆ ak+1 ⊆ . . .

becomes eventually constant - that is there is some s ∈ N such that

a0 ⊆ a1 ⊆ . . . ⊆ as = as+1 = . . .

(b) Every non-empty family of ideals contains a maximal element, formally that is: for any ∅ ≠ A ⊆ idealR there is some a^* ∈ A^*.

(c) Every ideal of R is finitely generated, formally that is: for any ideal a ⊴i R there are some a1, . . . , ak ∈ R such that we get

a = 〈a1, . . . , ak 〉i = a1R+ · · ·+ akR

(d) Every prime ideal of R is finitely generated, formally that is: for any prime ideal p ∈ specR there are some a1, . . . , ak ∈ R such that

p = 〈a1, . . . , ak 〉i = a1R+ · · ·+ akR
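Condition (c) is easy to watch in R = Z, where every finitely generated ideal even collapses to a principal one. A Python sketch (the generators 12, 18, 30 and the coefficient window are illustrative choices, not from the text):

```python
from functools import reduce
from itertools import product
from math import gcd

# In R = Z the ideal <12, 18, 30> = 12Z + 18Z + 30Z equals dZ for
# d = gcd(12, 18, 30) = 6, so in particular it is finitely generated.
gens = [12, 18, 30]
d = reduce(gcd, gens)                        # d = 6

# all Z-linear combinations with small coefficients
combos = {sum(c * g for c, g in zip(cs, gens))
          for cs in product(range(-6, 7), repeat=len(gens))}

assert all(v % d == 0 for v in combos)       # <12,18,30> lies inside 6Z
assert d in combos                           # and 6 itself is a combination
```

Since d = 6 is itself a combination (e.g. 6 = −2·12 + 30), every multiple of 6 lies in the ideal, so ⟨12, 18, 30⟩i = 6Z.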

(2.28) Remark: (viz. 413)
Parts of these equivalences are no specialty of commutative rings, but a general fact of ordered sets. To be precise: if (X,≤) is a partially ordered set, then the following two statements are equivalent:


(a) Any non-empty subset ∅ ≠ A ⊆ X of X has a maximal element. That is there is some a^* ∈ A such that

∀ a ∈ A : a^* ≤ a =⇒ a = a^*

(b) Any ascending chain within X stabilizes, that is for any (xk) ⊆ X (where k ∈ N) such that x0 ≤ x1 ≤ x2 ≤ . . . we have

∃ s ∈ N ∀ i ∈ N : xs+i = xs

We will now define artinian rings (by property (a) for the formalists among us). And again we will find an even longer list of equivalent statements. The similarity in the definition gives strong reason to study these objects simultaneously. But, as so often, only one half of the equivalences can be proved straightforwardly while the other half requires heavy-duty machinery that is not yet available. For a novice reader it might hence be wise to concentrate on noetherian rings only. These will be the important objects later on. Artinian rings not only are far less common but they also are severely more difficult to handle.

(2.29) Definition: (viz. ??)
Let (R,+,·) be a commutative ring, then R is said to be artinian iff it satisfies one of the following six equivalent conditions

(a) R satisfies the descending chain condition (DCC) of ideals - that is any family of ideals ak ⊴i R of R (where k ∈ N) such that

a0 ⊇ a1 ⊇ . . . ⊇ ak ⊇ ak+1 ⊇ . . .

becomes eventually constant - that is there is some s ∈ N such that

a0 ⊇ a1 ⊇ . . . ⊇ as = as+1 = . . .

(b) Every non-empty family of ideals contains a minimal element, formally that is: for any ∅ ≠ A ⊆ idealR there is some a_* ∈ A_*.

(c) R is of finite length as an R-module - that is there is an upper bound on the length of a strictly ascending chain of ideals, formally

ℓ(R) := sup { k ∈ N | ∃ a0, a1, . . . , ak ⊴i R : a0 ⊂ a1 ⊂ . . . ⊂ ak } < ∞

(d) R is a noetherian ring and any prime ideal already is maximal, formally

smaxR = specR

(e) R is a noetherian ring and even any minimal prime ideal already is a maximal ideal, formally again

smaxR = sminR


(f) R is a noetherian ring whose Jacobson radical equals its nil-radical (i.e. jacR = nilR) and that is semi-local (i.e. #smaxR < ∞).

(2.30) Example:

• Any field E is an artinian (and noetherian) ring, as it only contains the trivial ideals 0 and E. In particular the only chain of ideals with strict inclusions is 0 ⊂ E, and this is finite.

• Any finite ring - such as Zn - is artinian (and noetherian), as it only contains finitely many ideals (and in particular any chain of ideals has to be eventually constant).

• We will study principal ideal domains (PIDs) in section 2.6. These are integral domains R in which every ideal a ⊴i R is generated by some element a ∈ R (that is a = aR). In particular any ideal is finitely generated and hence any PID is noetherian.

• The integers Z form an easy example of a noetherian ring (as they are a PID). But they are not artinian, e.g. they contain the infinitely decreasing chain of ideals Z ⊃ 2Z ⊃ 4Z ⊃ 8Z ⊃ . . . ⊃ 2^kZ ⊃ . . .

• We will soon see (a special case of the Hilbert basis theorem) that the polynomial ring R = E[t1, . . . , tn] in finitely many variables over a field E is a noetherian ring. And again this is no artinian ring, as it contains the infinitely decreasing chain of ideals R ⊃ t1R ⊃ t1^2R ⊃ . . .

• Of course there are examples of non-noetherian rings, too. E.g. let E be a field and consider R := E[ti | 1 ≤ i ∈ N] the polynomial ring in (countably) infinitely many indeterminates. Then R is an integral domain but it is not noetherian. Just let pk := 〈t1, t2, . . . , tk〉i be the ideal generated by the first k indeterminates. Then we obtain an infinitely ascending chain of (prime) ideals p1 ⊂ p2 ⊂ . . . ⊂ pk ⊂ . . .

• Non-noetherian rings may well lie inside of noetherian rings. As an example regard the ring R = E[ti | 1 ≤ i ∈ N] again. As R is an integral domain R is contained in its quotient field F. But as F is a field it in particular is noetherian.

• (♦) We would finally like to present an example from complex analysis: let O := {f : C → C | f holomorphic} be the ring of entire functions (note that this is isomorphic to the ring C{z} of globally convergent power series). Now let ak := {f ∈ O | ∀ k ≤ n ∈ N : f(n) = 0}. Clearly the ak ⊴i O are ideals of O; they even form a strictly ascending chain of ideals (this is a special case of the Weierstrass product theorem). Again O is an integral domain (by the identity theorem of holomorphic functions) but - as we have just seen - it is not noetherian.


(2.31) Proposition: (viz. 416)
Let (R,+,·) be a commutative ring and fix ? as an abbreviation for either of the words noetherian or artinian. Then the following statements hold true:

(i) If a ⊴i R is an ideal and R is ?, then the quotient R/a also is a ? ring.

(ii) Let (S,+,·) be any ring and let ϕ : R ↠ S be a surjective ring-homomorphism. If now R is a ? ring, then S is a ? ring, too.

(iii) Suppose R and S are commutative rings. Then the direct sum R ⊕ S is a ? ring if and only if both R and S are ? rings.

(2.32) Remark:
We are about to formulate Hilbert's basis theorem. Yet in order to do this we first straighten out a few things for the reader who is not familiar with the notion of an R-algebra. Consider a commutative ring (S,+,·) and a subring R ≤r S of it. Further let E ⊆ S be any subset, then we introduce the R-subring of S generated by E to be the following

R[E] := { ∑i=1..m ai ∏j=1..n ei,j | m, n ∈ N, ai ∈ R, ei,j ∈ E }

Nota that 0 ∈ R[E] as m = 0 and 1 ∈ R[E] as n = 0 are allowed. And thereby R[E] is a subring of S, containing R. In fact R[E] is precisely the R-subalgebra of S generated by E in the sense of section 3.2. And for those who already know the notion of polynomials in several variables this can also be put in the following form

R[E] = {f(e1, . . . , en) | 1 ≤ n ∈ N, f ∈ R[t1, . . . , tn], ei ∈ E}

In particular if E is a finite set - say E = {e1, . . . , ek} - we also write R[e1, . . . , ek] := R[E]. And this is just the image of the polynomial ring R[t1, . . . , tk] under the evaluation homomorphism ti ↦ ei.

(2.33) Theorem: (viz. 417) Hilbert Basis Theorem
Let (S,+, ·) be a commutative ring and consider a subring R ≤r S. Further suppose that S is finitely generated (as an R-algebra) over R, that is there are finitely many e1, . . . , en ∈ S such that S = R[e1, . . . , en]. If now R is a noetherian ring, then S is a noetherian ring, too:

R noetherian =⇒ S = R[e1, . . . , en] noetherian


(2.34) Remark:
The major work in proving Hilbert's Basis Theorem lies in regarding the polynomial ring S = R[t]. The rest is just induction and (2.31.(iii)). And for this we chose a constructive proof. That is, given a non-zero ideal 0 ≠ u ⊴i S we want to find a finite set of generators of u. To do this let

ak := { lc(f) | f ∈ u, deg(f) = k } ∪ {0}

As ak is an ideal of R it is finitely generated, say ak = 〈ak,1, . . . , ak,n(k) 〉i where ak,i = lc(fk,i) for some fk,i ∈ u with deg(fk,i) = k. Further the ak form an ascending chain, and hence there is some s ∈ N such that as = as+1 = as+2 = . . . . Then a trick argument shows

u = 〈fk,i | k ∈ 0 . . . s, i ∈ 1 . . . n(k) 〉i

(2.35) Definition:
Let (R,+, ·) be any commutative ring and q′, q ⊴i R be prime ideals of R. Then we say that q′ lies directly under q iff q′ is maximal among the prime ideals strictly contained in q. And we abbreviate this by writing q′ ⊂∗ q. Formally that is

q′ ⊂∗ q :⇐⇒ q′ ∈ { p ∈ specR | p ⊂ q }∗

⇐⇒ (1) q′ ∈ specR with q′ ⊂ q

(2) ∀ p ∈ specR : q′ ⊆ p ⊆ q =⇒ q′ = p or p = q

(2.36) Proposition: (viz. 419)
Let (R,+, ·) be a noetherian ring; then we obtain the following four statements:

(i) Any non-empty set of ideals of R contains a maximal element. That is for any ∅ ≠ A ⊆ idealR there is some a∗ ∈ A such that

∀ a ∈ A : a∗ ⊆ a =⇒ a∗ = a

(ii) Any non-empty set of prime ideals of R contains maximal and minimal elements. That is for any ∅ ≠ P ⊆ specR the set P^∗ of maximal and the set P_∗ of minimal elements of P satisfy

P^∗ ≠ ∅ and P_∗ ≠ ∅

(iii) In particular for any two prime ideals p, q ⊴i R with p ⊂ q there is a prime ideal q′ lying directly under q. Formally that is

∃ q′ ∈ specR : p ⊆ q′ ⊂∗ q

(iv) Let a ⊴i R be any ideal with a ≠ R; then there are only finitely many prime ideals lying minimally over a. Formally that is

1 ≤ #{ p ∈ specR | a ⊆ p }_∗ < ∞


(2.37) Proposition: (viz. ??)
Let (R,+, ·) be an artinian ring; then we obtain the following seven statements:

(i) An artinian integral domain already is a field, that is the equivalence

R artinian, integral domain ⇐⇒ R field

(ii) Any artinian ring already is a noetherian ring, that is the implication

R artinian =⇒ R noetherian

(iii) Maximal, prime and minimal prime ideals of R all coincide, formally

smaxR = specR = sminR

(iv) R is a semilocal ring, that is it only has finitely many maximal ideals

#smaxR < ∞

(v) The Jacobson radical and the nil-radical coincide and are nilpotent, that is

jacR = nilR and ∃ n ∈ N : (jacR)^n = 0

(vi) The reduction R/nilR is isomorphic to the (finite) direct sum of its residue fields - to be precise we get

R/nilR ∼=r ⊕_{m ∈ smaxR} R/m

(vii) R can be decomposed into a finite number of local artinian rings. That is there are local, artinian rings Li (where i ∈ 1 . . . r := #smaxR) with

R ∼=r L1 ⊕ · · · ⊕ Lr

(2.38) Corollary: (viz. 419)
Let (R,+, ·) be any commutative ring; then R is the union of a net of noetherian subrings. That is there is a family (Ri) (where i ∈ I) of subrings such that the following statements hold true

(1) Ri ≤r R is a subring for any i ∈ I

(2) Ri is a noetherian ring for any i ∈ I

(3) R = ⋃_{i∈I} Ri

(4) ∀ i, j ∈ I ∃ k ∈ I such that Ri ∪Rj ⊆ Rk


2.5 Unique Factorisation Domains

The reader most probably is aware of the fact that any integer n ∈ Z (apart from 0) admits a decomposition into primes. In this and the next section we will study what kinds of rings qualify to have this property. We will see that noetherian integral domains come close, but do not suffice. And we will see that PIDs do allow prime decomposition. That is, the truth lies somewhere in between, but cannot be pinned down precisely.

According to the good, old traditions of mathematics we're in for a definition: a ring will simply be said to be factorial, iff it allows prime decomposition (a mathematical heretic might object: "If you can't prove it, define it!"). Then PIDs are factorial. And the lemma of Gauss is one of the few answers to the question in which cases the factorial property is preserved.

So we have to start with a lengthy series of (luckily very easy) definitions.But once we have assembled the language needed for prime decompositionwe will be rewarded with several theorems of great beauty - the fundamentaltheorem of arithmetic at their center.

(2.39) Definition: (viz. 420)
Let (R,+, ·) be a commutative ring and a, b ∈ R be two elements of R. Then we say that a divides b (written as a | b), iff one of the following three equivalent properties is satisfied

(a) b ∈ aR

(b) bR ⊆ aR

(c) ∃h ∈ R : b = ah

And we define the order of a in b to be the highest number k ∈ N such that a^k still divides b (respectively ∞ if a^k divides b for any k ∈ N)

〈a | b〉 := sup{ k ∈ N | b ∈ a^k R } ∈ N ∪ {∞}

Now let R even be an integral domain. Then we say that a and b are associated (written as a ≈ b), iff one of the following three equivalent properties is satisfied

(a) aR = bR

(b) a | b and b | a

(c) ∃α ∈ R∗ : b = αa
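For R = Z the order 〈a | b〉 of (2.39) can be computed by repeated division. A minimal illustrative sketch (the helper name is ours, not the text's):

```python
def order(a, b):
    """Order <a | b> in Z: the largest k such that a**k divides b.

    Assumes |a| > 1 and b != 0, so that the supremum is a finite number.
    """
    k = 0
    while b % a == 0:
        b //= a
        k += 1
    return k

# 24 = 2^3 * 3, so the order of 2 in 24 is 3 and the order of 5 in 24 is 0:
print(order(2, 24), order(5, 24))   # → 3 0
```

Note that for a unit a = ±1 the loop would not terminate, matching the convention that the order of a unit in any b ≠ 0 is ∞.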


(2.40) Proposition: (viz. 421)

(i) Let (R,+, ·) be any commutative ring; then for any a, b and c ∈ R the relation of divisibility has the following properties

1 | b and a | 0

a | ab

0 | b ⇐⇒ b = 0

a | 1 ⇐⇒ a ∈ R∗

a | b =⇒ ac | bc

(ii) If a, b and u ∈ R such that u ∈ nzdR is a non-zero divisor of R, thenwe even find the following equivalence

a | b ⇐⇒ au | bu

(iii) Divisibility is a reflexive, transitive relation that is antisymmetric up to associateness. Formally that is for any a, b and c ∈ R

a | a

a | b and b | c =⇒ a | c

a | b and b | a =⇒ a ≈ b

(iv) Let (R,+, ·) be an integral domain; then associateness ≈ is an equivalence relation on R. And for any a ∈ R its equivalence class [a] under ≈ is given to be aR∗. Formally that is

[a] = aR∗ := { αa | α ∈ R∗ }

(2.41) Remark:
If (R,+, ·) is an integral domain and a, b ∈ R such that a ≠ 0 and a | b, then the element h ∈ R satisfying b = ah is uniquely determined. As we sometimes wish to refer to it, it deserves a name of its own. Hence if R is an integral domain, a, b ∈ R with a ≠ 0 and a | b, we let

b/a := h where b = ah

Prob we have to show the uniqueness: thus suppose b = ag and b = ah; then a(g − h) = 0. And as a ≠ 0 and R is an integral domain this yields g − h = 0 and hence g = h.


(2.42) Example:
Let (R,+, ·) be a commutative ring and a, b ∈ R. Let us denote the ideal generated by a and b by a := aR + bR. Now consider any polynomial f ∈ R[t], 1 ≤ k ∈ N and let d := deg(f). Then a straightforward computation yields

f(a)a^k − f(b)b^k = (a − b) ∑_{i=0}^{d} f[i] ∑_{j=0}^{k−1+i} a^j b^{k−1+i−j}

A special case of this is the third binomial rule a² − b² = (a − b)(a + b), which is obtained by letting f(t) := t and k = 1. Thus the cumbersome double sum on the right is the quotient of f(a)a^k − f(b)b^k by a − b. And in the case of an integral domain (and a ≠ b) it even is uniquely determined. And from the explicit representation of the quotient one also finds

(f(a)a^k − f(b)b^k) / (a − b) = ∑_{i=0}^{d} f[i] ∑_{j=0}^{k−1+i} a^j b^{k−1+i−j} ∈ a^{k−1}
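The identity of (2.42) is easy to check numerically over Z; a small sketch (our own helper, assuming the reconstruction of the double sum above):

```python
def check_identity(coeffs, a, b, k):
    """Verify f(a)a^k - f(b)b^k = (a-b) * sum_i f[i] * sum_j a^j b^(k-1+i-j)
    over Z, where coeffs lists f[0], f[1], ... (lowest degree first)."""
    f = lambda t: sum(c * t**i for i, c in enumerate(coeffs))
    lhs = f(a) * a**k - f(b) * b**k
    rhs = (a - b) * sum(
        coeffs[i] * sum(a**j * b**(k - 1 + i - j) for j in range(k + i))
        for i in range(len(coeffs))
    )
    return lhs == rhs

# f(t) = 3t^2 + t, checked on a small grid of values a, b and exponents k:
print(all(check_identity([0, 1, 3], a, b, k)
          for a in range(-3, 4) for b in range(-3, 4) for k in range(1, 4)))   # → True
```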

(2.43) Remark:
Let (R,+, ·) be a commutative ring; recall that we have defined the set of relevant elements of R to be the non-zero non-units of R

R• := R \ (R∗ ∪ {0})

• R• is empty if and only if R = 0 or R is a field, that is, equivalent are

R• = ∅ ⇐⇒ R = 0 or R field

Prob if R = 0 then trivially R• = ∅. Conversely R• = ∅ if and only if R∗ = R \ {0}. And this is just the definition of R being a field.

• The complement of R• is multiplicatively closed (recall that this can be put as: 1 ∉ R• and a, b ∉ R• =⇒ ab ∉ R•)

R \ R• = R∗ ∪ {0} is multiplicatively closed

Prob first of all we have 1 ∈ R∗ ∪ {0} = R \ R• (whether R = 0 or not). Now consider a, b ∈ R \ R•. If a = 0 or b = 0 then ab = 0 and hence ab ∈ R \ R•. Else if a, b ≠ 0 then a, b ∈ R∗ and hence ab ∈ R∗, too.

• If R is an integral domain, then R• is closed under multiplication, i.e.

R integral domain =⇒ ( a, b ∈ R• =⇒ ab ∈ R• )

Prob if a, b ∈ R• then a, b ≠ 0 and hence ab ≠ 0, as R is an integral domain. Also we get a, b ∉ R∗ and likewise this implies ab ∉ R∗ (else we had a^{−1} = b(ab)^{−1} for example). Together this means ab ∈ R•.


(2.44) Definition:
Let (R,+, ·) be a commutative ring and p ∈ R be an element of R. Then p is said to be irreducible iff p is a relevant element of R that cannot be further factored into relevant elements. Formally iff

(1) p ∈ R•

(2) ∀ a, b ∈ R : p = ab =⇒ a ∈ R∗ or b ∈ R∗

Likewise p ∈ R is said to be a prime element of R, iff it is a relevant elementof R such that it divides (at least) one of the factors in every product itdivides. Formally again, iff

(1) p ∈ R•

(2) ∀ a, b ∈ R : p | ab =⇒ p | a or p | b

(2.45) Definition:
Let (R,+, ·) be a commutative ring and a ∈ R, and let ? abbreviate either the word prime or the word irreducible. Then a tuple (α, p1, . . . , pk) (with k ∈ N and k = 0 allowed!) is said to be a ? decomposition of a, iff

(1) α ∈ R∗

(2) a = αp1 . . . pk

(3) ∀ i ∈ 1 . . . k : pi ∈ R is ?

If (α, p1, . . . , pk) at least satisfies (1) and (3), then let us agree to call it a ? series. Now two ? series (α, p1, . . . , pk) and (β, q1, . . . , ql) are said to be essentially equal (written as (α, p1, . . . , pk) ≈ (β, q1, . . . , ql)), iff k = l, both decompose the same element and the pi and qj are pairwise associated. Formally that is (α, p1, . . . , pk) ≈ (β, q1, . . . , ql) :⇐⇒ (1), (2) and (3) where

(1) k = l

(2) αp1 . . . pk = βq1 . . . ql

(3) ∃σ ∈ Sk : ∀ i ∈ 1 . . . k : pi ≈ qσ(i)

(2.46) Example: (♦)
Let (R,+, ·) be an integral domain and a ∈ R; then the linear polynomial p(t) := t − a ∈ R[t] is prime. Note that this need not be true unless R is an integral domain. E.g. for R = Z6 we have

t − 1 = (2t + 1)(3t − 1) but t − 1 ∤ 2t + 1 and t − 1 ∤ 3t − 1

Prob consider f, g ∈ R[t] such that p | fg. That is there is some h ∈ R[t] such that (t − a)h = fg. In particular f(a)g(a) = (fg)(a) = (a − a)h(a) = 0. As R is an integral domain this means f(a) = 0 or g(a) = 0. And from this again we get p = t − a | f or p = t − a | g.
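The counterexample over Z6 can be verified by multiplying the two coefficient lists modulo 6; an illustrative sketch (helper name ours):

```python
def polymul_mod(f, g, m):
    """Multiply polynomials given as coefficient lists (lowest degree first),
    reducing all coefficients modulo m."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % m
    return h

# (2t + 1)(3t - 1) in Z6[t]: coefficient lists [1, 2] and [-1, 3]
print(polymul_mod([1, 2], [-1, 3], 6))   # → [5, 1, 0], that is t + 5 = t - 1 in Z6[t]
```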


(2.47) Definition:

• Let (R,+, ·) be a commutative ring again and ∅ ≠ A ⊆ R be a non-empty subset of R. Then we say that m ∈ R is a common multiple of A, iff every a ∈ A divides m, formally

A | m :⇐⇒ ∀ a ∈ A : a | m

• And thereby m is said to be a least common multiple of A, iff mis a common multiple that divides any other common multiple

(1) A | m

(2) ∀n ∈ R : A | n =⇒ m | n

And we denote the set of least common multiples of A by lcm(A), i.e.

lcm(A) := { m ∈ R | (1) and (2) }

• Analogous to the above we say that an element d ∈ R is a commondivisor of A, iff d divides every a ∈ A, formally again

d | A :⇐⇒ ∀ a ∈ A : d | a

• And thereby d is said to be a greatest common divisor of A, iff dis a common divisor that is divided by any other common divisor

(1) d | A

(2) ∀ c ∈ R : c | A =⇒ c | d

And we denote the set of greatest common divisors of A by gcd(A), i.e.

gcd(A) := { d ∈ R | (1) and (2) }

• Finally A is said to be relatively prime, iff 1 is a greatest common divisor of A. And by (2.40) this can also be formulated as

A relatively prime :⇐⇒ 1 ∈ gcd(A)

⇐⇒ ∀ c ∈ R : c | A =⇒ c ∈ R∗

By abuse of notation a finite family of elements a1, . . . , ak ∈ R is said to be relatively prime, iff the set {a1, . . . , ak} is relatively prime.

(2.48) Proposition: (viz. 421)

(i) Let (R,+, ·) be a commutative ring, p ∈ R be a prime element and a1, . . . , ak ∈ R be arbitrary elements of R. Then we obtain

p | ∏_{i=1}^{k} ai =⇒ ∃ i ∈ 1 . . . k : p | ai


(ii) Let (R,+, ·) be a commutative ring and p ∈ R. Then p is a prime element iff p ≠ 0 and the principal ideal pR ⊴i R is a prime ideal

p ∈ R prime ⇐⇒ p ≠ 0 and pR ⊴i R prime

(iii) Let (R,+, ·) be a commutative ring, p ∈ R and α ∈ R∗ a unit of R.Let us again abbreviate the word prime or irreducible by ?, then

p is ? ⇐⇒ αp is ?

(iv) Let (R,+, ·) be an integral domain, 0 ≠ a ∈ R and b ∈ R• be a non-zero non-unit. Then we obtain the following strict inclusion of ideals

(ab)R ⊂ aR

(v) Let (R,+, ·) be an integral domain. Then any prime element of Ralready is irreducible. That is for any p ∈ R we get

p is prime =⇒ p is irreducible

(vi) Let (R,+, ·) be an integral domain. Then the set D of all d ∈ R that allow a prime decomposition is saturated and multiplicatively closed. That is D := { αp1 . . . pk | α ∈ R∗, k ∈ N, pi ∈ R prime } satisfies

1 ∈ D

c, d ∈ D ⇐⇒ cd ∈ D

(vii) Let (R,+, ·) be an integral domain, p1, . . . , pk, q1, . . . , ql ∈ R be a finitecollection of prime elements of R (where k, l ∈ N with k = 0 and l = 0allowed) and α, β ∈ R∗ be two units of R. Then we obtain

αp1 . . . pk = βq1 . . . ql =⇒ k = l and ∃ σ ∈ Sk such that ∀ i ∈ 1 . . . k : pi ≈ qσ(i)

(viii) Let (R,+, ·) be an integral domain and ∅ ≠ A ⊆ R be a non-empty subset of R. Further let d, m ∈ R be two elements of R; then

m ∈ lcm(A) ⇐⇒ lcm(A) = mR∗

d ∈ gcd(A) ⇐⇒ gcd(A) = dR∗

(2.49) Theorem: (viz. 423)
Let (R,+, ·) be a noetherian integral domain and fix some elements a, b and p ∈ R where p is prime. Then we obtain the following statements

(i) If a ∈ R• then the order in which a divides b is finite, formally that is

〈a | b〉 ∈ N


(ii) The order of a prime element p ∈ R turns multiplications into additions

〈p | ab〉 = 〈p | a〉+ 〈p | b〉

(iii) Any a ≠ 0 admits an irreducible decomposition. That is there are some α ∈ R∗ and finitely many irreducibles p1, . . . , pk ∈ R such that

a = αp1 . . . pk

(iv) If 0 ≠ p ⊴i R is any prime ideal of R then p is generated by finitely many irreducibles. That is there are p1, . . . , pk ∈ R irreducible with

p = 〈p1, . . . , pk 〉i

(v) Let P ⊆ R• be a set of pairwise non-associate non-zero non-units of R. That is we assume that for any p, q ∈ P we get p ≈ q =⇒ p = q. Then for any a ≠ 0 the number of p ∈ P that divide a is finite, i.e.

#{ p ∈ P | a ∈ pR } < ∞

(vi) Let b ∈ R• be a non-zero, non-unit of R. Further consider a finitecollection q1, . . . , qk ∈ R of prime elements of R that are pairwisenon-associate. That is for any i, j ∈ 1 . . . k we assume

qi ≈ qj =⇒ i = j

If now p ∈ R is any prime element of R then, letting

a := ∏_{i=1}^{k} qi^{〈qi | b〉}

we obtain the statements

(1) a | b

(2) p | b =⇒ p | (b/a) xor ∃ i ∈ 1 . . . k : p ≈ qi

(2.50) Definition: (viz. 426)
We call (R,+, ·) a unique factorisation domain (which we will always abbreviate by UFD), iff R is an integral domain that further satisfies one of the following equivalent properties

(a) Every non-zero element admits a decomposition into prime elements. That is for any 0 ≠ a ∈ R we get the following statement

∃ (α, p) = (α, p1, . . . , pk) prime decomposition of a

(b) Every non-zero element admits an essentially unique decomposition into irreducible elements. That is for any 0 ≠ a ∈ R we have

(1) ∃ (α, p) = (α, p1, . . . , pk) irreducible decomposition of a

(2) (α, p), (β, q) irreducible decompositions of a =⇒ (α, p) ≈ (β, q)


(c) Every non-zero element admits a decomposition into irreducible elements and every irreducible element is prime. Formally that is

(1) ∀ 0 ≠ a ∈ R ∃ (α, p) irreducible decomposition of a

(2) ∀ p ∈ R : p prime ⇐⇒ p irreducible

(d) The principal ideals of R satisfy the ascending chain condition and every irreducible element is prime. Formally that is

(1) for any family of elements ak ∈ R (where k ∈ N) such that a0R ⊆ a1R ⊆ . . . ⊆ akR ⊆ ak+1R ⊆ . . . there is some s ∈ N such that for any i ∈ N we get as+iR = asR.

(2) ∀ p ∈ R : p prime ⇐⇒ p irreducible

(e) Any non-zero prime ideal of R contains a prime element of R, formally

∀ 0 ≠ p ∈ specR ∃ q ∈ R prime : q ∈ p

(f) (♦) Let D := { αp1 . . . pk | α ∈ R∗, k ∈ N, pi ∈ R prime } denote the set of all d ∈ R that allow a prime decomposition again. Then either R = 0 is the zero ring or the quotient field of R is precisely the localisation of R in D, formally

quotR = D^{−1}R

(2.51) Remark:

• The zero-ring R = 0 trivially is an UFD, as it does not contain a single non-zero element, so that condition (a) of UFDs is void (hence true).

• Property (b) guarantees that - in an UFD (R,+, ·) - any non-zero element 0 ≠ a ∈ R admits an irreducible (equivalently prime) decomposition a = αp1 . . . pk that even is essentially unique. And this uniqueness determines the number k of irreducible factors. Hence we may define the length of a to be precisely this number

ℓ(a) := k where (α, p1, . . . , pk) is a prime decomposition of a

• Any field (E,+, ·) is an UFD, as we allowed k = 0 for a prime decomposition (α, p1, . . . , pk). To be precise, if 0 ≠ a ∈ E then a ∈ E∗ already is a unit and hence (a) is a prime decomposition of a.

• In the next section we will prove that any PID (i.e. an integral domainin which every ideal can be generated by a single element) is an UFD.

• If (R,+, ·) is a noetherian integral domain in which any irreducibleelement is prime, then R is an UFD. This is clear from property (d).

• If (R,+, ·) is an UFD then so is the polynomial ring R[t] (this is theone of the lemmas of Gauss that will be proved in (??)).

• (♦) If (R,+, ·) is an UFD and U ⊆ R is multiplicatively closed, thenU−1R is an UFD too. In fact, if p ∈ R is a prime element then p/1either is a unit or a prime in U−1R. This will be proved in (2.110).


• (♦) A subring O ≤r C of the complex numbers is said to be analgebraic number ring, iff any a ∈ O satisfies an integral equation overZ (that is there is a normed polynomial f ∈ Z[t] such that f(a) = 0).We will see that any algebraic number ring O is a Dedekind domain.And it is also true, that O is an UFD if and only if it is a PID.

• This list of UFDs is almost exhaustive. At the end of this section we will append a list of counter-examples such that the reader may convince himself that the above equivalences cannot be generalized.

(2.52) Remark:
It is often useful not to regard all the prime (analogously irreducible) elements of R, but to restrict to a representative set modulo associateness. That is a subset P ⊆ R such that we obtain a bijection of the form

P ←→ { p ∈ R | p prime }/≈ : p ↦ pR∗

Prob this is possible since ≈ is an equivalence relation on R of which the equivalence classes are precisely the sets aR∗ by (2.40). In particular ≈ also is an equivalence relation on the set of prime elements of R. And by (2.48) we know that for any prime p its equivalence class among the primes is { q ∈ R | q prime, p ≈ q } = pR∗ again. Hence we may choose a representing system P by virtue of the axiom of choice.

(2.53) Example:

• The units of the integers Z are given to be Z∗ = {−1,+1}. That is associateness identifies all elements having the same absolute value, aZ∗ = {−a, a}. And thereby we can choose a representing system of Z modulo ≈ simply by choosing all non-negative elements a ≥ 0. Thus we find a representing system of the prime elements of Z by

P := { p ∈ Z | p prime, 0 ≤ p }

• (♦) Now let (E,+, ·) be any field and consider the polynomial ring E[t]. Then we will see that the units of E[t] are given to be E[t]∗ = E∗ = { at⁰ | 0 ≠ a ∈ E }. Thus associateness identifies all polynomials having the same coefficients up to a common factor. That is we get fE[t]∗ = { af | 0 ≠ a ∈ E }. Thus by requiring a polynomial f to be normed (i.e. f[deg(f)] = 1) we eliminate this ambiguity. Therefore we find a representing system of the prime elements of E[t] by

P := { p ∈ E[t] | p prime, normed }

(2.54) Theorem: (viz. 427)
Let (R,+, ·) be an UFD, p ∈ R be a prime element and 0 ≠ a, b ∈ R be any two non-zero elements. Then the following statements hold true

(i) Suppose (α, p1, . . . , pk) is a prime decomposition of a. Then for any prime p ∈ R the order of p in a can be expressed as

〈p | a〉 = #{ i ∈ 1 . . . k | p ≈ pi } ∈ N


(ii) The order of a prime element p ∈ R turns multiplications into additions

〈p | ab〉 = 〈p | a〉+ 〈p | b〉

(iii) a divides b if and only if for any prime p the order of p in a does notexceed the order of p in b. Formally that is the equivalence

a | b ⇐⇒ ∀ p ∈ R prime : 〈p | a〉 ≤ 〈p | b〉

(iv) Let P be a representing system of the prime elements modulo associateness. Then for any 0 ≠ a ∈ R we obtain a unique representation

∃! α ∈ R∗ ∃! n = (n(p)) ∈ ⊕_{p∈P} N : a = α ∏_{p∈P} p^{n(p)}

and thereby we even have n(p) = 〈p | a〉 such that the sum of all n(p) is precisely the length of a (and in particular finite). Formally that is

ℓ(a) = ∑_{p∈P} n(p) < ∞

Note that n ∈ ⊕_{p∈P} N, that is n : P → N is a map such that the number of p ∈ P with n(p) ≠ 0 is finite. And hence the product over all p^{n(p)} (where p ∈ P) is well-defined using the notations of section 1.3.

(v) Let P be a representing system of the prime elements modulo associateness again. And consider a non-empty subset ∅ ≠ A ⊆ R such that 0 ∉ A. Then A has a greatest common divisor, namely

∏_{p∈P} p^{m(p)} ∈ gcd(A) where m(p) := min{ 〈p | a〉 | a ∈ A }

Likewise if A = {a1, . . . , an} ⊆ R is a non-empty, finite (1 ≤ n ∈ N) subset with 0 ∉ A, then A has a least common multiple, namely

∏_{p∈P} p^{n(p)} ∈ lcm(A) where n(p) := max{ 〈p | a〉 | a ∈ A }

(vi) Let A := {a, b} and let d ∈ gcd(A) be a greatest common divisor and m ∈ lcm(A) a least common multiple of a and b. Then we obtain

ab ≈ dm

(vii) If 0 ≠ a, b, c ∈ R are any three non-zero elements of R, then the least common multiple and greatest common divisor of these can be evaluated recursively. That is if d ∈ gcd{a, b} and m ∈ lcm{a, b}, then

gcd{a, b, c} = gcd{d, c} and lcm{a, b, c} = lcm{m, c}
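For R = Z statements (v) and (vi) can be tried out directly: build d and m from the minima and maxima of the prime orders and check ab = dm. A self-contained sketch using naive trial division (helper names are ours):

```python
def factor(n):
    """Prime factorisation of |n| as a dict {prime: exponent}, by trial division."""
    n, f, p = abs(n), {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def gcd_lcm(a, b):
    """gcd and lcm of a, b built from min/max of the prime orders, as in (2.54)(v)."""
    fa, fb = factor(a), factor(b)
    d = m = 1
    for p in set(fa) | set(fb):
        d *= p ** min(fa.get(p, 0), fb.get(p, 0))
        m *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return d, m

d, m = gcd_lcm(84, 1330)
print(d, m, 84 * 1330 == d * m)   # → 14 7980 True
```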


(2.55) Proposition: (viz. 429) (♦)
Any UFD R is normal. Thereby a commutative ring (R,+, ·) is said to be normal (or integrally closed), iff R is an integral domain that is integrally closed in its quotient field. That is R is normal, iff

(1) R is an integral domain

(2) for any a, b ∈ R with a ≠ 0 we get: if there is some normed polynomial f ∈ R[t] such that f(b/a) = 0 ∈ quotR, then we already had a | b.

(2.56) Example: (viz. 430)

(i) UFDs need not be noetherian! As an example regard a field (E,+, ·); then the polynomial ring in countably many variables E[ti | i ∈ N] is an UFD by the lemma of Gauss (??). But it is not noetherian (by the examples in section 2.4).

(ii) Noetherian integral domains need not be UFDs! As an example we would like to present the following algebraic number ring

Z[√−3] := { a + ib√3 | a, b ∈ Z } ⊆ C

It is quite easy to see that in this ring 2 ∈ Z[√−3] is an irreducible element that is not prime. Hence Z[√−3] cannot be an UFD. Nevertheless it is an integral domain (being a subring of C) and noetherian (by Hilbert's basis theorem (2.33), as it is generated by √−3 over Z).

(iii) Residue rings (even integral domains) of UFDs need not be UFDs! As an example let us start with the polynomial ring ℝ[s, t] in two variables over the reals. Then ℝ[s, t] is an UFD by the lemma of Gauss (??). Now consider the residue ring

R := ℝ[s, t]/p where p := (s² + t² − 1)ℝ[s, t]

Then p(s, t) = s² + t² − 1 ∈ ℝ[s, t] is an irreducible (hence prime) polynomial, such that p is a prime ideal. Yet we are able to prove that R is no UFD (in fact we will prove that there is an irreducible, non-prime element in a certain localisation of R).


2.6 Principal Ideal Domains

(2.57) Definition:
Let (R,+, ·) be an integral domain and ν be a mapping of the following form

ν : R \ {0} → N

Then the ordered pair (R, ν) is said to be an Euclidean domain, iff R allows division with remainder - that is for any a, b ∈ R with a ≠ 0 there are q, r ∈ R such that we get

(1) b = qa+ r

(2) ν(r) < ν(a) or r = 0

(2.58) Remark:

• We will employ a notational trick to eliminate the distinction of the case r = 0 in the definition above. To do this we pick up a symbol −∞ and set −∞ < n for any n ∈ N. Then ν will be extended to

ν : R → N ∪ {−∞} with ν(0) := −∞ and ν(a) as before for a ≠ 0

Thereby the property to allow division with remainder simply reads as: for any a, b ∈ R with a ≠ 0 there are q, r ∈ R such that we get

(1) b = qa+ r

(2) ν(r) < ν(a)

• If (R, ν) is an Euclidean domain and a ∈ R satisfies ν(a) = 0, then a ∈ R∗ already is a unit of R. The converse need not be true, however. That is we have the inclusion (but not necessarily equality)

{ a ∈ R | ν(a) = 0 } ⊆ R∗

Prob using division with remainder we may find q, r ∈ R such that 1 = qa + r and ν(r) < ν(a) = 0. But this can only be if ν(r) = −∞ and hence r = 0. But this means 1 = qa again and hence a ∈ R∗.

(2.59) Example: (viz. 432)

(i) If (E,+, ·) is a field, then (E, ν) is an Euclidean domain under any function ν : E \ {0} → N. Because for any 0 ≠ a ∈ E and b ∈ E we may let q := ba^{−1} and r := 0.

(ii) The ring Z of integers is an Euclidean domain (Z, α) under the (slightly modified) absolute value α as an Euclidean function

α : Z → N ∪ {−∞} : k ↦ k if k > 0, −k if k < 0 and −∞ if k = 0


In fact consider 1 ≤ a ∈ Z and any b ∈ Z; then for q := b div a and r := b mod a (see below for a definition) we have b = aq + r and 0 ≤ r < a (in particular α(r) < α(a)). And in case a < 0 we can choose q and r similarly to have division with remainder.

b div a := max{ q ∈ Z | aq ≤ b }   and   b mod a := b − (b div a) · a
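For positive a these definitions of div and mod agree with Python's floor division // and remainder %; a literal sketch of the definition (function names ours):

```python
def int_div(b, a):
    """b div a for 1 <= a: the largest q in Z with a*q <= b."""
    q = 0
    while a * (q + 1) <= b:
        q += 1
    while a * q > b:
        q -= 1
    return q

def int_mod(b, a):
    """b mod a := b - (b div a) * a, which lies in 0 .. a-1 for 1 <= a."""
    return b - int_div(b, a) * a

print(int_div(-7, 3), int_mod(-7, 3))   # → -3 2, matching -7 // 3 and -7 % 3
```

Note that this always rounds q downwards, so the remainder is non-negative even for negative b.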

(iii) If (E,+, ·) is a field then the polynomial ring E[t] (in one variable) is an Euclidean domain (E[t], deg) under the degree

deg : E[t] \ {0} → N : f ↦ max{ k ∈ N | f[k] ≠ 0 }

This is true because of the division algorithm for polynomials - see (??). Note that the division algorithm can be applied as any non-zero polynomial can be normed, since E is a field. Hence R[t] will not be an Euclidean domain, unless R is a field. Also, though the division algorithm for polynomials can be generalized to Buchberger's algorithm, there is no Euclidean function ν making the polynomial ring E[t1, . . . , tn] (for n ≥ 2) an Euclidean domain.

(iv) Consider any d ∈ Z such that √d ∉ Q (refer to (??) for this). Then we consider the subring of C generated by √d, that is we regard

Z[√d] := { a + b√d | a, b ∈ Z } ⊆ C

Then the representation x = a + b√d ∈ Z[√d] is unique (that is for any a, b, f and g ∈ Z we get a + b√d = f + g√d ⇐⇒ (a, b) = (f, g)). And thereby we obtain a well-defined ring-automorphism on Z[√d] by virtue of x = a + b√d ↦ x̄ := a − b√d. Now define

ν : Z[√d] \ {0} → N : x = a + b√d ↦ |x x̄| = |a² − db²|

Then ν is multiplicative, that is ν(xy) = ν(x)ν(y) for any x, y ∈ Z[√d]. And it is quite easy to see that the units of Z[√d] are given to be

Z[√d]∗ = { x ∈ Z[√d] | ν(x) = 1 }

We will also prove that for d ≤ −3 the ring Z[√d] will not be an UFD, as 2 is an irreducible element that is not prime. Yet for d ∈ {−2, −1, 2, 3} we find that (Z[√d], ν) even is an Euclidean domain.
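The arithmetic of Z[√d] and the multiplicativity of ν are easy to experiment with; a sketch for d = 2, representing a + b√2 as the coefficient pair (a, b) (names and representation are our own choices):

```python
D = 2  # the square-free parameter d

def mul(x, y):
    """(a + b*sqrt(D)) * (f + g*sqrt(D)) as coefficient pairs."""
    (a, b), (f, g) = x, y
    return (a * f + D * b * g, a * g + b * f)

def nu(x):
    """The norm v(a + b*sqrt(D)) = |a^2 - D*b^2|."""
    a, b = x
    return abs(a * a - D * b * b)

x, y = (3, 1), (1, 2)
print(nu(mul(x, y)) == nu(x) * nu(y))   # → True, multiplicativity of the norm
print(nu((1, 1)))                        # → 1, so 1 + sqrt(2) is a unit of Z[sqrt(2)]
```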

(2.60) Proposition: (viz. 434)
Let (R, ν) be an Euclidean domain and 0 ≠ a, b ∈ R be two non-zero elements. Then a and b have a greatest common divisor g ∈ R that can be computed by the following (so-called Euclidean) algorithm


input: 0 ≠ a, b ∈ R
initialisation: if ν(a) ≤ ν(b) then (f ← a, g ← b) else (f ← b, g ← a)
algorithm:
    while f ≠ 0 do begin
        choose q, r ∈ R with g = qf + r and ν(r) < ν(f)
        g ← f, f ← r
    end-while
output: g

That is, given 0 ≠ a, b ∈ R we start the above algorithm, which returns g ∈ R, and for this g we get g ∈ gcd{a, b}. If now g ∈ gcd{a, b} denotes a greatest common divisor of a and b, then there are r and s ∈ R such that g = ra + sb. And these r and s can be computed (along with g) using the following refinement of the Euclidean algorithm

input: 0 ≠ a, b ∈ R
initialisation: if ν(a) ≤ ν(b) then (f1 ← b, f2 ← a) else (f1 ← a, f2 ← b)
algorithm:
    k ← 1
    repeat
        k ← k + 1
        choose qk and fk+1 ∈ R with fk−1 = qk fk + fk+1 and ν(fk+1) < ν(fk)
    until fk+1 = 0
    n ← k, g ← fn
    r2 ← 1, s2 ← 0
    r3 ← −q2, s3 ← 1
    for k = 3 to n − 1 step 1 do begin
        rk+1 ← rk−1 − qk rk
        sk+1 ← sk−1 − qk sk
    end-for
    if ν(a) ≤ ν(b) then (r ← rn, s ← sn) else (r ← sn, s ← rn)
output: g, r and s

(2.61) Example:
As an application we want to compute the greatest common divisor of a = 84 and b = 1330 in Z. That is we initialize the algorithm with f1 := b = 1330 and f2 := a = 84. Then we find 1330 = 15 · 84 + 70 (at k = 2), which is q2 = 15 and f3 = 70. Thus in the next step (now k = 3) we observe 84 = 1 · 70 + 14, that is we may choose q3 := 1 and f4 := 14. Now (at counter k = 4) we have 70 = 5 · 14 + 0, which means q4 = 5 and f5 = 0. Thus for n := k = 4 we have finally arrived at fk+1 = 0, terminating the first part of the algorithm. It returns g = fn = 14, the greatest common divisor of a = 84 and b = 1330. The following table summarizes the computations

116 2 Commutative Rings


    k    fk−1    fk    fk+1    qk
    2    1330    84      70    15
    3      84    70      14     1
    4      70    14       0     5

Now in the second part of the algorithm we initialize r2 := 1, s2 := 0, r3 := −q2 = −15 and s3 := 1. Then we compute r4 := r2 − q3r3 = 16 and s4 := s2 − q3s3 = −1. As n = 4 this already is r = r4 = 16 and s = s4 = −1. And in fact ra + sb = 14 = g. Again we summarize these computations in

    k    qk    rk    sk
    2    15     1     0
    3     1   −15     1
    4     5    16    −1
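The computations above are easy to mechanise. The following Python sketch implements the refined Euclidean algorithm of (2.60) over R = Z with ν the absolute value; the function name and the iterative bookkeeping (instead of the indexed sequences rk, sk) are our own, not part of the text.

```python
def extended_euclid(a, b):
    """Return (g, r, s) with g in gcd{a, b} and g = r*a + s*b, for nonzero
    integers a, b -- a sketch of the refined algorithm of (2.60) over Z."""
    # initialisation: f_prev plays the role of f1 (larger value first)
    if abs(a) <= abs(b):
        f_prev, f = b, a              # f1 <- b, f2 <- a
    else:
        f_prev, f = a, b
    # coefficient pairs expressing f_prev and f in terms of (f2, f1)
    r_prev, s_prev = 0, 1             # f_prev = 0*f2 + 1*f1
    r, s = 1, 0                       # f      = 1*f2 + 0*f1
    while f != 0:
        q, rem = divmod(f_prev, f)    # f_{k-1} = q_k * f_k + f_{k+1}
        f_prev, f = f, rem
        r_prev, r = r, r_prev - q * r
        s_prev, s = s, s_prev - q * s
    g = f_prev                        # g = r_prev*f2 + s_prev*f1
    if abs(a) <= abs(b):              # here f2 = a and f1 = b
        return g, r_prev, s_prev
    return g, s_prev, r_prev          # f2 = b, f1 = a: swap the coefficients

# the worked example (2.61): a = 84, b = 1330
print(extended_euclid(84, 1330))      # (14, 16, -1), i.e. 14 = 16*84 - 1*1330
```

Running it on the example of (2.61) reproduces the table: g = 14, r = 16, s = −1.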

(2.62) Proposition: (viz. 435)
Consider an integral domain (R,+, ·) and n ∈ N, then we recursively define

    R0 := R∗ = { a ∈ R | ∃ b ∈ R : ab = 1 }
    Rn+1 := { 0 ≠ a ∈ R | ∀ b ∈ R ∃ r ∈ Rn ∪ {0} : a | b − r }

(i) The sets Rn form an ascending chain of subsets of R, i.e. for any n ∈ N

    R0 ⊆ R1 ⊆ . . . ⊆ Rn ⊆ Rn+1 ⊆ . . .

(ii) If (R, ν) is an Euclidean domain then for any n ∈ N the set of a ∈ R with a ≠ 0 and ν(a) ≤ n is contained in Rn, formally that is

    { a ∈ R | ν(a) ≤ n } ⊆ Rn ∪ {0}

(iii) Conversely if the Rn cover R, that is R \ {0} = ⋃n Rn, then (R, µ) becomes an Euclidean domain under the following Euclidean function

    µ : R \ {0} → N : a ↦ min { n ∈ N | a ∈ Rn }

And thereby we obtain the following statements for any a, b ∈ R, b ≠ 0

µ(a) ≤ µ(ab)

µ(b) = µ(ab) ⇐⇒ a ∈ R∗

(iv) In particular we obtain the equivalency of the following two statements

    R \ {0} = ⋃n∈N Rn ⇐⇒ ∃ ν : (R, ν) Euclidean domain

(v) If (R, ν) is an Euclidean domain, then (R, µ) is an Euclidean domain, too, and for any 0 ≠ a ∈ R we get µ(a) ≤ ν(a). That is µ is the minimal Euclidean function on R.
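For R = Z the sets Rn of the proposition can be computed by brute force. The sketch below is our own code (not part of the text): since Rn is closed under negation, positive representatives suffice, and the computation suggests Rn ∩ N = {1, . . . , 2^(n+1) − 1}, i.e. that the minimal Euclidean function on Z is µ(a) = ⌊log2 |a|⌋.

```python
def next_level(level, bound):
    """Given R_n (positive representatives; R_n is closed under negation),
    return R_{n+1} intersected with [1, bound] for R = Z: a is in R_{n+1}
    iff every residue class mod a meets R_n united with {0}."""
    result = set()
    for a in range(1, bound + 1):
        hit = {0} | {r % a for r in level} | {(-r) % a for r in level}
        if hit == set(range(a)):
            result.add(a)
    return result

bound = 100
levels = [{1}]                        # R_0 = Z* gives positives {1}
for n in range(6):
    levels.append(next_level(levels[-1], bound))

# R_1 = {1,2,3} and R_2 = {1,...,7}, matching the 2^(n+1) - 1 pattern
assert levels[1] == {1, 2, 3}
assert levels[2] == set(range(1, 8))
```

This is only a finite sample, of course, but it illustrates how concrete the recursion of (2.62) is.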



(2.63) Definition:
Let (R,+, ·) be any commutative ring, then an ideal a ≤i R is said to be principal, iff it can be generated by a single element, i.e. a = aR for some a ∈ R. And R is said to be a principal ring iff every ideal of R is principal, that is iff

    ∀ a ≤i R ∃ a ∈ R : a = aR

Finally R is said to be a principal ideal domain (which we will always abbreviate by PID), iff it is an integral domain that also is principal, i.e.

(1) R is an integral domain

(2) R is a principal ring

(2.64) Remark:

• Clearly the trivial ideals a = 0 and a = R are always principal, as they can be generated by the elements 0 = 0R and R = 1R respectively.

• If R is a principal ring and a ≤i R is any ideal, then the quotient ring R/a is principal, too. Prob due to the correspondence theorem (1.61) any ideal u ≤i R/a is of the form u = b/a for some ideal a ⊆ b ≤i R. Thus there is some b ∈ R with b = bR. Now let u := b + a, then clearly u = { bh + a | h ∈ R } = u(R/a) is a principal ideal, too.

• Let R1, . . . , Rn be principal rings, then the direct sum R1 ⊕ · · · ⊕ Rn is a principal ring, too. Prob by (1.91) the ideals of the direct sum are of the form a1 ⊕ · · · ⊕ an for some ideals ai ≤i Ri. Thus by assumption there are some ai ∈ Ri such that ai = aiRi. And thereby it is clear that a1 ⊕ · · · ⊕ an = (a1, . . . , an)(R1 ⊕ · · · ⊕ Rn) is principal again.

(2.65) Lemma: (viz. 437)

(i) If (R,+, ·) is a non-zero PID then all non-zero prime ideals of R already are maximal (in a fancier notation kdimR ≤ 1), formally that is

    specR = smaxR ∪ {0}

(ii) If (R,+, ·) is a PID then R already is a noetherian ring and an UFD. That is any 0 ≠ a ∈ R admits an (essentially unique) decomposition a = αp1 . . . pk into prime elements pi - viz. (2.50).

(iii) If (R, ν) is an Euclidean domain then R already is a PID. In fact if 0 ≠ a ≤i R is a non-zero ideal then a = aR for any a ∈ a satisfying

    ν(a) = min { ν(b) | 0 ≠ b ∈ a }

(iv) If (R,+, ·) is an integral domain such that any prime ideal is principal, then R already is a PID. Put formally that is the equivalency

R PID ⇐⇒ ∀ p ∈ specR ∃ p ∈ R : p = pR



(2.66) Remark:

• In (2.59.(i)) we have seen that any field E is an Euclidean domain (under any ν). Hence any field is a PID due to (iii) above. But this is also clear from the fact that the only ideals of a field E are 0 and E itself. And these are principal due to 0 = 0E and E = 1E. Also (i) is trivially satisfied: the one and only prime ideal of a field is 0.

• The items (i) and (ii) in the above lemma are very useful and will yield a multitude of corollaries (e.g. the lemma of Kronecker that will give birth to field theory in book II). And (iii) provides one of the reasons why we have introduced Euclidean domains: combining (iii) and (ii) we see that any Euclidean domain is an UFD. And this yields an elegant way of proving that a certain integral domain R is an UFD. Just establish an Euclidean function ν on R and voilà, R is an UFD.

• Due to example (2.59.(ii)) we know that (Z, α) is an Euclidean domain. By (iii) above this means that Z is a PID and (ii) then implies that Z is an UFD. By definition of UFDs this means that Z allows essentially unique prime decomposition. This now is a classical result that is known as the Fundamental Theorem of Arithmetic. We have proved an even stronger version, namely

(R, ν) Euclidean domain =⇒ R is an UFD

• In the subsequent proposition we will discuss how far a PID deviates from being an Euclidean domain (though this is of little practical value). To do this we will introduce the notion of a Dedekind-Hasse norm. We will then see that an Euclidean function can be easily turned into a Dedekind-Hasse norm and that R is a PID if and only if it admits a Dedekind-Hasse norm. This of course is another way of proving that any Euclidean domain is a PID.

(2.67) Proposition: (viz. 438)

(i) If (R,+, ·) is a PID then R admits a multiplicative Dedekind-Hasse norm. That is there is a function δ : R → N satisfying the following three properties for any a, b ∈ R

(1) δ(ab) = δ(a)δ(b)

(2) δ(a) = 0 ⇐⇒ a = 0

(3) if a, b ≠ 0 then b ∈ aR or ∃ r ∈ aR + bR such that δ(r) < δ(a)

Nota to be precise we may define δ(0) := 0 and for any 0 ≠ a ∈ R we let δ(a) := 2^k for k := ℓ(a) the length of any decomposition a = αp1 . . . pk of a into prime elements.
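For R = Z this norm of the Nota can be checked directly by brute force: δ(a) = 2^ℓ(a), with ℓ(a) the number of prime factors of a counted with multiplicity. The sketch below is our own code; for property (3) it uses that aZ + bZ = gcd(a, b)Z, so if b ∉ aZ then r := gcd(a, b) is a proper divisor of a and hence has smaller norm.

```python
from math import gcd

def length(a):
    """l(a): number of prime factors of a != 0, counted with multiplicity."""
    a, k, d = abs(a), 0, 2
    while d * d <= a:
        while a % d == 0:
            a //= d
            k += 1
        d += 1
    return k + (1 if a > 1 else 0)

def delta(a):
    """The multiplicative Dedekind-Hasse norm of (2.67.(i)) on Z."""
    return 0 if a == 0 else 2 ** length(a)

sample = [x for x in range(-30, 31) if x != 0]
# (1): delta is multiplicative
assert all(delta(a * b) == delta(a) * delta(b) for a in sample for b in sample)
# (3): if b is not in aZ, then gcd(a, b) in aZ + bZ has smaller norm
assert all(b % a == 0 or delta(gcd(a, b)) < delta(a)
           for a in sample for b in sample)
```

Property (2) holds by the very definition of delta, so all three conditions of (i) are visible on this sample.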

(ii) If (R,+, ·) is an integral domain and δ is a Dedekind-Hasse norm on R (that is δ : R → N is a function satisfying (2) and (3) in (i)), then R already is a PID. In fact if 0 ≠ a ≤i R is a non-zero ideal then we get a = aR for any 0 ≠ a ∈ a satisfying

    δ(a) = min { δ(b) | 0 ≠ b ∈ a }



(iii) If (R, ν) is an Euclidean domain then we obtain a Dedekind-Hasse norm on R (that is δ : R → N satisfies (2) and (3) in (i)) by letting

    δ : R → N : a ↦ 0 if a = 0 and a ↦ ν(a) + 1 if a ≠ 0

(2.68) Lemma: (viz. 438) of Bezout
Let (R,+, ·) be an integral domain and ∅ ≠ A ⊆ R be a non-empty subset of R. Then we obtain the following statements

(i) The intersection of the ideals aR (where a ∈ A) is a principal ideal if and only if A admits a least common multiple. More precisely for any m ∈ R we obtain the following equivalency

    ⋂a∈A aR = mR ⇐⇒ m ∈ lcm(A)

(ii) If the sum of the ideals aR (where a ∈ A) is a principal ideal then A admits a greatest common divisor. More precisely for any d ∈ R we obtain the following implication

    ∑a∈A aR = dR =⇒ d ∈ gcd(A)

Nota the converse implication is untrue in general. E.g. regard the polynomial ring R = Z[t], then 1 ∈ gcd(2, t) but the ideal 2R + tR is not principal. The converse is true in PIDs however:

(iii) If R even is a PID then the sum of the ideals aR (where a ∈ A) is the principal ideal generated by the greatest common divisor of A. More precisely for any d ∈ R we obtain the following equivalency

    ∑a∈A aR = dR ⇐⇒ d ∈ gcd(A)

In particular in a PID a greatest common divisor d ∈ gcd(A) can be written as a linear combination of the elements a ∈ A

    ∃ (ba) ∈ R⊕A : d = ∑a∈A a ba

Nota in an Euclidean ring (R, ν) (supposed that A is finite) the Euclidean algorithm (2.60) provides an effective method to (recursively, using (2.54.(viii))) compute these elements ba.

(iv) Let R be a PID again and consider finitely many non-zero elements 0 ≠ a1, . . . , an ∈ R such that the ai are pairwise relatively prime (that is i ≠ j ∈ 1 . . . n =⇒ 1 ∈ gcd(ai, aj)). If now b ∈ R then there are b1, . . . , bn ∈ R such that we obtain the following equality (in the quotient field quotR of R)

    b/(a1 · · · an) = b1/a1 + · · · + bn/an
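Part (iv) is constructive over Z: coprimality gives r·a1 + s·(a2 · · · an) = 1 via the extended Euclidean algorithm, and multiplying by b/(a1 · · · an) splits off the term with denominator a1. The Python sketch below is our own (function names included), not part of the text.

```python
from fractions import Fraction

def ext_gcd(a, b):
    """Return (x, y) with x*a + y*b = gcd(a, b) (extended Euclidean algorithm)."""
    if b == 0:
        return 1, 0
    x, y = ext_gcd(b, a % b)
    return y, x - (a // b) * y

def partial_fractions(b, factors):
    """Split b/(a1...an) into [(b1, a1), ..., (bn, an)] for pairwise coprime ai,
    so that b/(a1...an) = b1/a1 + ... + bn/an -- a sketch of (2.68.(iv)) over Z."""
    if len(factors) == 1:
        return [(b, factors[0])]
    a1, rest = factors[0], factors[1:]
    m = 1
    for a in rest:
        m *= a
    r, s = ext_gcd(a1, m)             # r*a1 + s*m = 1, as gcd(a1, m) = 1
    # b/(a1*m) = b*(r*a1 + s*m)/(a1*m) = (b*s)/a1 + (b*r)/m
    return [(b * s, a1)] + partial_fractions(b * r, rest)

terms = partial_fractions(1, [3, 5, 7])
assert sum(Fraction(bi, ai) for bi, ai in terms) == Fraction(1, 105)
```

The recursion peels off one denominator at a time, exactly mirroring the inductive proof one would give for (iv).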



In an UFD R any collection ∅ ≠ A ⊆ R of elements of R has a greatest common divisor d, due to (2.54.(v)). And we have just seen that in a PID the greatest common divisor d can even be written as a linear combination d = ∑a a ba. In an Euclidean domain d and the ba can even be computed algorithmically, using the Euclidean algorithm. We now ask for the converse, that is: if any finite collection of elements of R has a greatest common divisor that is a linear combination, is R a PID already? The answer will be yes for noetherian rings, but no in general.

(2.69) Definition: (viz. 439)
Let (R,+, ·) be an integral domain, then R is said to be a Bezout domain iff it satisfies one of the following three equivalent properties

(a) Any two elements a, b ∈ R admit a greatest common divisor, that can be written as a linear combination of a and b. Formally that is

    ∀ a, b ∈ R ∃ r, s ∈ R : ra + sb ∈ gcd{a, b}

(b) For any a, b ∈ R the ideal aR + bR is principal, that is there is some d ∈ R such that we get dR = aR + bR. Formally again that is

    ∀ a, b ∈ R ∃ d ∈ R : dR = aR + bR

(c) Any finitely generated ideal a ≤i R of R is principal, formally that is

    ∀ a ≤i R : a finitely generated =⇒ a principal

(2.70) Corollary: (viz. 440)
Let (R,+, ·) be any ring, then the following statements are equivalent

(a) R is a PID

(b) R is an UFD and Bezout domain

(c) R is a noetherian ring and Bezout domain

(2.71) Example:

(i) Let R := Z + tQ[t] := { f ∈ Q[t] | f(0) ∈ Z } ⊆ Q[t] be the subring of polynomials over Q with constant term in Z. Then R is an integral domain [as it is a subring of Q[t]] having the units R∗ = {−1, 1} only [as fg = 1 implies f(0)g(0) = 1 and deg(f) = 0 = deg(g)]. However R neither is noetherian, nor an UFD, as it contains the following infinitely ascending chain of principal ideals

    tR ⊂ (1/2)tR ⊂ . . . ⊂ (1/2^n)tR ⊂ (1/2^(n+1))tR ⊂ . . .

[the inclusion is strict, as 2 is not a unit of R]. (Also the ideal tQ[t] ≤i R is not finitely generated, as else there would be some a ∈ Z such that atQ[t] ⊆ tZ[t], which is absurd). Yet it can be proved that R is a Bezout domain (see [Dummit, Foote, 9.3, exercise 5] for hints on this).



(ii) We want to present another example of a Bezout domain. Fix any field (E,+, ·) and let S := E[tk | k ∈ N] be the polynomial ring in countably many variables. Then we define the following ideal

    u := 〈 tk − t²k+1 | k ∈ N 〉i ≤i S

Then R := S/u is a Bezout domain (refer to [Dummit, Foote, 9.2, exercise 12] for hints). However it is no noetherian ring (in particular no PID), as the following ideal a of R is not finitely generated

    a := 〈 tk + u | k ∈ N 〉i ≤i R

(iii) (♦) Let us denote the integral closure of Z in C by O. That is if we denote the set of normed polynomials over Z by Z[t]1 (that is Z[t]1 := { tⁿ + a1tⁿ⁻¹ + · · · + an ∈ Z[t] | n ∈ N, ai ∈ Z }), then

    O := { z ∈ C | ∃ f ∈ Z[t]1 : f(z) = 0 }

It will be proved in book II that O is a subring of C (and in particular an integral domain, as C is a field). In fact O is a Bezout domain that satisfies specO = smaxO ∪ {0} (see [Dummit, Foote, chapter 16.3, exercise 23] for hints how to prove this). However O neither is noetherian, nor an UFD, as it contains the following infinitely ascending chain of principal ideals

    2O ⊂ 2^(1/2)O ⊂ . . . ⊂ 2^(1/2^n)O ⊂ 2^(1/2^(n+1))O ⊂ . . .

(iv) Consider ω := (1 + i√19)/2 ∈ C and consider the following subring Z[ω] := { a + bω | a, b ∈ Z } ⊆ C. Then we obtain a Dedekind-Hasse norm on Z[ω] by letting

    δ : Z[ω] → N : a + bω ↦ a² + ab + 5b²

(this is proved in [Dummit, Foote, page 282]). In particular Z[ω] is a PID by virtue of (2.67.(ii)). However Z[ω] is not an Euclidean domain (under any function ν whatsoever). The latter statement is proved in [Dummit, Foote, page 277].
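The norm in (iv) is just the squared complex absolute value: ω = (1 + i√19)/2 satisfies ω² = ω − 5 and |a + bω|² = a² + ab + 5b². Its multiplicativity (property (1) of (2.67)) can at least be sanity-checked numerically; the sketch below is our own encoding of Z[ω] as coefficient pairs.

```python
def mul(x, y):
    """Multiply a + b*w and c + d*w in Z[w], using the relation w^2 = w - 5."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c + b * d)

def delta(x):
    """delta(a + b*w) = a^2 + a*b + 5*b^2 = |a + b*w|^2, the proposed norm."""
    a, b = x
    return a * a + a * b + 5 * b * b

pairs = [(a, b) for a in range(-5, 6) for b in range(-5, 6)]
assert all(delta(mul(x, y)) == delta(x) * delta(y) for x in pairs for y in pairs)
```

Of course this finite check is no proof of the Dedekind-Hasse property (3); for that the text defers to [Dummit, Foote].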

(2.72) Remark:
By now we have studied a quite respectable list of different kinds of commutative rings, from integral domains and fields over noetherian rings to Bezout domains. Thereby we have found several inclusions, that we wish to summarize in the diagram below



    integral domains ⊇ UFDs ⊇ PIDs ⊇ Euclidean domains ⊇ fields
    integral domains ⊇ Bezout domains ⊇ PIDs
    noetherian rings ⊇ PIDs and noetherian rings ⊇ artinian rings ⊇ fields

Note however that every single inclusion in this diagram is strict. E.g. there are integral domains that are not Bezout domains and there are PIDs that are not Euclidean domains. The following table presents examples of rings R that satisfy one property, but not the next stronger one

    R                    is                           is not
    Z[√−3]               noetherian integral domain   UFD
    Z + tQ[t]            Bezout domain                noetherian, UFD
    Q[s, t]              noetherian UFD               Bezout domain
    Z[(1 + √−19)/2]      PID                          Euclidean domain
    Z                    Euclidean domain             field, artinian ring



2.7 Lasker-Noether Decomposition

(2.73) Definition: (viz. 440)Let (R,+, ·) be a commutative ring and a i R be a proper (i.e. a 6= R)ideal of R, then a is said to be a primary ideal of R, iff it satisfies one ofthe following equivalencies

(a) The radical of a contains the set of zero-divisors of R/a (as an R-module)

    zdR(R/a) ⊆ √a

(b) In the ring R/a the set of zero-divisors is contained in the nil-radical

    zd(R/a) ⊆ nil(R/a)

(c) The set of zero-divisors of R/a (as a ring) equals the nil-radical of R/a

    zd(R/a) = nil(R/a)

(d) For any two elements a, b ∈ R we obtain the following implication

    ab ∈ a, b ∉ a =⇒ a ∈ √a

(e) For any two elements a, b ∈ R we obtain the following implication

    ab ∈ a, b ∉ √a =⇒ a ∈ a

And if a is a primary ideal, then it is customary to say that a is associated to (the prime ideal) √a. Finally the set of all primary ideals of R is said to be the primary spectrum of R and we will abbreviate it by writing

    spriR := { a ≤i R | a ≠ R and a satisfies (a) }

And if p ∈ specR is a prime ideal of R, then we denote the set of all primary ideals of R associated to p by

    spripR := { a ∈ spriR | √a = p }

Nota clearly associateness is an equivalency relation on spriR. That is for any primary ideals a and b ≤i R we let a ≈ b :⇐⇒ √a = √b and thereby ≈ is an equivalency relation on spriR. Further if p = √a, then spripR is precisely the equivalency class of a under ≈.



(2.74) Remark:
In the definition above there already occurred the notion of zero-divisors of a module (which will be introduced in 3.2). Thus we shortly wish to present what this means in the context here. Let (R,+, ·) be a commutative ring and a ≤i R be an ideal. Then the set of zero-divisors of R/a as an R-module will be defined to be

    zdR(R/a) := { a ∈ R | ∃ 0 ≠ b + a ∈ R/a : a(b + a) = 0 }
              = { a ∈ R | ∃ 0 ≠ b + a ∈ R/a : (a + a)(b + a) = 0 }

because of a(b + a) := ab + a = (a + a)(b + a) (this is just the definition of the scalar multiplication of R/a as an R-module). Now have a look at the set of zero-divisors of the ring R/a

    zd(R/a) = { a + a ∈ R/a | ∃ 0 ≠ b + a ∈ R/a : (a + a)(b + a) = 0 }

We find that the condition in both of these sets coincides. And it also is clear that a is contained in zdR(R/a) [unless a = R, as in this case we have ab ∈ a for b = 1 ∈ R]. Thus we find

    zd(R/a) = zdR(R/a)/a

(2.75) Proposition: (viz. 441)
Let (R,+, ·) be a commutative ring and a ≤i R be an ideal. Then we obtain the following statements

(i) Any prime ideal of R already is primary, that is we obtain the inclusion

specR ⊆ spriR

(ii) If a is a primary ideal of R, then the radical √a is a prime ideal of R. In fact it is the uniquely determined smallest prime ideal containing a, that is we obtain the following implications

    a ∈ spriR =⇒ √a ∈ specR
    a ∈ spriR =⇒ {√a} = { p ∈ specR | a ⊆ p }∗

(iii) If m := √a is a maximal ideal of R, then a is a primary ideal of R associated to m, that is we obtain the following implication

    √a ∈ smaxR =⇒ a ∈ spriR

(iv) Suppose that m ∈ smaxR is a maximal ideal of R, such that there is some k ∈ N with mᵏ ⊆ a ⊆ m. Then a is a primary ideal of R associated to m. That is we obtain the following implication

    mᵏ ⊆ a ⊆ m =⇒ a ∈ sprimR



(v) If a1, . . . , ak ≤i R are finitely many primary ideals of R, all associated to the same prime ideal p = √ai, then their intersection is a primary ideal associated to p, too

    a1, . . . , ak ∈ spripR =⇒ a1 ∩ · · · ∩ ak ∈ spripR

(vi) If a ≤i R is a primary ideal and u ∈ R \ a, then a : u ≤i R is another primary ideal, associated to the same prime ideal, that is

    a ∈ spripR, u ∉ a =⇒ a : u ∈ spripR

(vii) Consider a homomorphism ϕ : R → S between the commutative rings (R,+, ·) and (S,+, ·). If now b ≤i S is a primary ideal of S, then a := ϕ⁻¹(b) is a primary ideal of R. And if q := √b denotes the prime ideal b is associated to, then a is associated to ϕ⁻¹(q). Formally that is

    b ∈ spriqS =⇒ ϕ⁻¹(b) ∈ spriϕ⁻¹(q)R

(viii) (♦) Let U ⊆ R be a multiplicatively closed set, p ≤i R be a prime ideal of R with p ∩ U = ∅ and denote by q := U⁻¹p ≤i U⁻¹R the corresponding prime ideal of U⁻¹R. Then we obtain a one-to-one correspondence, by

    spripR ←→ spriqU⁻¹R
    a ↦ U⁻¹a
    b ∩ R ↤ b

(2.76) Example: (viz. 442)

(i) Let (R,+, ·) be an integral domain and p ∈ R be a prime element of R. Then for any 1 ≤ n ∈ N the ideal a := pⁿR is primary and associated to the prime ideal √a = pR.

(ii) In a PID (R,+, ·) the primary ideals are precisely the ideals 0 and pⁿR for some p ∈ R prime. Formally that is the identity

    spriR = { pⁿR | 1 ≤ n ∈ N, p ∈ R prime } ∪ {0}
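In R = Z this can be tested concretely: nZ is primary iff n is a prime power (or 0). The sketch below (our own code, not part of the text) checks condition (d) of (2.73) on a finite sample, using that a ∈ √(nZ) ⇐⇒ aⁿ ≡ 0 (mod n), since all prime multiplicities in n are at most n.

```python
def is_primary(n, sample):
    """Check condition (d) of (2.73) for the proper ideal nZ (n >= 2) on a
    finite sample: ab in nZ and b not in nZ  ==>  a in rad(nZ).
    Here a in rad(nZ) iff a^n == 0 (mod n)."""
    in_radical = lambda a: pow(a, n, n) == 0
    return all(in_radical(a)
               for a in sample for b in sample
               if (a * b) % n == 0 and b % n != 0)

sample = range(1, 200)
assert is_primary(8, sample)          # 8Z = 2^3 Z is primary, as in (i)
assert is_primary(7, sample)          # prime ideals are primary
assert not is_primary(12, sample)     # 12 = 2^2 * 3 is not a prime power
```

A sample can only refute primality of an ideal, never prove it, but it illustrates why the prime-power condition is exactly right in Z.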

(iii) Primary ideals do not need to be powers of prime ideals. As an example let (E,+, ·) be any field and consider R := E[s, t]. Then the ideal a := 〈s, t²〉i is primary and associated to the maximal ideal m := 〈s, t〉i. In fact we get m² ⊂ a ⊂ m and hence there is no prime ideal p ≤i R such that a = pᵏ for some k ∈ N.

(iv) Powers of prime ideals do not need to be primary ideals. As an example let (E,+, ·) be any field again and consider R := E[s, t, u]/u where u := 〈u² − st〉i. Let us denote a := s + u, b := t + u and c := u + u. Then we obtain a prime ideal p ≤i R by letting p := 〈b, c〉i. However we will see that a := p² is no primary ideal.



(2.77) Definition:
Let (R,+, ·) be a commutative ring, then p is said to be an irreducible ideal of R, iff it satisfies the following three properties

(1) p ≤i R is an ideal

(2) p ≠ R is proper

(3) p is not the intersection of finitely many larger ideals. That is for any two ideals a, b ≤i R we obtain the implication

    p = a ∩ b =⇒ p = a or p = b

(2.78) Theorem: (viz. 443)

(i) Let (R,+, ·) be a commutative ring, then any prime ideal of R already is irreducible. That is for any ideal p ≤i R we get the implication

p prime =⇒ p irreducible

(ii) Let (R,+, ·) be a noetherian ring, then any irreducible ideal of R already is primary. That is for any ideal p ≤i R we get the implication

p irreducible =⇒ p primary

(iii) Let (R,+, ·) be a noetherian ring, then any proper ideal a ≤i R, a ≠ R admits an irreducible decomposition (p1, . . . , pk). That is there are ideals pi ≤i R (where i ∈ 1 . . . k and 1 ≤ k ∈ N) such that

(1) ∀ i ∈ 1 . . . k : pi is an irreducible ideal

(2) a = p1 ∩ · · · ∩ pk

(2.79) Definition:
Let (R,+, ·) be a commutative ring and a ≤i R be an ideal of R. Then a tuple (a1, . . . , ak) is said to be a primary decomposition of a, iff we have

(1) ∀ i ∈ 1 . . . k : ai ∈ spriR is a primary ideal of R

(2) a = a1 ∩ · · · ∩ ak

And (a1, . . . , ak) is said to be minimal (or irredundant) iff we further get

(3) ∀ j ∈ 1 . . . k : a ≠ ⋂i≠j ai

(4) ∀ i ≠ j ∈ 1 . . . k : √ai ≠ √aj

Finally if (a1, . . . , ak) is a minimal primary decomposition of a, then let us denote pi := √ai. Then we define the set of associated prime ideals and the isolated and embedded components of (a1, . . . , ak) to be the following

    ass(a1, . . . , ak) := { p1, . . . , pk }
    iso(a1, . . . , ak) := { ai | i ∈ 1 . . . k, pi ∈ ass(a1, . . . , ak)∗ }
                        = { ai | i ∈ 1 . . . k, ∀ j ∈ 1 . . . k : pj ⊆ pi =⇒ j = i }
    emb(a1, . . . , ak) := { a1, . . . , ak } \ iso(a1, . . . , ak)



Nota that is the isolated components belong to the minimal associated prime ideals. And an ai is said to be an embedded component if it is not an isolated component, that is if its prime ideal pi is not minimal. These notions have a geometric interpretation, whence the odd names come from.

(2.80) Example:
Let (R,+, ·) be an UFD and a ∈ R with 0 ≠ a ∉ R∗. Then we pick up a primary decomposition of a, that is a = p1^n1 . . . pk^nk where 1 ≤ k ∈ N and for any i, j ∈ 1 . . . k we have pi ∈ R is prime, 1 ≤ ni ∈ N and piR = pjR implies i = j. Then we obtain a minimal primary decomposition of a = aR by

    (a1, . . . , ak) where ai := pi^ni R

And for this primary decomposition we find pi = √ai = piR and furthermore

    ass(a1, . . . , ak) = { p1, . . . , pk }
    iso(a1, . . . , ak) = { a1, . . . , ak }

(2.81) Proposition: (viz. 444)
Let (R,+, ·) be a commutative ring and a ≤i R be an ideal of R. If a admits any primary decomposition, then it already admits a minimal primary decomposition. More precisely let (a1, . . . , ak) be a primary decomposition of a. Then we select a subset I ⊆ 1 . . . k such that

    #I = min { #J | J ⊆ 1 . . . k, a = ⋂j∈J aj }

Now define an equivalency relation on I by letting i ≈ j :⇐⇒ √ai = √aj. And denote the quotient set by A := I/≈. Then we have found a minimal primary decomposition (aα) (where α ∈ A) of a by letting

    aα := ⋂i∈α ai

(2.82) Proposition: (viz. 445)
Let (R,+, ·) be a commutative ring, a ≤i R be a proper (a ≠ R) ideal and consider (a1, . . . , ak) a minimal primary decomposition of a. Let us further denote pi := √ai, then we obtain the following statements

(i) Clearly the number of associated prime ideals of the decomposition equals the number of primary ideals of the decomposition. Formally

    #ass(a1, . . . , ak) = k

(ii) For any u ∈ R we obtain the following identity (where by convention the intersection over the empty set is defined to be the ring R itself)

    √(a : u) = ⋂ { pi | i ∈ 1 . . . k, u ∉ ai }



(iii) (♦) If pi is minimal among ass(a1, . . . , ak) then we can reconstruct ai from a and pi. To be precise for any i ∈ 1 . . . k we get

    pi ∈ ass(a1, . . . , ak)∗ =⇒ a : (R \ pi) = ai

(iv) Let us introduce ass(a) := specR ∩ { √(a : u) | u ∈ R }, the set of prime ideals associated to a. Then we obtain the following identity

    ass(a) = ass(a1, . . . , ak)

(v) (♦) Let us introduce iso(a) := { a : (R \ p) | p ∈ ass(a)∗ }, the set of isolated components of a. Then we obtain the following identity

    iso(a) = iso(a1, . . . , ak)

(2.83) Remark: (♦)
Let (R,+, ·) be a commutative ring. In section 6.1 we will introduce the notion of a prime ideal associated to an R-module M. Thus if a ≤i R is an ideal, then it makes sense to regard the prime ideals associated to a as an R-module. And by definition this is just

    sassR(a) = specR ∩ { ann(b) | b ∈ a } = specR ∩ { 0 : b | b ∈ a }

(where the equality is due to ann(b) = 0 : b for any b ∈ R). Thus we find that sassR(a) has absolutely nothing to do with the set ass(a) that has been defined in the proposition above

    ass(a) = specR ∩ { √(a : u) | u ∈ R }

However it will turn out that the set ass(a) is closely related to the set of associated prime ideals of the R-module R/a. By definition this is

    sassR(R/a) = specR ∩ { ann(x) | x ∈ R/a }
               = specR ∩ { ann(b + a) | b ∈ R } = specR ∩ { a : b | b ∈ R }

(the latter equality is due to ann(b + a) = a : b for any b ∈ R again). Thus the difference between this set and ass(a) lies in the fact that ass(a) allows the taking of radicals. Hence it is easy to see that sassR(R/a) ⊆ ass(a). In section 6.1 we will prove that for noetherian rings R we even have

    sassR(R/a) = ass(a)



(2.84) Corollary: (viz. 446) Lasker-Noether
Let (R,+, ·) be a noetherian ring, then any proper ideal a ≤i R, a ≠ R admits a minimal primary decomposition. And in this decomposition the associated prime ideals and the isolated components are uniquely determined by a. Formally that is

(1) There are ideals ai ≤i R (where i ∈ 1 . . . k and 1 ≤ k ∈ N) such that (a1, . . . , ak) is a minimal primary decomposition of a.

(2) If (a1, . . . , ak) and (b1, . . . , bl) are any two minimal primary decompositions of a then we obtain the following three identities

k = l

ass(a1, . . . , ak) = ass(b1, . . . , bl)

iso(a1, . . . , ak) = iso(b1, . . . , bl)

(3) In particular if a = √a is a radical ideal, then the minimal primary decomposition (p1, . . . , pk) of a is uniquely determined (up to ordering) and consists precisely of the prime ideals minimal over a, formally

    { p1, . . . , pk } = { p ∈ specR | a ⊆ p }∗

(2.85) Example:
Fix any field (E,+, ·) and let R := E[s, t] be the polynomial ring over E in s and t (note that R is noetherian due to Hilbert's basis theorem). Now consider the ideal a := 〈s², st〉i. Then we obtain two distinct primary decompositions (a1, a2) and (b1, b2) of a by letting

            ai               bi           pi
    i = 1   〈s〉i            〈s〉i         〈s〉i
    i = 2   〈s², st, t²〉i   〈s², t〉i     〈s, t〉i

where pi := √ai = √bi. It is no coincidence that both primary decompositions have 2 elements and that the radicals pi are equal. Further p1 ⊂ p2, that is a1 = b1 are isolated components of a and a2 ≠ b2 are embedded components (of the decompositions). In fact this example demonstrates that the embedded components may be different.

    ass(a) = { sR, sR + tR }    iso(a) = { sR }
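All the ideals of this example are monomial ideals, so equality of the ideals can be checked on monomials alone: s^i t^j lies in a monomial ideal iff it is divisible by one of the generators. The following sketch (our own encoding of exponent pairs, not part of the text) confirms a = a1 ∩ a2 = b1 ∩ b2 on a grid of exponents.

```python
# membership of the monomial s^i t^j in each monomial ideal of (2.85)
a  = lambda i, j: i >= 2 or (i >= 1 and j >= 1)            # <s^2, s t>
a1 = lambda i, j: i >= 1                                   # <s> = b1
a2 = lambda i, j: i >= 2 or (i >= 1 and j >= 1) or j >= 2  # <s^2, s t, t^2>
b2 = lambda i, j: i >= 2 or j >= 1                         # <s^2, t>

grid = [(i, j) for i in range(6) for j in range(6)]
assert all(a(i, j) == (a1(i, j) and a2(i, j)) for i, j in grid)  # a = a1 ∩ a2
assert all(a(i, j) == (a1(i, j) and b2(i, j)) for i, j in grid)  # a = b1 ∩ b2
```

The grid is finite, but since every generator has degree at most 2 in each variable, the pattern stabilises immediately.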

(2.86) Corollary: (viz. 448)
Let (R,+, ·) be a noetherian ring and a be a proper ideal of R, that is R ≠ a ≤i R. If now q ≤i R is any prime ideal of R then we obtain

(i) a is contained in q iff q contains an associated prime ideal of a iff q contains an isolated component of a. Formally that is

a ⊆ q ⇐⇒ ∃ p ∈ ass(a) : p ⊆ q

⇐⇒ ∃ i ∈ iso(a) : i ⊆ q



(ii) The set of primes lying minimally over a is precisely the set of primes belonging to the isolated components of a. Formally again

    { q ∈ specR | a ⊆ q }∗ = ass(a)∗

(iii) And thereby the radical of a is precisely the intersection of all (minimal) associated prime ideals of a. Put formally that is

    √a = ⋂ ass(a) = ⋂ ass(a)∗



2.8 Finite Rings

(2.87) Definition: (viz. 448)
Let (R,+, ·) be any ring, denote its zero element by 0R and its unit element by 1R. For any 1 ≤ k ∈ N let us denote kR := k1R = 1R + · · · + 1R (k times). Then there exists a uniquely determined homomorphism of rings ζ from Z to R, and this is given to be

    ζ : Z → R : k ↦ kR if k > 0, 0R if k = 0, −(−k)R if k < 0

The image of ζ in R is said to be the prime ring of R and denoted by prrR. In fact it precisely is the intersection of all subrings of R. Formally

    prrR := im(ζ) = ⋂ { P | P ≤r R }

And the kernel of ζ is an ideal in Z. Hence (as Z is a PID) there is a uniquely determined n ∈ N such that kn(ζ) = nZ. This number n is said to be the characteristic of R, denoted by

    charR := n where kn(ζ) = nZ

That is if R has characteristic zero n = 0, then we get kR = 0 =⇒ k = 0. And if R has non-zero characteristic n ≠ 0, then n is precisely the smallest number such that nR = 0. Formally that is

    charR = min { 1 ≤ k ∈ N | kR = 0 }
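This characterisation of charR as the smallest k with kR = 0 translates directly into a search procedure for concrete rings. The Python sketch below is our own (the function and its parameters are not part of the text); a ring is given by its unit, zero and addition.

```python
def characteristic(one, zero, add, limit=10_000):
    """Return char R = min{1 <= k | k * 1_R = 0_R}, searching up to `limit`;
    0 means no such k was found (characteristic zero, or limit too small).
    For a finite ring the search always terminates below #R + 1."""
    acc, k = one, 1
    while acc != zero:
        if k >= limit:
            return 0
        acc = add(acc, one)
        k += 1
    return k

# Z/12 has characteristic 12; Z/6 x Z/4 has characteristic lcm(6, 4) = 12
assert characteristic(1, 0, lambda x, y: (x + y) % 12) == 12
add_pair = lambda x, y: ((x[0] + y[0]) % 6, (x[1] + y[1]) % 4)
assert characteristic((1, 1), (0, 0), add_pair) == 12
```

The second example is consistent with (2.89.(v)) below: 12 is non-zero and divides #(Z6 ⊕ Z4) = 24.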

(2.88) Remark:
If (F,+, ·) even is a field, we may also introduce the prime field of F (denoted by prfF) to be the intersection of all subfields of F. Formally

    prfF := ⋂ { E | E ≤f F }

And it is easy to see that prfF is precisely the quotient field of the prime ring prrF. Formally that is the following identity

    prfF = { ab⁻¹ | a, b ∈ prrF, b ≠ 0 }

Prob as E ≤f F implies E ≤r F (by definition) we have prrF ⊆ prfF. Let us now denote the quotient field of prrF by Q ⊆ F. As prfF is a field and prrF ⊆ prfF it is clear that Q ⊆ prfF. On the other hand Q ≤f F is a subfield and hence we have prfF ⊆ Q by construction. Together this means prfF = Q as claimed.



(2.89) Proposition: (viz. 449)
Let (R,+, ·) be any ring, then the characteristic of R satisfies the following properties

(i) If R ≠ 0 is a non-zero integral domain, then the characteristic of R is prime or zero. Formally that is the following implication

    R ≠ 0 integral domain =⇒ charR = 0 or prime

(ii) R has characteristic zero if and only if the prime ring of R is isomorphic to Z. More precisely we find the following equivalencies

charR = 0 ⇐⇒ ∀ k ∈ N : kR = 0 =⇒ k = 0

⇐⇒ Z ∼=r prrR : k 7→ ζ(k)

(iii) R has non-zero characteristic if and only if the prime ring of R is isomorphic to Zn (where n = charR). More precisely for any n ∈ N we find the equivalency of the following statements

    charR = n ≠ 0 ⇐⇒ n = min { 1 ≤ k ∈ N | kR = 0 }
                  ⇐⇒ n ≠ 0 and Zn ≅r prrR : k + nZ ↦ ζ(k)

(iv) Now suppose that R is a field, then the prime field of R is given to be

charR = 0 ⇐⇒ Q ∼=r prfR

charR = n ≠ 0 ⇐⇒ Zn ≅r prfR

(v) Let (R,+, ·) be a finite ring (that is #R < ∞), then the characteristic of R is non-zero and divides the number of elements of R, formally

    0 ≠ charR | #R

(2.90) Proposition: (viz. 450)

(i) Let (R,+, ·) be a finite (that is #R < ∞), commutative ring. Then the non-zero-divisors of R already are invertible, that is

R∗ = R \ zdR

(ii) Let (S,+, ·) be an integral domain and consider a prime element p ∈ S.Let further 1 ≤ n ∈ N and denote the quotient ring R := S/pnS.Then the set of zero-divisors of R is precisely the ideal generated bythe residue class of p. Formally that is

zdR = (p + p^nS)R
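For S = Z, p = 2 and n = 3 this says that the zero divisors of R = Z/8Z form exactly the ideal generated by the residue class of 2. A brute-force check (illustrative Python, counting 0 among the zero divisors as the statement does):

```python
n = 8  # R = S/p^n S with S = Z, p = 2, n = 3

# elements a with a*b = 0 for some b in R, b ranging over 1..n-1 (includes a = 0)
zd = {a for a in range(n) if any(a * b % n == 0 for b in range(1, n))}
# the ideal generated by the residue class of p = 2
ideal_p = {2 * r % n for r in range(n)}

assert zd == ideal_p == {0, 2, 4, 6}
```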


(iii) Now let (S,+, ·) be a PID, consider a prime element p ∈ S and let R := S/p^nS again. Then we obtain the following statements

nilR = zdR = R \ R∗ = (p + p^nS)R

specR = smaxR = { nilR }

(iv) Let (R,+, ·) be any noetherian (in particular commutative) ring and fix any 1 ≤ m ∈ N. Then there only are finitely many prime ideals p of R such that R/p has at most m elements. Formally

#{ p ∈ specR | #R/p ≤ m } < ∞

(2.91) Remark: (♦)
As in (iii) let (S,+, ·) be a PID, p ∈ S be a prime element and define R := S/p^nS again. Then we have seen that nil(R) = R \ R∗ and we will soon call a ring with such a property a local ring. To be precise: R is a local ring with maximal ideal nil(R). Further - in a fancier language - the property specR = smaxR can be reformulated as "R is a zero-dimensional ring".

(2.92) Theorem: (viz. 453) of Wedderburn
Let (F,+, ·) be a ring with finitely many elements only (formally #F < ∞), then the following three statements are equivalent

(a) F is a field

(b) F is a skew-field

(c) F is an integral domain
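For the commutative rings Z/nZ the equivalence of (a) and (c) can be verified exhaustively: both properties hold precisely when n is prime. A small brute-force sketch (helper names are ad hoc):

```python
def is_field(n):
    # Z/nZ is a field iff every nonzero residue has a multiplicative inverse
    return all(any(a * b % n == 1 for b in range(n)) for a in range(1, n))

def is_domain(n):
    # Z/nZ is an integral domain iff it has no nonzero zero divisors
    return all(a * b % n != 0 for a in range(1, n) for b in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, n))

for n in range(2, 40):
    assert is_field(n) == is_domain(n) == is_prime(n)
```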


2.9 Localisation

Localisation is one of the most powerful and fundamental tools in commutative algebra - and luckily one of the most simple ones. The general idea of localisation is to start with a commutative ring (R,+, ·), to designate a sufficient subset U ⊆ R and to enlarge R to the ring U−1R in which the elements u ∈ U ⊆ R ⊆ U−1R now are units. And it will turn out that in doing this the ideal structure of U−1R will be easier than that of R. And conversely we can restore information about R by regarding sufficiently many localisations of R (this will be the local-global-principle). These two properties give the method of localising its punch. In fact commutative algebra has been most successful with problems that allowed localisation. So let us finally begin with the fundamental definitions:

(2.93) Definition:
Let (R,+, ·) be any commutative ring, then a subset U ⊆ R is said to be multiplicatively closed iff it satisfies the following two properties

1 ∈ U

u, v ∈ U =⇒ uv ∈ U

And U ⊆ R is said to be a saturated multiplicative system, iff we even get

1 ∈ U

u, v ∈ U =⇒ uv ∈ U

uv ∈ U =⇒ u ∈ U

(2.94) Proposition: (viz. 455)

(i) In a commutative ring (R,+, ·) the set R∗ of units and the set nzdR of non-zero-divisors of R both are saturated multiplicatively closed sets.

(ii) If U ⊆ R is a saturated, multiplicatively closed set, then R∗ ⊆ U, as for any u ∈ R∗ we get uu−1 = 1 ∈ U and hence u ∈ U.

(iii) If U, V ⊆ R are multiplicatively closed sets in the commutative ring R, then UV := { uv | u ∈ U, v ∈ V } is multiplicatively closed, too.

(iv) If a ⊴i R is an ideal, then 1 + a ⊆ R is a multiplicatively closed subset.

(v) Let ϕ : R → S be a ring-homomorphism between the commutative rings R and S and consider U ⊆ R and V ⊆ S. Then we get

U mult. closed =⇒ ϕ(U) mult. closed

V mult. closed =⇒ ϕ−1(V ) mult. closed

V saturated =⇒ ϕ−1(V ) saturated

(vi) If U ⊆ P(R) is a chain (under ⊆ ) of multiplicatively closed subsets of R, then the union ⋃U ⊆ R is a multiplicatively closed set again.


(vii) If U ⊆ P(R) is a non-empty (U ≠ ∅) family of [saturated] multiplicatively closed subsets of R, then the intersection ⋂U ⊆ R is a [saturated] multiplicatively closed set, too.

Nota this allows to define the multiplicative closure of a subset A of R to be ⋂{ U ⊆ R | A ⊆ U, U multiplicatively closed }. Likewise we may define the saturated multiplicative closure of A to be the intersection of all saturated multiplicatively closed subsets containing A.

(viii) Consider any u ∈ R, then the set U := { u^k | k ∈ N } clearly is a multiplicatively closed set. In fact it is the smallest multiplicatively closed set containing u. That is U is the multiplicative closure of u.

(ix) Let U ⊆ R be a multiplicatively closed subset of R, then the saturated multiplicative closure Ū of U can be given explicitly to be the set

Ū = { a ∈ R | ∃ b ∈ R, ∃ u, v ∈ U : uab = uv }

(2.95) Proposition: (viz. 456)
Let (R,+, ·) be any commutative ring and U ⊆ R be a subset. Then U is saturated multiplicatively closed, iff it is the complement of a union of prime ideals of R. Formally that is the equivalence of

(a) ∃ P ⊆ specR such that U = R \ ⋃P

(b) U ⊆ R is a saturated multiplicatively closed subset
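In a finite ring both sides of this equivalence can be enumerated. The sketch below does so for R = Z/12Z, whose prime ideals are the residues of 2Z and 3Z (a brute-force illustration with ad-hoc names, not part of the text's development):

```python
from itertools import combinations

n = 12
R = set(range(n))

def is_sat_mult(U):
    # multiplicatively closed: 1 in U and U closed under products;
    # saturated: uv in U forces u in U and v in U
    if 1 not in U:
        return False
    closed = all((u * v) % n in U for u in U for v in U)
    saturated = all(u in U and v in U
                    for u in range(n) for v in range(n) if (u * v) % n in U)
    return closed and saturated

# the prime ideals of Z/12Z
primes = [frozenset(range(0, n, 2)), frozenset(range(0, n, 3))]

# complements of unions of (possibly zero) prime ideals
complements = set()
for k in range(len(primes) + 1):
    for combo in combinations(primes, k):
        union = set().union(*combo) if combo else set()
        complements.add(frozenset(R - union))

# all saturated multiplicatively closed subsets, found by brute force
saturated_sets = {frozenset(U) for r in range(1, n + 1)
                  for U in combinations(sorted(R), r) if is_sat_mult(U)}

assert saturated_sets == complements  # the equivalence (a) <=> (b) for Z/12Z
```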

(2.96) Definition: (viz. 457)
Let (R,+, ·) be any commutative ring and U ⊆ R be a multiplicatively closed subset of R. Then we obtain an equivalence relation ∼ on the set U × R by virtue of (with a, b ∈ R and u, v ∈ U)

(u, a) ∼ (v, b) :⇐⇒ ∃ w ∈ U : vwa = uwb

And we denote the quotient of U × R modulo ∼ by U−1R and the equivalence class of (u, a) is denoted by a/u, formally this is

a/u := { (v, b) ∈ U × R | (u, a) ∼ (v, b) }

U−1R := (U × R)/∼

Now U−1R is a commutative ring under the following algebraic operations

a/u + b/v := (av + bu)/uv

(a/u) · (b/v) := ab/uv

It is clear that under these operations the zero-element of U−1R is given to be 0/1 and the unit element of U−1R is given to be 1/1. And we obtain a canonical ring homomorphism from R to U−1R by letting

κ : R → U−1R : a ↦ a/1


In general this canonical homomorphism need not be an embedding (i.e. injective). Yet we do obtain the following equivalences

κ injective ⇐⇒ U ⊆ nzdR

κ bijective ⇐⇒ U ⊆ R∗
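The whole construction can be carried out verbatim in a finite ring. The sketch below (ad-hoc Python) localises R = Z/6Z at U = {1, 2, 4}: the result has exactly three elements, and κ fails to be injective since 2 · 3 = 0 forces 3/1 = 0/1 — in accordance with U not being contained in nzdR:

```python
n = 6
U = [1, 2, 4]  # the powers of 2 mod 6: a multiplicatively closed set

def equiv(p, q):
    # (u, a) ~ (v, b)  iff  v*w*a = u*w*b for some w in U (computed in Z/6Z)
    (u, a), (v, b) = p, q
    return any((v * w * a - u * w * b) % n == 0 for w in U)

# partition U x R into equivalence classes: these are the elements of U^-1 R
classes = []
for pair in [(u, a) for u in U for a in range(n)]:
    for cls in classes:
        if equiv(pair, cls[0]):
            cls.append(pair)
            break
    else:
        classes.append([pair])

assert len(classes) == 3      # U^-1(Z/6Z) has three elements
assert equiv((1, 3), (1, 0))  # kappa(3) = 3/1 = 0/1, so kappa is not injective
```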

(2.97) Remark:

• Though the equivalence relation ∼ in the above definition might look a little artificial at first sight nothing here is mythical or mere chance. To see this let us first consider the case of an integral domain (R,+, ·). Then we may divide by w and hence we get

a/u = b/v ⇐⇒ va = ub

and this is just what we expect, if we multiply the left-hand equation by uv. To allow a common factor w is just the right way to deal with zero divisors in R. For convenience we repeat the defining property of the quotient a/u for general commutative rings (R,+, ·)

a/u = b/v ⇐⇒ ∃ w ∈ U : vwa = uwb

• And from this equivalence it is immediately clear that fractions a/u in U−1R can be reduced by common factors v of U. That is for any elements a ∈ R and u, v ∈ U we have the equality

av/uv = a/u

And this allows to express sums by going to a common denominator. That is consider a1, . . . , an ∈ R and u1, . . . , un ∈ U. Then we denote u := u1 . . . un ∈ U and ūi := u/ui ∈ U (e.g. ū1 = u2 . . . un). Then we have just remarked that ai/ui = (aiūi)/u and thereby

a1/u1 + · · · + an/un = (a1ū1 + · · · + anūn)/u

• This equivalence also yields that 1/1 = 0/1 holds true in U−1R if and only if the multiplicatively closed set contains zero 0 ∈ U. But as 0/1 is the zero-element and 1/1 is the unit element of U−1R we thereby found another equivalence

U−1R = 0 ⇐⇒ 0 ∈ U

• As we have intended, localising turns the elements u ∈ U into units. More precisely if U ⊆ R is multiplicatively closed, then we obtain

(U−1R)∗ ⊇ { au/v | a ∈ R∗, u, v ∈ U }


Yet the equality need not hold true! As an example consider R = Z and U = { 6^k | k ∈ N } = { 1, 6, 36, . . . }. Then 2/1 is invertible, with inverse (2/1)−1 = 3/6, yet 2 is not of the form R∗U = { ±6^k | k ∈ N }.

Prob consider any au/v such that a ∈ R∗ and u, v ∈ U, then we get (au/v)(a−1v/u) = (aa−1uv)/(uv) = (uv)/(uv) = 1/1 and hence au/v is a unit, with inverse (au/v)−1 = a−1v/u.

• If U ⊆ R even is a saturated multiplicatively closed subset, then we even obtain a complete description of the units of U−1R to be

(U−1R)∗ = { u/v | u, v ∈ U }

Prob clearly u/v is a unit of U−1R, since (u/v)−1 = v/u. Conversely if b/v is a unit of U−1R there is some a/u ∈ U−1R such that ab/uv = (a/u)(b/v) = 1/1. That is there is some w ∈ U such that abw = uvw. Hence b(aw) = uvw ∈ U and as U is saturated this implies b ∈ U.

• Let (R,+, ·) be a commutative ring and U ⊆ R be a multiplicatively closed set, such that U−1R is finitely generated as an R-module. That is we suppose there are b1, . . . , bn ∈ R and v1, . . . , vn ∈ U such that

U−1R = { a1b1/v1 + · · · + anbn/vn | a1, . . . , an ∈ R }

Then the canonical mapping κ : R → U−1R : b ↦ b/1 is surjective. Prob consider any a/u ∈ U−1R and denote v := v1 . . . vn ∈ U and v̄i := v/vi ∈ U (e.g. v̄1 = v2 . . . vn). As uv ∈ U and U−1R is generated by the bi/vi there are a1, . . . , an ∈ R such that

a/uv = a1b1/v1 + · · · + anbn/vn = (a1b1v̄1 + · · · + anbnv̄n)/v

That is there is some w ∈ U such that vwa = uvwb where we let b := a1b1v̄1 + · · · + anbnv̄n ∈ R. That is we have found (vw)1a = (vw)ub, thus by definition we have κ(b) = b/1 = a/u. And as a/u has been arbitrary this is the surjectivity of κ.

(2.98) Example:
Let now (R,+, ·) be any commutative ring. Then we present three examples that are of utmost importance. Hence the reader is asked to regard these examples as definitions of the objects Ru, quotR and Rp.

• If R is an integral domain and U ⊆ R is a multiplicatively closed set, then the construction of U−1R can be simplified somewhat. First let B := (R \ { 0 }) × R, then we obtain an equivalence relation on B by letting (u, a) ∼ (v, b) :⇐⇒ av = bu. Let us denote the equivalence class of (u, a) by a/u := [(u, a)]. Then we have regained the quotient field, that has already been introduced in section 1.4

quotR = { a/b | a, b ∈ R, b ≠ 0 }


Note that this is precisely the construction that is used to obtain Q from Z. If now U ⊆ R is an arbitrary multiplicatively closed set satisfying 0 ∉ U, then the localisation of R in U is canonically isomorphic to the following subring of the quotient field:

U−1R ∼=r { a/u ∈ quotR | a ∈ R, u ∈ U } : a/u ↦ a/u

Prob we will first prove the well-definedness and injectivity: by definition a/u = b/v in U−1R is equivalent to w(av − bu) = 0 for some w ∈ U. But as 0 ∉ U we get w ≠ 0 and hence av = bu, as R is an integral domain. This again is just a/u = b/v in quotR. The surjectivity of the map is obvious and it clearly is a homomorphism, as the algebraic operations are literally the same.

• Let us generalize the previous construction a little: it is clear that the set U := nzdR of non-zero divisors of R is a saturated multiplicatively closed set. Thus we may define the total ring of fractions of a commutative ring R to be the localisation in this set

quotR := (nzdR)−1R

Note that in case of an integral domain we have nzdR = R \ { 0 } and hence we obtain precisely the same ring as before. Further note that by the very nature of U = nzdR the canonical homomorphism κ is injective. That is we can always regard R as a subring in quotR

R → quotR : a ↦ a/1 is injective

And as nzdR is saturated we can give the units of quotR explicitly

(quotR)∗ = { u/v | u, v ∈ nzdR }

In particular we find that quotR is a field if and only if nzdR = R \ { 0 } and this is precisely the property of R being an integral domain. That is we obtain the equivalence

R integral domain ⇐⇒ quotR field

• Consider some field (F,+, ·) and a subring R ≤r F. In particular R ≠ 0 is a non-zero integral domain and hence quotR is a field. Let us now denote the subfield of F generated by R by E, explicitly

E := { ab−1 | a, b ∈ R, b ≠ 0 }

Then it is straightforward to check that the algebraic operations on the quotient field quotR coincide with those of the field E. That is we obtain a well-defined isomorphism of rings

quotR ∼=r E : a/b ↦ ab−1


• Consider any u ∈ R, then clearly U := { 1, u, u^2, u^3, . . . } is multiplicatively closed. Hence we obtain a commutative ring by

Ru := { u^k | k ∈ N }−1 R

in which u = u/1 is invertible (with inverse 1/u). By the remarks above it is clear that Ru = 0 is zero if and only if u is a nilpotent of R. And R is embedded into Ru under κ if and only if u is a non-zero divisor of R, respectively isomorphic if u is a unit. Altogether

Ru = 0 ⇐⇒ u ∈ nilR

κ : R ↪ Ru ⇐⇒ u ∈ nzdR

κ : R ∼=r Ru ⇐⇒ u ∈ R∗
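Two of these cases can be watched concretely in R = Z/8Z, reusing the defining equivalence relation of (2.96): u = 2 is nilpotent, so R2 = 0, while u = 3 is a unit, so R3 has as many elements as R. An illustrative sketch (ad-hoc names):

```python
n = 8

def localisation_size(u):
    # U = {1, u, u^2, ...} as residues mod n
    U, x = [], 1
    while x not in U:
        U.append(x)
        x = (x * u) % n

    def equiv(p, q):
        # (s, a) ~ (t, b)  iff  t*w*a = s*w*b for some w in U (mod n)
        (s, a), (t, b) = p, q
        return any((t * w * a - s * w * b) % n == 0 for w in U)

    reps = []  # one representative per equivalence class
    for pair in [(v, a) for v in U for a in range(n)]:
        if not any(equiv(pair, r) for r in reps):
            reps.append(pair)
    return len(reps)

assert localisation_size(2) == 1  # 2 is nilpotent in Z/8Z, hence R_2 = 0
assert localisation_size(3) == 8  # 3 is a unit in Z/8Z, hence R_3 = R
```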

• In (2.9) and (in a more general version) in (2.95) we have seen, that the complement R \ p of a prime ideal p ⊴i R is multiplicatively closed. Thus for prime ideals p we may always define the localisation

Rp := (R \ p)−1R

And by going to complements it is clear that R is embedded into Rp under κ, iff the set of zero divisors of R is contained in p

κ : R ↪ Rp ⇐⇒ zdR ⊆ p

In any case - as R \ p is a saturated multiplicatively closed subset - we can explicitly describe the group of units of Rp to be the following

(Rp)∗ = { u/v | u, v ∉ p }

• If (R,+, ·) is an integral domain and p ⊴i R is a prime ideal of R then we get an easy alternative description of the localised ring Rp: let E := quotR be the quotient field of R, then we denote:

Ep := { a/b ∈ E | b ∉ p }

And thereby we obtain a canonical isomorphy from the localised ring Rp to Ep by virtue of a/b ↦ a/b. (One might hence be tempted to say Rp = Ep, but - from a set-theoretical point of view - this is not quite true, as the equivalence class a/b ∈ Rp only is a subset of a/b ∈ Ep)

Rp ∼=r Ep : a/b ↦ a/b

Note that under this isomorphy the maximal ideal mp of Rp induced by p (that is mp := pRp = (R \ p)−1p) corresponds to

mp ←→ { a/b ∈ E | a ∈ p, b ∉ p }


(2.99) Lemma: (viz. 458) Local-Global-Principle
Let (R,+, ·) be any commutative ring and a, b ∈ R, then equivalent are

a = b ⇐⇒ ∀ p ∈ specR : a/1 = b/1 ∈ Rp

⇐⇒ ∀ m ∈ smaxR : a/1 = b/1 ∈ Rm

Further if R is an integral domain and F := quotR is its quotient field, then we can embed R into F, as R = { a/1 ∈ F | a ∈ R }. Likewise if p ⊴i R is a prime ideal then we embed the localisation into F as well, as Rp = { a/u ∈ F | a ∈ R, u ∉ p }. And thereby we obtain the following identities (as subsets of F)

R = ⋂p∈specR Rp = ⋂m∈smaxR Rm

(2.100) Proposition: (viz. 459) Universal Property
Let (R,+, ·) and (S,+, ·) be commutative rings, U ⊆ R be a multiplicatively closed subset and ϕ : R → S be a homomorphism of rings, such that ϕ(U) ⊆ S∗. Then there is a uniquely determined homomorphism of rings ϕ̄ : U−1R → S such that ϕ̄κ = ϕ. And this is given to be

ϕ̄ : U−1R → S : a/u ↦ ϕ(a)ϕ(u)−1

(2.101) Remark:
Let (R,+, ·) be a commutative ring and U, V ⊆ R be multiplicatively closed sets, such that U ⊆ V. Further consider the canonical homomorphism κ : R → V−1R : a ↦ a/1. Then it is clear that κ(U) ⊆ (V−1R)∗, as the inverse of u/1 is given to be 1/u. Thus by the above proposition we obtain an induced homomorphism

κ̄ : U−1R → V−1R : a/u ↦ a/u

Thereby κ̄(a/u) = 0/1 iff there is some v ∈ V such that va = 0. And this is precisely the same property for κ(a) = 0/1. Hence the kernel of κ̄ is just

kn(κ̄) = U−1(kn(κ)) = { a/u | a ∈ kn(κ), u ∈ U }

(2.102) Definition: (viz. 459)
Let (R,+, ·) be a commutative ring, U ⊆ R be a multiplicatively closed set and denote the canonical homomorphism by κ : R → U−1R : a ↦ a/1 again. If now a ⊴i R and u ⊴i U−1R are ideals then we define the transferred ideals u ∩ R ⊴i R and U−1a ⊴i U−1R to be the following

u ∩ R := κ−1(u) = { a ∈ R | a/1 ∈ u }

U−1a := 〈κ(a)〉i = { a/u | a ∈ a, u ∈ U }


(2.103) Example:
Let (R,+, ·) be any commutative ring, a ∈ R and U ⊆ R be multiplicatively closed. Then an easy, straightforward computation shows

U−1(aR) = (a/1)(U−1R)

(2.104) Definition: (viz. 460)
Let (R,+, ·) be a commutative ring, U ⊆ R be a multiplicatively closed set and a ⊴i R be an ideal of R. Then we obtain another ideal a : U ⊴i R by

a : U := { b ∈ R | ∃ v ∈ U : vb ∈ a }

And clearly a : U ⊴i R thereby is an ideal of R containing a ⊆ a : U and using the notation of (2.102) we finally obtain the equality

(U−1a) ∩ R = a : U
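In R = Z with U = { 1, 2, 4, 8, . . . } the ideal a : U simply strips the 2-part of the generator, e.g. 12Z : U = 3Z. The membership test "vb ∈ a for some v ∈ U" can be checked directly over a range of integers (an illustrative sketch, helper name is ad hoc):

```python
def in_colon(b, a_gen, U):
    # b lies in (a_gen*Z : U)  iff  v*b is a multiple of a_gen for some v in U
    return any((v * b) % a_gen == 0 for v in U)

U = [2 ** k for k in range(8)]  # enough powers of 2 for this range
for b in range(-100, 101):
    # 12Z : U = 3Z, since the powers of 2 absorb the factor 4 of 12
    assert in_colon(b, 12, U) == (b % 3 == 0)
```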

(2.105) Proposition: (viz. 460)
Let (R,+, ·) be a commutative ring and U ⊆ R be a multiplicatively closed subset. Further let a, b ⊴i R be any two ideals of R, then we obtain

U−1(a ∩ b) = (U−1a) ∩ (U−1b)

U−1(a + b) = (U−1a) + (U−1b)

U−1(a b) = (U−1a)(U−1b)

U−1√a = √(U−1a)

(a : U) : U = a : U

(a ∩ b) : U = (a : U) ∩ (b : U)

(√a) : U = √(a : U)

Conversely consider any two ideals u, w ⊴i U−1R and abbreviate a := u ∩ R and b := w ∩ R ⊴i R. Then we likewise get the identities

(u ∩ w) ∩ R = a ∩ b

(u + w) ∩ R = (a + b) : U

(u w) ∩ R = (a b) : U

(√u) ∩ R = √a


(2.106) Example:
The equality (a + b) : U = (a : U) + (b : U) need not hold true. As an example consider R = Z, U := { 1, 2, 4, 8, . . . }, a = 9Z and b = 15Z. Then it is clear that a : U = a and b : U = b such that (a : U) + (b : U) = 3Z (as 3 is the greatest common divisor of 9 and 15). Yet 13 · 2 = 26 = 9 + 15. Thus 13 ∈ (a + b) : U even though 13 ∉ 3Z = (a : U) + (b : U).

(2.107) Definition: (viz. 462)
Let (R,+, ·) be a commutative ring and U ⊆ R be a multiplicatively closed set in R. Then we define the following sets of ideals in R

U−1idealR := { a ∈ idealR | a = a : U } = { a ∈ idealR | u ∈ U, ua ∈ a =⇒ a ∈ a }

U−1sradR := sradR ∩ (U−1idealR)

U−1specR := specR ∩ (U−1idealR) = { p ∈ specR | p ∩ U = ∅ }

U−1smaxR := smaxR ∩ (U−1idealR) = { m ∈ smaxR | m ∩ U = ∅ }

(2.108) Proposition: (viz. 462)
Let (R,+, ·) be a commutative ring and U ⊆ R be a multiplicatively closed set in R. Then we obtain an order-preserving one-to-one correspondence between the ideals of the localisation U−1R and U−1idealR, formally

ideal (U−1R) ←→ U−1idealR

u ↦ u ∩ R

U−1a ↤ a

• This correspondence is order-preserving, that is for any two ideals a, b ∈ U−1idealR and u, w ∈ ideal U−1R respectively, we get

a ⊆ b =⇒ U−1a ⊆ U−1b

u ⊆ w =⇒ u ∩ R ⊆ w ∩ R

• And this correspondence correlates maximal, prime and radical ideals of U−1R with U−1smaxR, U−1specR respectively with U−1sradR

ideal (U−1R) ←→ U−1idealR
∪ ∪
srad (U−1R) ←→ U−1sradR
∪ ∪
spec (U−1R) ←→ U−1specR
∪ ∪
smax (U−1R) ←→ U−1smaxR


(2.109) Example:

• Thus the ideal structure of U−1R is simpler than that of R itself. The most drastic example is the following: Z has a multitude of ideals (namely all aZ where a ∈ N, which are countably many). Yet Q = quotZ is a field and hence only has the trivial ideals 0 and Q itself.

• Let now (R,+, ·) be any commutative ring and choose any u ∈ R, then the spectrum of Ru corresponds (under the mapping u ↦ u ∩ R of the proposition above) to the following subset of the spectrum of R

spec (Ru) ←→ { p ∈ spec (R) | u ∉ p }

Prob this is clear as for prime ideals p of R we have the equivalence u ∈ p ⇐⇒ { u, u^2, u^3, . . . } ⊆ p ⇐⇒ p ∩ { u, u^2, u^3, . . . } ≠ ∅ such that the claim is immediate from (2.108).

• Let q ∈ specR be a fixed prime ideal of R, then (for any other prime ideal p of R) clearly p ∩ (R \ q) = ∅ is equivalent to p ⊆ q. Using this in the above proposition (2.108) we find the correspondence

spec (Rq) ←→ { p ∈ spec (R) | p ⊆ q }

Thus localizing in q simply preserves the prime ideals below q and deletes all the prime ideals beyond q. Note that this is just the opposite of the quotient R/q - here all the ideals below q are cut off. That is let P := { p ∈ specR | p ⊆ q or q ⊆ p } be the set of all prime ideals of R comparable with q. Then the situation can be illustrated in the following way:

[Figure: the chain P of primes comparable with q; the primes p ⊆ q survive in spec(Rq), while the primes p ⊇ q survive in spec(R/q).]

(2.110) Corollary: (viz. 466)
Let (R,+, ·) be a commutative ring and U ⊆ R be a multiplicatively closed subset with 0 ∉ U. Further let us abbreviate by ? any one of the words integral domain, noetherian ring, artinian ring, UFD, PID, DKD or normal ring. Then we get the following implication

R is a ? =⇒ U−1R is a ?


(2.111) Corollary: (viz. 468)
Let (R,+, ·) be an integral domain and denote its quotient field by E := quotR. Then the following three statements are equivalent

(a) R is normal

(b) ∀ p ∈ specR : Rp is normal

(c) ∀m ∈ smaxR : Rm is normal

(2.112) Proposition: (viz. 463)
Let (R,+, ·) be any commutative ring, then we present a couple of useful isomorphies concerning localisations: fix a multiplicatively closed set U ⊆ R and some point u ∈ R then we obtain

(i) The localisation of the polynomial ring R[t] is isomorphic to the polynomial ring over the localized ring, formally that is

U−1(R[t]) ∼=r (U−1R)[t] : f/u ↦ Σα (f[α]/u) t^α

(ii) The localized ring Ru is just a quotient of some polynomial ring of R

Ru ∼=r R[t]/(ut − 1)R[t] : a/u^k ↦ a t^k + (ut − 1)R[t], f(1/u) ↤ f + (ut − 1)R[t]

(iii) Let (R,+, ·) be an integral domain and 0 ∉ U ⊆ R be a multiplicatively closed subset of R not containing 0. Then the quotient fields of R and U−1R are isomorphic under

quotR ∼=r quot (U−1R) : a/b ↦ (a/1)/(b/1)

(iv) Let (R,+, ·) be an integral domain and 0 ≠ a, b ∈ R be two non-zero elements of R and let n ∈ N. Then we obtain the following isomorphy

(Ra)b/a^n ∼=r Rab : (x/a^i)/(b/a^n)^j ↦ x a^((n+1)j) b^i / (ab)^(i+j)

(v) Consider U ⊆ R multiplicatively closed and a ⊴i R any ideal of R such that U ∩ a = ∅. Now denote U/a := { u + a | u ∈ U }, then we obtain the following isomorphy

U−1R / U−1a ∼=r (U/a)−1 (R/a) : b/v + U−1a ↦ (b + a)/(v + a)


(vi) In particular: let p ⊴i R be a prime ideal and a ⊴i R be any ideal of R such that a ⊆ p. Then we denote ap := (R \ p)−1a and thereby obtain the following isomorphy

Rp / ap ∼=r (R/a)p/a : b/u + ap ↦ (b + a)/(u + a)

(vii) Let p ⊴i R be a prime ideal of R and denote p̄ := (R \ p)−1p. If now a ∉ p is not contained in p then we obtain the isomorphy

Rp ∼=r (Ra)p̄ : r/u ↦ (r/1)/(u/1)


2.10 Local Rings

(2.113) Definition: (viz. 471)
Let (R,+, ·) be any commutative ring, then R is said to be a local ring, iff it satisfies one of the following two equivalent properties

(a) R has precisely one maximal ideal (i.e. # smaxR = 1)

(b) the set of non-units is an ideal in R (i.e. R \ R∗ ⊴i R)

Nota and in this case the uniquely determined maximal ideal of R is precisely the set R \ R∗ = { a ∈ R | aR ≠ R } of non-units of R.

(2.114) Proposition: (viz. 471)
Let (R,+, ·) be a commutative ring and m ⊴i R be a proper ideal of R (that is m ≠ R). Then the following four statements are equivalent

(a) R is a local ring with maximal ideal m

(b) ∀ a ⊴i R : a ≠ R =⇒ a ⊆ m

(c) R \m = R∗

(d) R \m ⊆ R∗

(2.115) Example:

• If (F,+, ·) is a field, then F \ { 0 } = F∗, in particular F already is a local ring having the maximal ideal 0.

• Let (E,+, ·) be a field and consider the ring of formal power series R := E[[t]] over E. Then E[[t]] is a local ring having the maximal ideal

m := t E[[t]] = { f ∈ E[[t]] | f[0] = 0 }

To prove this it suffices to check R \ m ⊆ R∗ (due to the above proposition (2.114)). But in fact if f ∈ E[[t]] is any power series with f[0] ≠ 0 then an elementary computation shows that we can iteratively compute the inverse of f, by

f−1[0] = 1/f[0]

f−1[k] = (−1/f[0]) Σ f[k − j] f−1[j]  (sum over j = 0, . . . , k − 1)
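This recursion can be run directly on truncated coefficient sequences. Over E = Q (modelled by Fraction) inverting f = 1 − t recovers the geometric series 1 + t + t^2 + · · · (an illustrative sketch, not from the text):

```python
from fractions import Fraction

def series_inverse(f, N):
    """First N coefficients of f^-1 in E[[t]] for f[0] != 0, via
    g[0] = 1/f[0] and g[k] = (-1/f[0]) * sum_{j<k} f[k-j]*g[j]."""
    g = [Fraction(1) / f[0]]
    for k in range(1, N):
        s = sum(f[k - j] * g[j] for j in range(k) if k - j < len(f))
        g.append(-s / f[0])
    return g

f = [Fraction(1), Fraction(-1)]  # f = 1 - t
g = series_inverse(f, 6)
assert g == [Fraction(1)] * 6    # 1/(1 - t) = 1 + t + t^2 + ...

# sanity check: f * g = 1 up to order 6
prod = [sum(f[i] * g[k - i] for i in range(len(f)) if 0 <= k - i < 6)
        for k in range(6)]
assert prod == [Fraction(1)] + [Fraction(0)] * 5
```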

• Let (R,+, ·) be a commutative ring and p ⊴i R be any prime ideal of R. Then in (2.116) below we will prove that the localised ring Rp is a local ring having the maximal ideal

mp := (R \ p)−1p = { p/u | p ∈ p, u ∉ p }


• Let (S,+, ·) be a PID, 1 ≤ n ∈ N and p ∈ S be a prime element of S. Then we have proved in (2.90.(iii)) that R := S/p^nS is a local ring having the maximal ideal

nilR = zdR = R \ R∗ = (p + p^nS)R

(2.116) Proposition: (viz. 471)
Let (R,+, ·) be a commutative ring and p ⊴i R be a prime ideal. Then Rp is a local ring (i.e. a ring with precisely one maximal ideal). Thereby the maximal ideal of Rp is given to be

mp := (R \ p)−1p = { p/u | p ∈ p, u ∉ p }

And the residue field of Rp is isomorphic to the quotient field of R/p via

Rp/mp ∼=f quot (R/p) : a/u + mp ↦ (a + p)/(u + p)

(2.117) Proposition: (viz. 487) (♦)
Let (R,+, ·) be a local, noetherian ring with maximal ideal m and denote the residue field of R by E := R/m. We now regard m as an R-module, then m/m^2 becomes an E-vector space under the following scalar multiplication:

(a + m)(m + m^2) := (am) + m^2

And if we now denote by rankR(m) the minimal number k ∈ N such that there are elements m1, . . . , mk ∈ m with m = Rm1 + · · · + Rmk, then we get

dimE (m/m^2) = rankR(m)

(2.118) Definition:
Let (R,+, ·) be any non-zero (i.e. R ≠ 0), commutative ring, then a mapping ν : R → N ∪ {∞} is called a valuation (and the ordered pair (R, ν) a valued ring) if for any a, b ∈ R we obtain the following properties

(1) ν(a) = ∞ ⇐⇒ a = 0

(2) ν(ab) = ν(a) + ν(b)

(3) ν(a + b) ≥ min { ν(a), ν(b) }

And a valuation ν is said to be normed iff it further satisfies the property

(N) ∃ a ∈ R : ν(a) = 1


(2.119) Proposition: (viz. 472)

(i) Let (R, ν) be a valued ring (in particular R ≠ 0), then R and its valuation ν already satisfy the following additional properties

(U) α ∈ R∗ =⇒ ν(α) = 0

(I) R is an integral domain

(P) [ν] := { a ∈ R | ν(a) ≥ 1 } ∈ specR

(ii) If (R,+, ·) is a non-zero integral domain then R always carries the trivial valuation τ : R → N ∪ {∞} defined by τ(a) := 0 iff a ≠ 0 and τ(a) := ∞ iff a = 0. Note that this valuation is not normed.

(iii) Let (R,+, ·) be a noetherian integral domain and consider any p ∈ R prime. Then we obtain a normed valuation ν = νp on R by

ν : R → N ∪ {∞} : a ↦ a[p] where a[p] := sup { k ∈ N | p^k | a }

In particular we have ν(p) = 1 and [ν] = pR. And for any a, b ∈ R this valuation satisfies a fifth property, namely we get

ν(a) ≠ ν(b) =⇒ ν(a + b) = min { ν(a), ν(b) }
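For R = Z and p = 3 the valuation axioms (2), (3) and this fifth property can be verified on a range of integers. A brute-force sketch (the value ∞ at a = 0 is modelled by math.inf; names are ad hoc):

```python
from math import inf

def nu(a, p=3):
    # nu_p(a) = sup{ k : p^k divides a }, with nu_p(0) = infinity
    if a == 0:
        return inf
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k

nums = range(-81, 82)
for a in nums:
    for b in nums:
        assert nu(a * b) == nu(a) + nu(b)          # property (2)
        assert nu(a + b) >= min(nu(a), nu(b))      # property (3)
        if nu(a) != nu(b):                         # the fifth property
            assert nu(a + b) == min(nu(a), nu(b))
```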

(iv) Let (R,+, ·) be a PID then we obtain a one-to-one correspondence between the maximal spectrum of R and the normed valuations on R:

smaxR ←→ { ν : R → N ∪ {∞} | (1), (2), (3), (N) }

m ↦ νp where m = pR

[ν] ↤ ν

(2.120) Definition:
Let (R,+, ·) be a commutative ring, then we call ν : R → N ∪ {∞} a discrete valuation on R (and the ordered pair (R, ν) is said to be a discrete valuation ring), iff ν satisfies all of the following properties (∀ a, b ∈ R)

(1) ν(a) = ∞ ⇐⇒ a = 0

(2) ν(ab) = ν(a) + ν(b)

(3) ν(a + b) ≥ min { ν(a), ν(b) }

(4) ν(a) ≤ ν(b) =⇒ a | b

(N) ∃ m ∈ R : ν(m) = 1


(2.121) Proposition: (viz. 473)
Let (R, ν) be a discrete valuation ring, then R and its valuation ν already satisfy all of the following additional properties

(i) (R, ν) is a valued ring, in particular R is a non-zero integral domain.

(ii) R is a local ring with maximal ideal [ν] = { a ∈ R | ν(a) ≥ 1 }.

(iii) (R, ε) is an Euclidean domain under ε : R \ { 0 } → N : a ↦ ν(a).

(iv) Fix any m ∈ R with ν(m) = 1, then for any 0 ≠ a ∈ R there is a uniquely determined unit α ∈ R∗ such that a = αm^k for k = ν(a).

(v) According to (iii) and (iv) the set of ideals of R is precisely the following

idealR = { m^kR | k ∈ N } ∪ { 0 }

(vi) Let (F,+, ·) be an arbitrary field and ν : F → Z ∪ {∞} be a function satisfying the following four properties (for any x, y ∈ F)

(1) ν(x) = ∞ ⇐⇒ x = 0

(2) ν(xy) = ν(x) + ν(y)

(3) ν(x + y) ≥ min { ν(x), ν(y) }

(N) ∃ m ∈ F : ν(m) = 1

In this case ν is said to be a discrete valuation and further we obtain a subring R of F by letting R := { a ∈ F | ν(a) ≥ 0 } ≤r F. Also ν : R → N ∪ {∞} is a discrete valuation on R. Finally F is the quotient field of R, that is we get

F = { ab−1 | a, b ∈ R, b ≠ 0 }

(2.122) Example:

• We regard F = Q and any prime number p ∈ Z. Then we obtain a discrete valuation νp : Q → Z ∪ {∞} by (where a, b ∈ Z, b ≠ 0)

νp(a/b) := [a]p − [b]p where [a]p := sup { k ∈ N | p^k | a }

Thereby νp is said to be the p-adic valuation of Q. And if we denote p := pZ, then the discrete valuation ring determined by νp is just Zp.

• More generally the above construction works out for any PID R: let F be the quotient field of R and p ∈ R be any prime element. Then we may repeat the definition of νp literally (except for a, b ∈ R) and thereby obtain a discrete valuation νp : F → Z ∪ {∞}. And the discrete valuation ring determined by νp is Rp (where p := pR) again.


• The prototype of a discrete valuation ring is the ring of formal power series E[[t]] over a field E. And in this case we obtain a discrete valuation ν (note that [ν] = tE[[t]]), by

ν : E[[t]] → Z ∪ {∞} : f ↦ sup { k ∈ N | t^k | f }

(2.123) Theorem: (viz. 474)
A commutative ring (R,+, ·) is said to be a discrete valuation ring (abbreviated by DVR), iff it satisfies one of the following equivalent properties

(a) R admits a discrete valuation ν : R → N ∪ {∞}

(b) R is an integral domain and there is some element 0 ≠ m ∈ R satisfying

R \ mR = R∗ and ⋂k∈N m^kR = 0

(c) R is a PID and local ring but no field.

(d) R is an UFD but no field and any two prime elements of R are associates (i.e. if p, q ∈ R are irreducible, then we get pR = qR).

(e) R is a noetherian integral domain and local ring, whose maximal ideal m is a non-zero principal ideal (i.e. m = mR for some 0 ≠ m ∈ R).

(f) R is a noetherian, normal and local ring with non-zero maximal ideal m ≠ 0 and the only prime ideals of R are 0 and m (i.e. specR = { 0, m }).

(2.124) Definition:
A commutative ring (R,+, ·) is said to be a valuation ring, iff it is a non-zero integral domain in which the divides relation is a total order. Formally that is, iff R satisfies the following three properties

(1) R ≠ 0

(2) R is an integral domain

(3) ∀ a, b ∈ R we get a | b or b | a

(2.125) Remark:
The last property (3) in the definition above might seem a little artificial. We hence wish to explain where it comes from: as R is an integral domain we may regard its quotient field E := quot R. Then property (3) can be reformulated as

∀ x ∈ E \ {0} : x ∈ R or x^-1 ∈ R

Thus if we are given a valuation (in the sense of fields) ν : E → Z ∪ {∞} on E, we may regard the subring R := { a ∈ E | ν(a) ≥ 0 }. And it is clear that R satisfies the property x ∈ R or x^-1 ∈ R for any 0 ≠ x ∈ E. Thus every ν is assigned a valuation ring R in E. Therefore valuation rings appear naturally whenever we regard valuations on fields.
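The dichotomy x ∈ R or x^-1 ∈ R can be checked numerically for the 2-adic valuation on Q, whose valuation ring consists of the fractions with odd denominator. A small Python sketch (the helper names are our own):

```python
from fractions import Fraction

def nu2(x):
    """The 2-adic valuation on Q, for nonzero x."""
    x = Fraction(x)
    k = 0
    n, d = x.numerator, x.denominator
    while n % 2 == 0:
        n //= 2
        k += 1
    while d % 2 == 0:
        d //= 2
        k -= 1
    return k

R = lambda x: nu2(x) >= 0   # membership in the valuation ring of nu2

# for every nonzero x in Q: x in R or 1/x in R
for x in [Fraction(3, 4), Fraction(8, 5), Fraction(7, 9), Fraction(1, 2)]:
    assert R(x) or R(1 / x)
```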


(2.126) Proposition: (viz. 476)

(i) If (R,+, ·) is a valuation ring, then R already is a local ring and a Bezout domain (that is, any finitely generated ideal already is principal).

(ii) If (R,+, ·) is a valuation ring, then we get the following equivalences

(a) R is noetherian

(b) R is a PID

(c) R is a DVR or field


2.11 Dedekind Domains

(2.127) Definition:
Let (R,+, ·) be an integral domain and let us denote its quotient field by F := quot R. Then a subset f ⊆ F is called a fraction ideal of R (which we abbreviate by writing f ⊴f R) iff it satisfies

(1) f ≤m F is an R-submodule of F

(2) ∃ 0 ≠ r ∈ R such that r f ⊆ R

(2.128) Remark:

• In particular any ideal a ⊴i R is a fraction ideal a ⊴f R (Prob just let r = 1). By abuse of nomenclature ordinary ideals of R are hence also said to be the integral ideals of F.

• Conversely for any fraction ideal f ⊴f R we see that f ∩ R ⊴i R is an ideal of R. Prob because 0 ∈ f ∩ R and if f, g ∈ f and a ∈ R then clearly f + g ∈ f ∩ R and af ∈ f ∩ R, as both f and R are R-modules.

• And clearly for any x ∈ F the generated module f := xR is a fraction ideal f ⊴f R of R. Prob because if x = r/s then s f = rR ⊆ R.

• If R itself is a field, then F = R and the only possible fraction ideals of R are 0 and R (Prob if 0 ≠ x ∈ f ⊴f R then x has an inverse in R and hence 1 = x^-1 x ∈ f. Thus for any a ∈ R we have a = a·1 ∈ f).

(2.129) Proposition: (viz. 477)
Let (R,+, ·) be an integral domain and let us denote its quotient field by F := quot R. As in the case of integral ideals, if f and g ⊴f R are fraction ideals of R, then the sum f + g, intersection f ∩ g and product f g of finitely many fraction ideals is a fraction ideal again. And if g ≠ 0 we may also define the quotient f : g of two fraction ideals. I.e. if f and g ⊴f R are fraction ideals of R then we obtain further fraction ideals of R by letting

f + g := { x + y | x ∈ f, y ∈ g }

f ∩ g := { x | x ∈ f and x ∈ g }

f : g := { x ∈ F | x g ⊆ f }

f g := { f1 g1 + · · · + fn gn | n ∈ N, fi ∈ f, gi ∈ g }
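For R = Z every non-zero fraction ideal is of the form qZ for a unique positive rational q, and the four operations translate into arithmetic on generators: the sum corresponds to a gcd, the intersection to an lcm, and the product and quotient to product and quotient of the generators. A Python sketch (the representation by a positive generator is our own choice, not from the text):

```python
from fractions import Fraction
from math import gcd

# a non-zero fraction ideal q*Z of Z is represented by its positive generator q
def fsum(p, q):   # f + g  <->  gcd of the generators
    return Fraction(gcd(p.numerator * q.denominator, q.numerator * p.denominator),
                    p.denominator * q.denominator)

def fmeet(p, q):  # f ∩ g  <->  lcm of the generators
    return p * q / fsum(p, q)

def fprod(p, q):  # f g    <->  product of the generators
    return p * q

def fquot(p, q):  # f : g  <->  quotient of the generators
    return p / q

f, g = Fraction(2, 3), Fraction(1, 2)
assert fsum(f, g) == Fraction(1, 6)              # (2/3)Z + (1/2)Z = (1/6)Z
assert fmeet(f, g) == 2                          # (2/3)Z ∩ (1/2)Z = 2Z
assert fprod(f, g) == Fraction(1, 3)
assert fquot(Fraction(1), f) == Fraction(3, 2)   # R : f, the inverse of f
assert fprod(f, fquot(Fraction(1), f)) == 1      # f (R : f) = R
```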

(2.130) Remark:

• Let now 0 ≠ f and g ⊴f R be two non-zero fraction ideals of the integral domain (R,+, ·) and let F = quot R. Then the product f g and quotient f : g are non-zero, too. Formally that is

f, g ≠ 0 =⇒ f g ≠ 0 and f : g ≠ 0


Prob as f and g ≠ 0 we may choose some 0 ≠ f ∈ f and 0 ≠ g ∈ g. Then fg ∈ f g and fg ≠ 0, as F is an integral domain (yielding f g ≠ 0). Now let s g ⊆ R for some s ≠ 0. Then (sf)g = f(sg) ⊆ f R ⊆ f and hence sf ∈ f : g. But as sf ≠ 0 this means f : g ≠ 0.

• For any fraction ideal f ⊴f R we always get the following two inclusions

f (R : f) ⊆ R ⊆ f : f

Prob f (R : f) contains elements of the form x1 y1 + · · · + xn yn where xi ∈ R : f and yi ∈ f. But by definition of R : f this means xi yi ∈ R and hence x1 y1 + · · · + xn yn ∈ R. And R ⊆ f : f is clear, as f ≤m F.

• First note that the multiplication f g of fraction ideals is both commutative and associative. Further it is clear that for any fraction ideal f ⊴f R we get R f = f. That is, the set of fraction ideals of R is a commutative monoid under the multiplication (defined in (2.129)) with neutral element R. This naturally leads to the definition of an invertible fraction ideal. Of course the non-zero fraction ideals also form a monoid and this is called the class monoid of R, denoted by

C(R) := { f ⊴f R | f ≠ 0 }

(2.131) Definition: (viz. 477)
Let (R,+, ·) be an integral domain and let us denote its quotient field by F := quot R. Then a fraction ideal f ⊴f R of R is said to be invertible, iff it satisfies one of the following two equivalent conditions

(a) ∃ g ⊴f R : f g = R

(b) f (R : f) = R

Note that in this case the fraction ideal g with f g = R is uniquely determined, to be g = R : f. And it is called the inverse of f, written as

f^-1 := g = R : f

(2.132) Proposition: (viz. 477)
Let (R,+, ·) be an integral domain and let us denote its quotient field by F := quot R. If now 0 ≠ a ⊴i R is a non-zero ideal of R, then the following statements are equivalent

(a) a is invertible (as a fraction ideal of R), i.e. there is some fraction ideal g ⊴f R of R such that a g = R.

(b) There are some a1, . . . , an ∈ a and x1, . . . , xn ∈ F such that for any i ∈ 1 . . . n we get xi a ⊆ R and a1 x1 + · · · + an xn = 1.

(c) a is a projective R-module, i.e. there is a free R-module M and a submodule P ≤m M such that a ⊕ P = M.


(2.133) Corollary: (viz. 479)
Let (R,+, ·) be a local ring that also is an integral domain. If now 0 ≠ a ⊴i R is a non-zero ideal of R then the following statements are equivalent

(a) a is a principal ideal (i.e. a = aR for some a ∈ R)

(b) a is invertible (as a fraction ideal a ⊴f R)

(2.134) Remark:

• If aR is any non-zero (a ≠ 0) principal ideal in an integral domain (R,+, ·), then aR ⊴f R is an invertible fraction ideal of R. This is clear, as its inverse is obviously given by

(aR)^-1 = (1/a) R

• Now consider an (integral) ideal a ⊴i R and elements a1, . . . , an ∈ a and x1, . . . , xn ∈ R : a ⊆ F such that a1 x1 + · · · + an xn = 1. Then a is already generated by the ai, formally that is

a = 〈 a1, . . . , an 〉i

Prob the inclusion ”⊇” is clear, as ai ∈ a. For the converse inclusion we consider any a ∈ a. As xi ∈ R : a we in particular have xi a ⊆ R. And because of a1 x1 + · · · + an xn = 1 we are able to write a in the form a = (x1 a)a1 + · · · + (xn a)an ∈ 〈 a1, . . . , an 〉i.

• We have seen in the proposition above, that an (integral) ideal a ⊴i R is invertible if and only if there are a1, . . . , an ∈ a and x1, . . . , xn ∈ F such that for any i ∈ 1 . . . n we get

xi a ⊆ R and a1 x1 + · · · + an xn = 1

Nota as we have just seen this already implies, that the ai generate a. Hence any invertible ideal already is finitely generated!

• Now consider some multiplicatively closed subset U ⊆ R with 0 ∉ U. Then F = quot R = quot U^-1 R are canonically isomorphic, by (2.112.(iii)). If now f and g ⊴f R are any two fraction ideals of R then it is easy to see that U^-1 f := { a/(ub) | a/b ∈ f, u ∈ U } ⊴f U^-1 R is a fraction ideal of U^-1 R and a straightforward computation shows

(U^-1 f) (U^-1 g) = U^-1 (f g)

In particular if f ⊴f R is invertible (over R) then U^-1 f is invertible (over U^-1 R) and vice versa. Formally that is the equivalence

f ⊴f R invertible ⇐⇒ U^-1 f ⊴f U^-1 R invertible


Prob (U^-1 f) (U^-1 g) is generated by elements of the form (a/(ub))(c/(vd)) where a/b ∈ f, c/d ∈ g and u, v ∈ U. Now uv ∈ U and (a/(ub))(c/(vd)) = (ac)/(uv bd), such that this is contained in U^-1 (f g) again. This proves ”⊆”. Conversely let a/b ∈ f, c/d ∈ g and u ∈ U. Then U^-1 (f g) is generated by elements of the form (ac)/(u bd). And as (ac)/(u bd) = (a/(ub))(c/d) this is also contained in (U^-1 f) (U^-1 g). This proves the converse inclusion ”⊇” and hence the equality. And from this equality it is clear that (U^-1 f)^-1 = U^-1 (f^-1) for invertible fraction ideals (and hence the equivalence).

(2.135) Definition: (viz. 479)
Let (R,+, ·) be an integral domain and let us denote its quotient field by F := quot R. Then R is said to be a Dedekind domain (shortly DKD) iff it satisfies one of the following equivalent statements

(a) every non-zero fraction ideal 0 ≠ f ⊴f R is invertible (i.e. there is some fraction ideal g ⊴f R such that f g = R).

(b) every non-zero (integral) ideal 0 ≠ a ⊴i R is invertible (i.e. there is some fraction ideal g ⊴f R such that a g = R).

(c) every ideal a ⊴i R with a ∉ {0, R} allows a decomposition into finitely many prime ideals, i.e. ∃ p1, . . . , pn ∈ spec R such that a = p1 . . . pn.

(d) R is a noetherian, normal ring in which every non-zero prime ideal already is maximal, that is spec R = smax R ∪ {0}.

(e) R is noetherian and for any non-zero prime ideal 0 ≠ p ∈ spec R the localisation Rp is a discrete valuation ring (i.e. a local PID).

(2.136) Example:
In particular any PID (R,+, ·) is a Dedekind domain - it is noetherian, as any ideal even has only one generator, it is normal (as it is a UFD by the fundamental theorem of arithmetic) and non-zero prime ideals are maximal (which can be seen using generators of the ideals).
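For R = Z the decomposition of statement (c) is just the prime factorization of a generator: nZ = (p1 Z)^k(1) · · · (pr Z)^k(r). A Python sketch via trial division (the function name is our own):

```python
def ideal_factorisation(n):
    """Decompose the ideal nZ (n >= 2) into prime ideals: return the
    dictionary {p: k} with nZ = (pZ)^k * ..., i.e. the factorization of n."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

assert ideal_factorisation(360) == {2: 3, 3: 2, 5: 1}  # 360Z = (2Z)^3 (3Z)^2 (5Z)
```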

(2.137) Theorem: (viz. 485)
Let (R,+, ·) be a Dedekind domain and 0 ≠ a ⊴i R be a non-zero ideal of R. Then R already satisfies an extensive list of further properties, namely

(i) Any ideal 0 ≠ a ⊴i R admits a decomposition into finitely many (up to permutations) uniquely determined prime ideals, formally

∃! p1, . . . , pn ∈ spec R \ {0} such that a = p1 . . . pn

(ii) For any ideal 0 ≠ a ⊴i R the quotient ring R/a is a principal ring (i.e. a ring in which any ideal is generated by a single element).

(iii) Any ideal of R can be generated by two elements already, we even get:

∀ 0 ≠ a ∈ a ∃ b ∈ a such that a = aR + bR

(iv) If R also is semilocal (i.e. R only has finitely many maximal ideals) then R already is a PID (i.e. every ideal is a principal ideal).


(v) If m ⊴i R is a maximal ideal of R and 1 ≤ k ∈ N then we obtain the following isomorphism of rings (where as usual Rm = (R \ m)^-1 R and m^k Rm = (R \ m)^-1 m^k ⊴i Rm denote the localisations of R resp. m^k)

R/m^k ≅ Rm/m^k Rm : a + m^k 7→ a/1 + m^k Rm

(2.138) Definition: (viz. 483)
Let (R,+, ·) be a Dedekind domain, m ∈ smax R be a maximal ideal and 0 ≠ a ⊴i R be a non-zero ideal of R. Then we define the valuation νm induced by m on the set of ideals of R, by

νm : ideal R → N : a 7→ max{ k ∈ N | a ⊆ m^k }, 0 7→ 0

And for any a ∈ R we denote νm(a) := νm(aR) ∈ N. Let now a, b ⊴i R be arbitrary ideals of R. Then we say that b divides a, written as b | a, iff it satisfies one of the following equivalent properties:

b | a :⇐⇒ ∃ c ⊴i R : a = b c
⇐⇒ a ⊆ b
⇐⇒ ∀ m ∈ smax R : νm(b) ≤ νm(a)
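Again for R = Z and m = pZ, νm(aZ) is the exponent of p in a, and the three characterisations of b | a can be compared directly. A Python sketch (the function names are our own; mathematically only prime p matter, and the composite p swept up by the loop are harmless):

```python
def nu(p, n):
    """nu_{pZ}(nZ): the largest k with nZ contained in p^k Z (n != 0)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def divides_by_valuations(b, a):
    """b | a iff nu_p(b) <= nu_p(a) for every p (primes suffice)."""
    return all(nu(p, b) <= nu(p, a) for p in range(2, abs(b) + 1))

a, b = 360, 12
assert a % b == 0                                         # exists c with a = b*c
assert set(range(0, 1000, a)) <= set(range(0, 1000, b))   # aZ ⊆ bZ (sampled)
assert divides_by_valuations(b, a)                        # nu_p(b) <= nu_p(a)
assert not divides_by_valuations(7, a)                    # 7 does not divide 360
```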

(2.139) Proposition: (viz. 483)
Let (R,+, ·) be a Dedekind domain with quotient field F := quot R. And let m ⊴i R be any maximal (i.e. non-zero prime) ideal of R. Then we obtain the following statements:

(i) Consider a finite collection M = { m1, . . . , mn } ⊆ smax R of pairwise distinct maximal ideals and k(1), . . . , k(n) ∈ N. Then for any m ∈ smax R we obtain

νm( m1^k(1) · · · mn^k(n) ) = k(i) if m = mi, and = 0 if m ∉ M

(ii) Consider any two non-zero ideals 0 ≠ a, b ⊴i R, then the multiplicity function νm acts additively under multiplication of ideals, that is

νm(a b) = νm(a) + νm(b)

(iii) Let 0 ≠ a, b ⊴i R be any two non-zero ideals of R and decompose a = m1^k(1) · · · mn^k(n) and b = m1^l(1) · · · mn^l(n) with pairwise distinct maximal ideals mi ∈ smax R and k(i), l(i) ∈ N (note that this can always be established, as k(i) = 0 and l(i) = 0 is allowed). Then we get

a + b = m1^m(1) · · · mn^m(n) where m(i) := min{ k(i), l(i) }


(iv) Consider any non-zero ideal 0 ≠ a ⊴i R, as before we decompose a into a = m1^k(1) · · · mn^k(n) (with pairwise distinct mi ∈ smax R and 1 ≤ k(i) ∈ N). Then we obtain the chinese remainder theorem:

R/a ≅ R/m1^k(1) ⊕ · · · ⊕ R/mn^k(n) : b + a 7→ ( b + mi^k(i) )
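For R = Z and a = 360Z = (2Z)^3 (3Z)^2 (5Z) statement (iv) specialises to the classical chinese remainder theorem Z/360 ≅ Z/8 ⊕ Z/9 ⊕ Z/5. A Python sketch checking that the stated map is a bijective ring homomorphism:

```python
moduli = [8, 9, 5]          # the m_i^k(i) for 360 = 2^3 * 3^2 * 5
n = 8 * 9 * 5

phi = lambda b: tuple(b % m for m in moduli)   # b + a  |->  (b + m_i^k(i))

images = {phi(b) for b in range(n)}
assert len(images) == n     # injective, hence bijective onto the product

# it respects + and *, i.e. it is a homomorphism of rings
a, b = 123, 271
assert phi((a + b) % n) == tuple((x + y) % m
                                 for x, y, m in zip(phi(a), phi(b), moduli))
assert phi((a * b) % n) == tuple((x * y) % m
                                 for x, y, m in zip(phi(a), phi(b), moduli))
```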

(v) Let a ⊴i R be any ideal in R, then there is some ideal b ⊴i R such that a and b are coprime (that is a + b = R) and a b is principal (i.e. ∃ a ∈ R : a b = aR).


Chapter 3

Modules

3.1 Defining Modules

(3.1) Definition:
Let (R,+, ·) be a ring, then the triple (M,+, ⋄) is said to be an R-module, iff M ≠ ∅ is a non-empty set and + resp. ⋄ are binary operations of the form + : M × M → M : (x, y) 7→ x + y and ⋄ : R × M → M : (a, x) 7→ a ⋄ x = ax that satisfy the following properties

(G) (M,+) is a commutative group, that is the addition is associative and commutative, admits a neutral element (called zero-element, denoted by 0) and every element of M has an additive inverse. Formally

∀ x, y, z ∈ M : x + (y + z) = (x + y) + z

∀ x, y ∈ M : x + y = y + x

∃ 0 ∈ M ∀ x ∈ M : x + 0 = x

∀ x ∈ M ∃ x' ∈ M : x + x' = 0

(M) The addition + and scalar multiplication ⋄ on M satisfy the following compatibility conditions regarding the operations of R

∀ a ∈ R ∀ x, y ∈ M : a ⋄ (x + y) = (a ⋄ x) + (a ⋄ y)

∀ a, b ∈ R ∀ x ∈ M : (a + b) ⋄ x = (a ⋄ x) + (b ⋄ x)

∀ a, b ∈ R ∀ x ∈ M : (a · b) ⋄ x = a ⋄ (b ⋄ x)

∀ x ∈ M : 1 ⋄ x = x
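As a concrete instance, Z^2 with componentwise addition and the scalar multiplication a(x1, x2) := (a·x1, a·x2) satisfies all of the axioms above. A Python sketch spot-checking (G) and (M) on sample elements (the helper names are our own):

```python
def add(x, y):
    """Componentwise addition on Z^2 (tuples of integers)."""
    return tuple(a + b for a, b in zip(x, y))

def smul(a, x):
    """Scalar multiplication a(x1, x2) = (a*x1, a*x2)."""
    return tuple(a * c for c in x)

x, y = (1, -2), (3, 5)
a, b = 4, -7

# (G): commutative group axioms (zero element and negatives)
assert add(x, y) == add(y, x)
assert add(x, (0, 0)) == x
assert add(x, smul(-1, x)) == (0, 0)

# (M): compatibility of addition and scalar multiplication
assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))
assert smul(a + b, x) == add(smul(a, x), smul(b, x))
assert smul(a * b, x) == smul(a, smul(b, x))
assert smul(1, x) == x
```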

(3.2) Remark:

• The notations in the definition above deserve some special attention. First note that - as always with binary operations - we wrote x + y instead of +(x, y). Likewise a ⋄ x instead of ⋄(a, x) and of course a + b instead of +(a, b), a · b instead of ·(a, b) (as we already did with rings).

• Recall that for a ring (R,+, ·) the operation + has been called addition and · has been called multiplication of R. In analogy, if (M,+, ⋄) is an R-module, then + is said to be the addition of M and ⋄ is called the scalar multiplication of M.


• Next note that we have used two different binary operations +, namely + : R × R → R of (R,+, ·) and + : M × M → M of (M,+, ⋄). At first it may be misleading to denote two different functions by the same symbol, but you will soon become used to it and appreciate the simplicity of this notation. It will not lead to ambiguities, as the neighbouring elements (a and b for a + b, resp. x and y for x + y) determine which of the two functions is to be applied. As an exercise it might be useful to rewrite the defining properties (M) using +R for + in R and +M for + in M. As an example let us reformulate the first two properties in this way

∀ a ∈ R ∀ x, y ∈ M : a ⋄ (x +M y) = (a ⋄ x) +M (a ⋄ y)

∀ a, b ∈ R ∀ x ∈ M : (a +R b) ⋄ x = (a ⋄ x) +M (b ⋄ x)

• Note that the properties (G) of R-modules truly express that (M,+) is a commutative group. In particular we employ the usual notational conventions for groups. E.g. we may omit the bracketing in sums due to the associativity of the addition + of M

x + y + z := (x + y) + z

x1 + · · · + xn := ( (x1 + x2) + . . . ) + xn

And as always the neutral element 0 ∈ M is uniquely determined again (if 0 and 0′ are neutral elements, then 0′ = 0 + 0′ = 0′ + 0 = 0). Likewise given x ∈ M there is a uniquely determined x' ∈ M such that x + x' = 0 and this is said to be the negative element of x, denoted by −x := x' (if p and q are negative elements of x then p = p + 0 = p + (x + q) = q + (x + p) = q + 0 = q).

• Let (R,+, ·) be any ring (with unit element 1) and M be an R-module, then we wish to note some immediate consequences of the axioms above. Let x ∈ M and recall that −x denotes the negative of x, then

0 ⋄ x = 0

(−1) ⋄ x = −x

Prob as 0 = 0 + 0 in R we get 0 ⋄ x = (0 + 0) ⋄ x = (0 ⋄ x) + (0 ⋄ x) from (M). And subtracting 0 ⋄ x once from this equality we find 0 = 0 ⋄ x. This may now be used to see 0 = 0 ⋄ x = (1 + (−1)) ⋄ x = (1 ⋄ x) + ((−1) ⋄ x). Subtracting x = 1 ⋄ x we find −x = (−1) ⋄ x.

• We have already simplified the notations for modules rigorously. However it is still useful to introduce some further conventions. Recall that for rings (R,+, ·) we wrote ab instead of a · b. And we wrote a − b instead of a + (−b) and ab + c instead of (a · b) + c. The same is true for the addition in the module M: we will write x − y instead of x + (−y). Likewise we write ax instead of a ⋄ x and declare that the scalar multiplication ⋄ is of higher priority than the addition +. That is we write ax + y instead of (a ⋄ x) + y. Note that it is unambiguous to write abx instead of (a · b) ⋄ x = a ⋄ (b ⋄ x), as this equality has been one


of the axioms for R-modules. Thus we may also omit the brackets inmixed terms with multiplication and scalar multiplication.

• In contrast to rings (where we faithfully wrote (R,+, ·) on every first occurrence of the ring) we will omit the operations + and ⋄ when referring to the R-module (M,+, ⋄). That is we will say: let M be an R-module and think of (M,+, ⋄) instead.

• What we defined here is also called a left R-module, this is to say that in the scalar multiplication ⋄ : R × M → M the ring R acts from the left. One might be tempted to regard right R-modules as well, i.e. structures (M,+,⊲) in which the ring R acts from the right ⊲ : M × R → M : (x, a) 7→ x ⊲ a, satisfying the analogous properties (x + y) ⊲ a = (x ⊲ a) + (y ⊲ a), x ⊲ (a + b) = (x ⊲ a) + (x ⊲ b), x ⊲ (a · b) = (x ⊲ a) ⊲ b and x ⊲ 1 = x. However this would only yield an entirely analogous theory: Recall the construction of the opposite ring (R,+, ·)op in section 1.4. If (M,+, ⋄) is a (left) R-module, then we obtain a right Rop-module (M,+, ⋄)op by taking M as a set and + as an addition again, but using a scalar multiplication that reverses the order of elements. That is we obtain a right Rop-module by

(M,+, ⋄)op := (M,+, ⊲) where x ⊲ a := a ⋄ x

• The theory of R-modules is particularly simple in the case that R is a field. As this case historically has been studied first there is some special nomenclature: suppose that V is an F-module, where (F,+, ·) is a field. Then we will use the following notions

set theory       linear algebra
(F,+, ·)         base field
(V,+, ⋄)         vector space
elements of F    scalars
elements of V    vectors

(3.3) Example:

• Consider any commutative group (G,+), then G becomes a Z-module under the following scalar multiplication ⋄ : Z × G → G

k ⋄ x := x + · · · + x (k summands) for k > 0
k ⋄ x := 0 for k = 0
k ⋄ x := (−x) + · · · + (−x) (−k summands) for k < 0
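This repeated-addition action can be sketched in Python for an arbitrary abelian group given by its operations (the functional representation is our own):

```python
def zmod_smul(k, x, add, zero, neg):
    """k x in an abelian group (G, add): repeated addition, as defined above."""
    if k == 0:
        return zero
    if k < 0:
        k, x = -k, neg(x)
    out = zero
    for _ in range(k):
        out = add(out, x)
    return out

# the group (Z/7, +) as a Z-module
add7 = lambda a, b: (a + b) % 7
neg7 = lambda a: (-a) % 7
assert zmod_smul(3, 5, add7, 0, neg7) == 15 % 7       # 3 x = 5 + 5 + 5
assert zmod_smul(-2, 3, add7, 0, neg7) == (-6) % 7    # (-2) x = (-3) + (-3)
assert zmod_smul(0, 4, add7, 0, neg7) == 0
```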

• Let (R,+, ·) be any ring and a ≤m R be a left-ideal of R (note that in particular a = R is allowed). Then a is an R-module under the operations inherited from R, that is

+ : a × a → a : (x, y) 7→ x + y

⋄ : R × a → a : (a, x) 7→ ax


That is: although we started with quite a lot of axioms for R-modules, examples are abundant - the theory of modules encompasses the theory of rings! And the proofs are full of examples of how module theory can be put to good use in ring theory (e.g. the lemma of Nakayama).

• Consider any (i.e. not necessarily commutative) ring (S,+, ·) and suppose R ≤r S is a subring of S. If now M is an S-module, then M becomes an R-module under the operations inherited from S

+ : M × M → M : (x, y) 7→ x + y

⋄ : R × M → M : (a, x) 7→ ax

• Let (R,+, ·) and (S,+, ·) be any two (not necessarily commutative) rings and consider a ring-homomorphism ϕ : R → S. If now M is an S-module, then M also becomes an R-module under the operations

+ : M × M → M : (x, y) 7→ x + y

⋄ : R × M → M : (a, x) 7→ ϕ(a)x

• Consider an arbitrary (i.e. not necessarily commutative) ring (R,+, ·). If now 1 ≤ n ∈ N then R^n becomes a well defined R-module under the pointwise operations + : R^n × R^n → R^n and ⋄ : R × R^n → R^n

(x1, . . . , xn) + (y1, . . . , yn) := (x1 + y1, . . . , xn + yn)

a ⋄ (x1, . . . , xn) := (a · x1, . . . , a · xn)

Note that the operations + and · on the right hand side are those of the ring R, whereas the operations + and ⋄ are thereby defined on the module R^n. Further note that in the case n = 1 we get R^n = R and the addition + on R^1 by definition coincides with the addition + on R. This is precisely the reason why we did not need to distinguish between +R and +M - in case of doubt it doesn't matter.


(3.4) Definition:
Let (R,+, ·) be a ring, then the quadruple (A,+, ·, ⋄) is said to be an R-semi-algebra, iff A ≠ ∅ is a non-empty set and +, · resp. ⋄ are binary operations of the form + : A × A → A, · : A × A → A and ⋄ : R × A → A that satisfy the following properties

(R) (A,+, ·) is a semi-ring, that is the addition + is associative and commutative, admits a neutral element (called zero-element, denoted by 0), every element of A has an additive inverse, the multiplication · is associative and satisfies the distributivity laws. Formally

∀ f, g, h ∈ A : f + (g + h) = (f + g) + h

∀ f, g ∈ A : f + g = g + f

∃ 0 ∈ A ∀ f ∈ A : f + 0 = f

∀ f ∈ A ∃ f' ∈ A : f + f' = 0

∀ f, g, h ∈ A : f · (g · h) = (f · g) · h

∀ f, g, h ∈ A : f · (g + h) = (f · g) + (f · h)

∀ f, g, h ∈ A : (f + g) · h = (f · h) + (g · h)

(M) (A,+, ⋄) is an R-module, that is the addition + and scalar multiplication ⋄ of A satisfy the following compatibility conditions

∀ a ∈ R ∀ f, g ∈ A : a ⋄ (f + g) = (a ⋄ f) + (a ⋄ g)

∀ a, b ∈ R ∀ f ∈ A : (a + b) ⋄ f = (a ⋄ f) + (b ⋄ f)

∀ a, b ∈ R ∀ f ∈ A : (a · b) ⋄ f = a ⋄ (b ⋄ f)

∀ f ∈ A : 1 ⋄ f = f

(A) The multiplication · and scalar multiplication ⋄ of A are compatible in the following sense (note that this is some kind of associativity)

∀ a ∈ R ∀ f, g ∈ A : (a ⋄ f) · g = a ⋄ (f · g) = f · (a ⋄ g)

If now (A,+, ·, ⋄) is an R-algebra, then we transfer most of the notions of ring theory to the case of algebras, by referring to the semi-ring (A,+, ·). To be precise we introduce the following notions

(A,+, ·, ⋄) is called ...    :⇐⇒ (A,+, ·) is a(n) ...
R-algebra                    :⇐⇒ ring
commutative                  :⇐⇒ commutative
integral                     :⇐⇒ integral domain
division algebra             :⇐⇒ skew-field
noetherian                   :⇐⇒ commutative, noetherian ring
artinian                     :⇐⇒ commutative, artinian ring
local                        :⇐⇒ commutative, local ring
semi-local                   :⇐⇒ commutative, semi-local ring


(3.5) Remark:

• By definition any R-semi-algebra (A,+, ·, ⋄) has a (uniquely determined) neutral element of addition, denoted by 0 (that is f + 0 = f for any f ∈ A). To be an R-algebra by definition means that the multiplication also admits a neutral element. That is there is some 1 ∈ A such that for any f ∈ A we get f · 1 = f = 1 · f. And due to the associativity of · this unit element 1 is uniquely determined again.

• If (A,+, ·, ⋄) is an R-algebra, then by definition (A,+, ·) is a semi-ring and (A,+, ⋄) is an R-module. In particular A inherits all the properties of these objects and we also pick up all notational conventions introduced for these objects. That is we write 0 for the zero-element and 1 for the unit element (supposed A is an R-algebra). Likewise we write −f for the additive inverse of f (that is f + (−f) = 0) and f^-1 for the multiplicative inverse of f (that is f · f^-1 = 1 = f^-1 · f), supposed such an element exists. In case you are not familiar with these conventions we recommend having a look at section 1.3.

• If A is a commutative R-semi-algebra, then condition (A) boils down to (a ⋄ f) · g = a ⋄ (f · g) for any a ∈ R and f, g ∈ A. Because if this property is satisfied then we already get (a ⋄ f) · g = f · (a ⋄ g) from the commutativity f · (a ⋄ g) = (a ⋄ g) · f = a ⋄ (g · f) = a ⋄ (f · g) = (a ⋄ f) · g.

• If A is an R-algebra, i.e. an R-semi-algebra containing a unit element 1, then due to property (A) the scalar multiplication ⋄ can be expressed in terms of the ordinary multiplication of A with elements of the form a ⋄ 1. To be precise, let a ∈ R and f ∈ A, then we clearly obtain

(a ⋄ 1) · f = a ⋄ f

• Note that property (A) in the definition of R-semi-algebras enables us to not only omit the multiplicative symbol · and scalar multiplication ⋄ but also the bracketing. Whenever we have a term of the form afg then we may use either bracketing (af)g = a(fg) due to (A).

• (♦) Let (R,+, ·) be a commutative ring and A be an R-semi-algebra, then we can embed A into an R-algebra A′ canonically: as a set we take A′ := R × A and define the following operations on A′

(a, f) + (b, g) := (a + b, f + g)

(a, f) · (b, g) := (ab, fg + a ⋄ g + b ⋄ f)

a ⋄ (b, f) := (ab, a ⋄ f)

Then (A′,+, ·, ⋄) truly becomes an R-algebra (the verification of this is left to the interested reader) with zero-element (0, 0) and unit element (1, 0). And of course we get a canonical embedding (i.e. an injective homomorphism of R-algebras), by virtue of

A → A′ : f 7→ (0, f)
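This unitalization can be sketched in Python for the toy case R = Z and A = 2Z, a semi-algebra without unit element (the example and helper names are our own):

```python
# unitalization A' = R x A for R = Z, A = 2Z (a semi-algebra without unit)
def add(u, v):
    (a, f), (b, g) = u, v
    return (a + b, f + g)

def mul(u, v):
    # (a, f) * (b, g) = (ab, fg + a g + b f), as defined above
    (a, f), (b, g) = u, v
    return (a * b, f * g + a * g + b * f)

def smul(a, v):
    (b, f) = v
    return (a * b, a * f)

one = (1, 0)
u = (0, 6)    # the image of 6 under the embedding f |-> (0, f)
assert mul(one, u) == u and mul(u, one) == u   # (1, 0) is a unit element
# the embedding respects multiplication: (0, f)(0, g) = (0, fg)
f, g = 4, 6
assert mul((0, f), (0, g)) == (0, f * g)
```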


(3.6) Example:

• Let (R,+, ·) be a commutative ring, then R becomes an R-module (even an R-algebra) canonically under its addition + as an addition of vectors and its multiplication · as both its multiplication and scalar multiplication. I.e. if we let A := R, then R becomes an R-algebra (A,+, ·, ⋄), under the operations

+ : A × A → A : (f, g) 7→ f + g

· : A × A → A : (f, g) 7→ fg

⋄ : R × A → A : (a, f) 7→ af

In the following - whenever we say that we regard some ring (R,+, ·) as an R-module - we will refer to this construction. And in this case any subset a ⊆ R is an ideal of R if and only if it is an R-submodule of R (where R is regarded as an R-module, as we just did)

a ⊴i R ⇐⇒ a ≤m R

• Let (R,+, ·) be a commutative ring again and I ≠ ∅ be any non-empty set, then we take A := F(I, R) to be the set of functions from I to R. Note that this includes the case A = R^n by taking I := 1 . . . n. Then A becomes an R-algebra (A,+, ·, ⋄), under the pointwise operations

+ : A × A → A : (f, g) 7→ (i 7→ f(i) + g(i))

· : A × A → A : (f, g) 7→ (i 7→ f(i) · g(i))

⋄ : R × A → A : (a, f) 7→ (i 7→ af(i))

• Likewise let (R,+, ·) be a commutative ring and a ⊴i R be an ideal of R. Then we may consider the quotient ring A := R/a as an R-algebra (A,+, ·, ⋄), under the following operations

+ : A × A → A : (b + a, c + a) 7→ (b + c) + a

· : A × A → A : (b + a, c + a) 7→ (bc) + a

⋄ : R × A → A : (a, b + a) 7→ (ab) + a

• This is a special case of the following general concept: consider any two commutative rings (R,+, ·) and (A,+, ·) and regard an arbitrary ring-homomorphism ϕ : R → A. Then A can be considered to be an R-algebra under its own addition + and its own multiplication · as multiplication, and scalar multiplication with the imported elements ϕ(a). That is A becomes an R-algebra under the operations

+ : A × A → A : (f, g) 7→ f + g

· : A × A → A : (f, g) 7→ fg

⋄ : R × A → A : (a, f) 7→ ϕ(a)f
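A concrete Python sketch for R = Z, A = Z/12 and ϕ the canonical projection (our own toy example), spot-checking the module axioms and the compatibility (a f)g = a(fg):

```python
n = 12
phi = lambda a: a % n        # the canonical ring homomorphism Z -> Z/n

addA = lambda f, g: (f + g) % n
mulA = lambda f, g: (f * g) % n
smul = lambda a, f: (phi(a) * f) % n   # the scalar multiplication via phi

# module axioms (M) through phi
a, b, f, g = 25, -3, 7, 10
assert smul(a, addA(f, g)) == addA(smul(a, f), smul(a, g))
assert smul(a + b, f) == addA(smul(a, f), smul(b, f))
assert smul(a * b, f) == smul(a, smul(b, f))
# compatibility (A): (a f) g = a (f g)
assert mulA(smul(a, f), g) == smul(a, mulA(f, g))
```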


• The most general case of the above is the following: let (R,+, ·) be any (that is even a non-commutative) ring. Then R still is an R-module under the above operations. But R no longer needs to be an R-algebra. Likewise consider a second ring (S,+, ·) and a ring-homomorphism ϕ : R → S. Then S is an R-module under the operations

+ : S × S → S : (x, y) 7→ x + y

⋄ : R × S → S : (a, x) 7→ ϕ(a)x

Again S need not be an R-algebra, in general. However S becomes an R-algebra (under its own multiplication), if the image of R under ϕ is contained in the center of S, that is if

ϕ(R) ⊆ cen(S) := { x ∈ S | ∀ y ∈ S : xy = yx }

• An example of the above situation will be the ring of n × n matrices over a commutative ring (R,+, ·). Recall the construction of matrices given in section 1.4 (for more details confer to section ??) and let S := matn R. Then we obtain a homomorphism of rings by letting

ϕ : R → S : a 7→ diag(a, . . . , a)

(the diagonal matrix with all diagonal entries equal to a and all other entries 0). And as R is commutative it is clear that ϕ(R) ⊆ cen(S) (in fact it even is true that ϕ(R) = cen(S)) and hence S becomes a well-defined R-algebra (under the usual multiplication and addition of matrices).

(3.7) Definition:
Let (R,+, ·) be a ring and M be an R-module, then a subset P ⊆ M is said to be an R-submodule of M, iff it satisfies the following properties

(1) 0 ∈ P

(2) x, y ∈ P =⇒ x+ y ∈ P

(3) a ∈ R, x ∈ P =⇒ ax ∈ P

And in this case we will write P ≤m M. Likewise if A is an R-semi-algebra and P ⊆ A is a subset, then P is said to be an R-sub-semi-algebra, iff it is both a submodule and a sub-semi-ring of A, formally iff it satisfies

(M) P ≤m A

(4) f , g ∈ P =⇒ fg ∈ P

And in this case we will write P ≤b A. If A even is an R-algebra (with unit element 1), then a subset P ⊆ A is said to be an R-subalgebra, iff it is both a submodule and a subring of A, formally iff it satisfies

(B) P ≤b A

(5) 1 ∈ P


And in this case we will write P ≤a A. Let now A be an R-semi-algebra again. Then a subset a ⊆ A is said to be an R-algebra-ideal of A, iff it is both an ideal and a submodule of A, formally iff it satisfies

(M) a ≤m A

(I) f ∈ a, g ∈ A =⇒ fg ∈ a and gf ∈ a

And in this case we write a ⊴a A. Having defined all these notions, the set of all submodules of an R-module M is denoted by subm M. Likewise subb A denotes the set of all R-sub-semi-algebras, and aideal A denotes the set of all algebra-ideals of a semi-algebra A. And finally the set of all R-subalgebras of an R-algebra A is denoted by suba A. Formally we define

subm M := { P ⊆ M | P ≤m M }
subb A := { P ⊆ A | P ≤b A }
suba A := { P ⊆ A | P ≤a A }
aideal A := { a ⊆ A | a ⊴a A }

(3.8) Remark:

• If P ≤m M is a submodule (of the R-module M, where (R,+, ·) is an arbitrary ring), then P ≤g M already is a subgroup of (M,+), too. That is 0 ∈ P and for any x, y ∈ P we get −x ∈ P and x + y ∈ P. Thereby −x ∈ P follows from (3) by regarding a := −1, as in this case we get −x = (−1) ⋄ x ∈ P.

• Likewise if A is an R-algebra (over an arbitrary ring (R,+, ·)), then we only need to check properties (1), (2), (4) and (5), as (3) already follows from (4). Given a ∈ R and f ∈ P we get a ⋄ f = (a ⋄ 1A) · f ∈ P.

• Let A be an R-semi-algebra over the ring (R,+, ·), then any sub-semi-algebra of A already is a sub-semi-ring of A, formally

P ≤b A =⇒ P ≤s A

Prob properties (1), (2) and (4) of sub-semi-algebras and sub-semi-rings coincide. It only remains to check property (3) of sub-semi-rings: consider any f ∈ P, then −f = (−1) ⋄ f ∈ P by (3) of sub-semi-algebras.

• Let A be an R-semi-algebra over the ring (R,+, ·), then a is an algebra-ideal of A iff it is an ideal and an R-sub-semi-algebra of A, formally

a ⊴a A ⇐⇒ a ≤b A and a ⊴i A

Prob the implication ⇐= is clear: (M) of algebra-ideals is identical to (M) of sub-semi-algebras and (I) of algebra-ideals just combines (4) and (5) of ideals. Conversely we first check property (4) of sub-semi-algebras: if f, g ∈ a, then in particular g ∈ A and hence fg ∈ a by (I) of algebra-ideals. Now properties (1) and (2) of ideals are contained in (M) of algebra-ideals ((4) and (5) are (I) again). Thus it only remains to check property (3) of ideals. But if f ∈ a then, as before, −f = (−1) ⋄ f ∈ a by property (3) of algebra-ideals.


• If A even is an R-algebra over the ring (R,+,·), then there is no difference between ideals and algebra-ideals of A, formally

a ⊴a A ⇐⇒ a ⊴i A

Prob we have already seen a ⊴a A =⇒ a ⊴i A above. Further we have already argued that it only remains to check property (3) of algebra-ideals: thus consider any a ∈ R and f ∈ a, then a1 ∈ A (where 1 ∈ A, as A is an R-algebra) and hence af = (a1)f ∈ a by (I).

(3.9) Proposition: (viz. 318)
Fix any ring (R,+,·) and for the moment being let ? abbreviate any one of the words R-module, R-semi-algebra or R-algebra and let M be a ?. Further consider an arbitrary family (where i ∈ I ≠ ∅) Pi ⊆ M of sub-?s of M. Then the intersection of the Pi is a sub-? again

⋂i∈I Pi ⊆ M is a sub-?

Likewise let A be an R-semi-algebra and let ai ⊴a A be an arbitrary (that is i ∈ I ≠ ∅ again) family of R-algebra-ideals of A, then the intersection of the ai is an R-algebra-ideal of A again

⋂i∈I ai ⊴a A is an R-algebra-ideal

(3.10) Definition:
Fix any ring (R,+,·) again, let ? abbreviate any one of the words R-module, R-semi-algebra or R-algebra again and let M be a ?. If now X ⊆ M is an arbitrary subset then we define the ? generated by X to be the intersection of all sub-?s containing X

⟨X⟩m := ⋂{ P ⊆ M | X ⊆ P ≤m M }
⟨X⟩b := ⋂{ P ⊆ M | X ⊆ P ≤b M }
⟨X⟩a := ⋂{ P ⊆ M | X ⊆ P ≤a M }

Nota in case there is any doubt concerning the module X is contained in (e.g. X ⊆ M ⊆ N) we emphasise the R-module M used in this construction by writing ⟨X ⊆ M⟩m or ⟨X⟩m ≤m M or even M⟨X⟩m. Let us finally define the linear hull lh(X) resp. lhR(X) of X to be the following subset of M. If X = ∅, then we let lh(∅) := {0} and if X ≠ ∅ then

lh(X) = { a1x1 + · · · + anxn | 1 ≤ n ∈ N, ai ∈ R, xi ∈ X }


(3.11) Proposition: (viz. 319)
Fix any ring (R,+,·) and consider an R-module M containing the non-empty subset ∅ ≠ X ⊆ M. Then the submodule generated by X is precisely the linear hull of X in M, formally

⟨X⟩m = { a1x1 + · · · + anxn | 1 ≤ n ∈ N, ai ∈ R, xi ∈ X }

And if A is an R-semi-algebra, ∅ ≠ X ⊆ A, then we can also describe the R-sub-semi-algebra generated by X explicitly. Namely we get

⟨X⟩b = { ∑i=1..n ai (xi,1 · · · xi,m) | 1 ≤ m, n ∈ N, ai ∈ R, xi,j ∈ X }

Finally if A is an R-algebra, then the sub-algebra generated by X becomes

⟨X⟩a = ⟨X ∪ {1}⟩b
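Proposition (3.11) becomes quite concrete over R = Z: the submodule of the Z-module Z generated by X = {4, 6} is the set of all Z-linear combinations 4a + 6b, which is exactly gcd(4,6)Z = 2Z. The following snippet is a finite sanity check of mine (not part of the text); the coefficient bound and the window are arbitrary choices:

```python
# Probe of (3.11) for R = Z, M = Z, X = {4, 6}: the linear hull of X,
# i.e. all sums 4a + 6b, coincides with 2Z = gcd(4,6)Z on a finite window.
from math import gcd

def linear_hull(X, bound=20):
    """All Z-linear combinations a1*x1 + a2*x2 with |ai| <= bound."""
    hull = {0}
    for _ in range(2):  # two passes suffice for two generators
        hull = {h + a * x for h in hull for x in X for a in range(-bound, bound + 1)}
    return hull

X = [4, 6]
window = range(-40, 41)
hull = {h for h in linear_hull(X) if h in window}
d = gcd(*X)  # = 2
assert hull == {k for k in window if k % d == 0}
print(sorted(hull)[:5])  # → [-40, -38, -36, -34, -32]
```

Note that 2 = 6 − 4 itself is such a combination, which is why the hull is all of 2Z and not just sums of multiples of 4 and 6 separately.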

(3.12) Proposition: (viz. 361)
Let (R,+,·) be an arbitrary ring and N be an R-module. Further consider a cofinite submodule P ≤m N, that is there are some x1, . . . , xn ∈ N such that

P + lh{ x1, . . . , xn } = N

If further P ≠ N, then there is a maximal submodule M of N containing P. That is M ≤m N is a submodule satisfying the properties

(1) M ≠ N

(2) P ⊆ M ⊆ N

(3) ∀ Q ≤m N we get: M ⊆ Q ⊆ N =⇒ M = Q or Q = N


3.2 First Concepts

In this and the following section we will introduce three separate concepts that are all of fundamental importance. First we will introduce quotient modules - we have already studied quotient rings in (1.57), now we will generalize this concept to modules. Secondly we will turn our attention to torsion and annihilation. This is an important concept when it comes to analyzing the structure of a module, but it does not occur in linear algebra. And finally we will introduce direct sums and products - the major tool for composing and decomposing modules over a common base ring.

(3.13) Definition: (viz. 324)
Let (R,+,·) be an arbitrary ring and consider an R-module M and an R-submodule P ≤m M of M; then P induces an equivalence relation on M, by

x ∼ y :⇐⇒ y − x ∈ P

where x, y ∈ M. The equivalence classes under this relation are called cosets of P and for any x ∈ M this is given to be

x + P := [x] = { x + p | p ∈ P }

Finally we denote the quotient set of M modulo P (meaning the relation ∼) by

M/P := M/∼ = { x + P | x ∈ M }

And thereby M/P can be turned into an R-module again under the operations

(x + P) + (y + P) := (x + y) + P
a(x + P) := (ax) + P

Thereby M/P is also called the quotient module or residue module of M modulo P. Now consider an R-semi-algebra A and an R-algebra-ideal a ⊴a A. Then A/a (in the sense above) not only is an R-module, but even becomes an R-semi-algebra under the multiplication

(f + a)(g + a) := (fg) + a

Thereby A/a is also called the quotient algebra or residue algebra of A modulo a. And if A even is an R-algebra, then A/a is an R-algebra again, its unit element being given to be 1 + a.
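Definition (3.13) can be checked by hand in the smallest interesting case R = M = Z and P = 3Z (a toy example of mine, not from the text): each coset x + 3Z is determined by x mod 3, and the operations on M/P are independent of the chosen representatives:

```python
# Probe of (3.13): cosets of P = 3Z in Z, and well-definedness of the
# operations (x+P)+(y+P) := (x+y)+P and a(x+P) := (ax)+P.
P = 3  # the submodule 3Z, identified by its positive generator

def coset(x):
    """Canonical representative of the coset x + 3Z."""
    return x % P

# replacing x by any other representative x + 3k never changes the results
for x in range(-9, 10):
    for y in range(-9, 10):
        for k in range(-3, 4):
            assert coset((x + 3 * k) + y) == coset(x + y)   # addition
            assert coset(5 * (x + 3 * k)) == coset(5 * x)   # scalar mult by a = 5

print(sorted({coset(x) for x in range(-9, 10)}))  # the three cosets: [0, 1, 2]
```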


(3.14) Proposition: (viz. 327) Correspondence Theorem
Let (R,+,·) be an arbitrary ring, M be an R-module and L ≤m M be any R-submodule of M. Then we obtain the following correspondence:

(i) Consider some R-submodule P ≤m M such that L ⊆ P, then we obtain an R-submodule P/L of the quotient module M/L by virtue of

P/L := { p + L | p ∈ P } ≤m M/L

and thereby for any element x ∈ M we obtain the following equivalence

x + L ∈ P/L ⇐⇒ x ∈ P

(ii) Now the R-submodules of M/L correspond to the R-submodules of M containing L and this correspondence can be explicitly given to be

subm(M/L) ←→ { P ∈ subm M | L ⊆ P }
U ↦ { x ∈ M | x + L ∈ U }
P/L ←[ P

(iii) Consider an arbitrary family Pi ≤m M (where i ∈ I) of R-submodules of M such that for any i ∈ I we get L ⊆ Pi. Then the intersection commutes with taking quotients, that is we obtain

⋂i∈I (Pi/L) = (⋂i∈I Pi)/L

(iv) As in (iii) consider any Pi ≤m M where i ∈ I and L ⊆ Pi, then the summation also commutes with taking quotients, that is

∑i∈I (Pi/L) = (∑i∈I Pi)/L

(v) More generally consider any family of submodules Pi ≤m M (where i ∈ I) such that L ⊆ ∑i Pi, then we obtain the following identity

∑i∈I ((Pi + L)/L) = (∑i∈I Pi)/L

(vi) Finally for any subset X ⊆ M let us denote X/L := { x + L | x ∈ X }, that is X/L ⊆ M/L, then we obtain the following identity

⟨X/L⟩m = (⟨X⟩m + L)/L


(3.15) Definition:
Let (R,+,·) be any ring and M be an R-module. Further consider an element x ∈ M and an arbitrary subset X ⊆ M, then we introduce:

(i) We define the annihilator of x resp. of X to be the following subsets

ann(x) := { a ∈ R | ax = 0 }
ann(X) := { a ∈ R | ∀ x ∈ X : ax = 0 } = ⋂x∈X ann(x)

If we want to stress the base ring in which the annihilator is generated, we also write annR(X) instead of ann(X). It is easy to see that any annihilator ann(x) is a left-ideal in R. Finally the set of all left-ideals of R occurring as annihilators of non-zero elements of M is called the annihilator spectrum of M

sann M := { ann(x) | 0 ≠ x ∈ M }

(ii) For X ⊆ {0} we let zdR(X) := {0} and if there is some x ∈ X with x ≠ 0 then we define the set of zero-divisors of X to be

zdR(X) := { a ∈ R | ∃ 0 ≠ x ∈ X : ax = 0 } = ⋃0≠x∈X ann(x)

(iii) The complement of the zero-divisors is called the set of non-zero-divisors of X, written as nzdR(X) := R \ zdR(X). And if there is some x ∈ X with x ≠ 0 then this is given to be

nzdR(X) = { a ∈ R | ∀ 0 ≠ x ∈ X : ax ≠ 0 }

(iv) We thereby define the torsion submodule of M to be the collection of elements of M that have a non-trivial annihilator, formally that is

tor(M) := { x ∈ M | ∃ 0 ≠ a ∈ R : ax = 0 } = { x ∈ M | ann(x) ≠ 0 }

Now M is said to be a torsion module iff tor(M) = M and M is said to be faithful or torsion-free iff tor(M) = 0.

(v) A submodule P ≤m M is said to be a pure submodule of M, iff it satisfies one of the following two equivalent conditions

(a) tor(M/P) = {0 + P}

(b) for any a ∈ R with a ≠ 0 and any x ∈ M we get the implication

ax ∈ P =⇒ x ∈ P


Prob for any x ∈ M we have x + P ∈ tor(M/P) if and only if there is some a ∈ R with a ≠ 0 such that a(x + P) = 0 + P, and this again is ax ∈ P. Hence M/P is torsion-free if and only if ax ∈ P implies x + P = 0 + P, which in turn is x ∈ P.
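The torsion submodule of (3.15)(iv) can be computed by brute force in a small example of my own choosing: in the Z-module M = Z × Z/4 an element (m, b) is torsion precisely when m = 0, since a·(m, b) = (am, ab mod 4) forces am = 0 for the killing scalar a ≠ 0, while a = 4 then annihilates any b:

```python
# Finite probe: the torsion submodule of the Z-module Z x Z/4 is 0 x Z/4.
def is_torsion(m, b):
    """True iff some non-zero scalar a kills (m, b); a = 4 already kills
    every b in Z/4, so testing a = 1..8 suffices for this example."""
    return any(a * m == 0 and (a * b) % 4 == 0 for a in range(1, 9))

M_window = [(m, b) for m in range(-5, 6) for b in range(4)]
torsion = {x for x in M_window if is_torsion(*x)}
assert torsion == {(0, b) for b in range(4)}
```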

(3.16) Proposition: (viz. 328)
Let (R,+,·) be any ring, M be an R-module and X ⊆ M be an arbitrary subset of M. Then we obtain the following statements:

(i) The annihilator of an arbitrary set X is a left-ideal of R, the annihilator of an R-module M even is an ideal of R, formally that is

ann(X) ≤m R and ann(M) ⊴i R

(ii) (♦) Let x ∈ M be any element of the R-module M and recall that the R-submodule of M generated by x is given to be Rx = { ax | a ∈ R }. Then we obtain the following isomorphy of R-modules

R/ann(x) ≅m Rx : b + ann(x) ↦ bx

(iii) If a ⊴i R is an ideal contained in the annihilator ideal a ⊆ annR(M) of M, then M can be turned into an (R/a)-module using the following scalar multiplication (where a ∈ R and x ∈ M)

(a + a)x := ax

(iv) Let R ≠ 0 be a non-zero integral domain, then the torsion submodule tor(M) truly is an R-submodule of M, formally that is

tor(M) ≤m M

and the quotient module of M modulo tor(M) is torsion-free, formally

tor(M/tor(M)) = 0

(v) If R ≠ 0 is not the zero-ring, then the following statements are equivalent (recall that in this case M was said to be torsion-free)

(a) tor(M) = 0

(b) zdR(M) ⊆ {0}

(c) ∀ a ∈ R, ∀ x ∈ M we get: ax = 0 =⇒ a = 0 or x = 0

(vi) Suppose R is a commutative ring, then the annihilator of X equals the annihilator of the R-submodule generated by X, formally that is

ann(X) = ann(⟨X⟩m)

Consequently if { xi | i ∈ I } ⊆ M is a set of generators of the R-module M = lh{ xi | i ∈ I }, then the annihilator of M is given to be

ann(M) = ⋂i∈I ann(xi)

(♦) And the R-module homomorphism ϕ : R → M^I defined by ϕ(a) := (axi) has the kernel kn(ϕ) = ann(M). In particular we obtain an embedding (i.e. an injective R-module homomorphism)

R/ann(M) ↪ M^I : a + ann(M) ↦ (axi)

(vii) Let (S,+,·) be a skew-field and M be an S-module; if now x ∈ M is any element of M, then the annihilator of x is given to be

ann(x) = S if x = 0, and ann(x) = 0 if x ≠ 0

In particular M is torsion-free, and if x ≠ 0 then S ≅ Sx : a ↦ ax.

(viii) Let (R,+,·) be a commutative ring, then maximal, proper annihilator ideals of (the R-module) M are prime ideals of R, formally that is

(sann M \ {R})* ⊆ spec R

(ix) (♦) If (R,+,·) is a commutative ring and M is a simple, non-zero R-module, then the annihilator of M is a maximal ideal of R, formally

ann(M) ∈ smaxR

(x) If M ≠ 0 is a finitely generated, non-zero, torsion-free (tor(M) = 0) R-module, then M contains a maximal pure submodule, formally

{ P ≤m M | P ≠ M, P is pure }* ≠ ∅
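Part (ii) of (3.16) has a hands-on instance (my own example, not from the text): in the Z-module M = Z/6 with x = 2 we get ann(x) = 3Z, and b + 3Z ↦ bx is a bijection Z/3Z → Zx = {0, 2, 4}:

```python
# Probe of (3.16)(ii): R/ann(x) ≅ Rx for R = Z, M = Z/6, x = 2.
n = 6
x = 2

# ann(x) within one period 0..5; it is generated by 3, i.e. ann(x) = 3Z
ann_window = {a for a in range(n) if (a * x) % n == 0}
assert ann_window == {0, 3}

Zx = {(b * x) % n for b in range(n)}
assert Zx == {0, 2, 4}

# b + 3Z -> bx is well-defined ((b+3)x - bx = 6 ≡ 0 mod 6) and injective
images = {b: (b * x) % n for b in range(3)}
assert len(set(images.values())) == 3  # injective, hence bijective onto Zx
```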

(3.17) Proposition: (viz. 331) Modular Rule
Let (R,+,·) be an arbitrary ring, M be an R-module and P, Q and U ≤m M be R-submodules of M such that P ⊆ Q. Then we obtain

(Q ∩ U) + P = Q ∩ (P + U)

P ∩ U = Q ∩ U and P + U = Q + U =⇒ P = Q
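The modular rule can be verified numerically in a finite module; here a toy instance of my own choosing, M = Z/24 with P = ⟨8⟩, Q = ⟨2⟩, U = ⟨6⟩, so that the hypothesis P ⊆ Q holds:

```python
# Numeric check of the modular rule (3.17) in the Z-module M = Z/24.
n = 24

def gen(g):
    """Cyclic submodule of Z/24 generated by g."""
    return {(g * k) % n for k in range(n)}

def msum(A, B):
    """Sum of submodules A + B inside Z/24."""
    return {(a + b) % n for a in A for b in B}

P, Q, U = gen(8), gen(2), gen(6)
assert P <= Q  # the hypothesis P ⊆ Q of (3.17)
assert msum(Q & U, P) == Q & msum(P, U)  # (Q ∩ U) + P = Q ∩ (P + U)
```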


3.3 Direct Sums

(3.18) Definition:
Let (R,+,·) be an arbitrary ring and consider an R-module M. If we are given finitely many submodules P1, . . . , Pn ≤m M of M, then we define their sum to be the set of all sums of elements of the Pi

P1 + · · · + Pn := { x1 + · · · + xn | xi ∈ Pi }

If we are given an arbitrary collection of R-submodules Pi ≤m M, where i ∈ I is any index set, then we generalize the finite case directly by defining the sum of the Pi to be the set of all finite sums within the Pi

∑i∈I Pi := { x1 + · · · + xn | n ∈ N, i(1), . . . , i(n) ∈ I, xk ∈ Pi(k) }

By definition this set consists of finite sums of elements x1 + · · · + xn where xk ∈ Pi(k) for some i(k) ∈ I. Thus if we denote the collection of finite subsets of I by I := { Ω ⊆ I | #Ω < ∞ }, then we may present a slightly more formal description of this set

∑i∈I Pi = { ∑i∈Ω xi | Ω ∈ I, xi ∈ Pi }

(3.19) Remark:

(i) Suppose we are given a collection of elements (xi) ⊆ M such that for any i ∈ I we have xi ∈ Pi. More importantly suppose that among these xi only finitely many are non-zero, that is the set Ω := { i ∈ I | xi ≠ 0 } is finite. As in (1.25) we then denote

∑i∈I xi := ∑i∈Ω xi

That is: if only finitely many of the xi are non-zero, then there is a well-defined notion of the sum of these elements - just leave out the zeros and sum up these (finitely many) non-zero elements.

(ii) It is clear that P1 + · · · + Pn and - more generally - ∑i Pi are submodules of M. Prob given any x = ∑i xi and y = ∑i yi ∈ ∑i Pi their sum is x + y = ∑i (xi + yi) and as Pi is a submodule this means xi + yi ∈ Pi and hence x + y ∈ ∑i Pi again. Likewise for any a ∈ R we have ax = ∑i (axi) ∈ ∑i Pi, as also axi ∈ Pi.

(iii) In fact, in the light of proposition (3.11) this is just the R-submodule of M generated by the Pi, i.e. P1 + · · · + Pn = ⟨P1 ∪ · · · ∪ Pn⟩m ≤m M. More generally: given an arbitrary collection of R-submodules Pi ≤m M, where i ∈ I is any index set, the sum of the Pi is just the submodule generated by the union of all the Pi

∑i∈I Pi = ⟨⋃i∈I Pi⟩m


(iv) In particular, if x ∈ M is any single element, then the submodule of M generated by x is ⟨x⟩m = lhR(x) = Rx := { ax | a ∈ R }. Hence for any x1, . . . , xn ∈ M the R-linear hull of these elements is just

lhR{ x1, . . . , xn } = Rx1 + · · · + Rxn

More generally, as both the R-linear hull and the sum of submodules are based on finite sums only, it is clear that for an arbitrary collection of elements (xi) ⊆ M the linear hull of the xi is just the sum of the submodules generated by the xi, formally that is

lhR{ xi | i ∈ I } = ∑i∈I Rxi

(v) If we are given a chain (Pi) of submodules, that is Pi ≤m M is a submodule for any i ∈ I such that for any i, j ∈ I we have Pi ⊆ Pj or Pj ⊆ Pi, then even the union of the Pi itself (not only their sum) is a submodule of M

⋃i∈I Pi ≤m M

Prob consider a ∈ R and x, y ∈ P where we abbreviate P = ⋃i Pi. That is x ∈ Pi and y ∈ Pj for some i resp. j ∈ I. Without loss of generality we may assume Pi ⊆ Pj and hence x ∈ Pj as well. Then as Pi and Pj ≤m M are submodules we clearly have ax ∈ Pi and x + y ∈ Pj such that ax and x + y ∈ P again. That is P truly is a submodule of M.

(3.20) Example:
Sadly the summation and intersection of modules do not fit together nicely. This is to say: if we are given submodules P and Qi ≤m M of an R-module M, then in general we only get

P + ⋂i∈I Qi ⊆ ⋂i∈I (P + Qi)

Yet the equality of these sets need not hold true. As an example we regard the ring R := Z and pick up the Z-module M := Z again. If now P := 2Z and for any i ∈ I = { i ∈ N | i ≥ 3, i prime } we let Qi := iZ, then it is clear that P + Qi = M for any i ∈ I [as 2Z + iZ = gcd(2, i)Z = Z due to the lemma (2.68) of Bezout].

However if x ∈ Qi for every i ∈ I, then i | x for every i ∈ I, such that x would have infinitely many prime divisors. This forces x = 0 and hence the intersection ⋂i Qi = 0. Therefore we only have

P + ⋂i∈I Qi = 2Z ⊂ Z = ⋂i∈I (P + Qi)
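The inclusion of (3.20) can already be strict in a finite module, which makes it checkable by enumeration. Here a check in M = Z/2 × Z/2 (my own example; the one in the text needs infinitely many primes):

```python
# Strictness of P + ∩Qi ⊆ ∩(P + Qi) in the Klein four-group Z/2 x Z/2.
def span(v):
    """Cyclic submodule of (Z/2)^2 generated by v."""
    return {(0, 0), v}

def msum(A, B):
    """Sum of submodules inside (Z/2)^2."""
    return {((a0 + b0) % 2, (a1 + b1) % 2) for (a0, a1) in A for (b0, b1) in B}

P, Q1, Q2 = span((1, 1)), span((1, 0)), span((0, 1))
lhs = msum(P, Q1 & Q2)           # P + (Q1 ∩ Q2) = P + 0 = P
rhs = msum(P, Q1) & msum(P, Q2)  # = M ∩ M = M
assert lhs == {(0, 0), (1, 1)}
assert rhs == {(0, 0), (0, 1), (1, 0), (1, 1)}
assert lhs < rhs  # strict inclusion
```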


(3.21) Definition: (viz. 348)

• Let (R,+,·) be any ring, I ≠ ∅ be an arbitrary index set and for any i ∈ I let Mi (more explicitly (Mi,+,·)) be an R-module. Then we regard the Cartesian product of the Mi

M := ∏i∈I Mi

This can be turned into another R-module - called the (exterior) direct product of the Mi - by installing the following (pointwise) algebraic operations (where x = (xi), y = (yi) ∈ M and i runs in I)

+ : M × M → M : (x, y) ↦ (xi + yi)
· : R × M → M : (a, x) ↦ (axi)

Note that xi + yi is the addition of vectors in Mi and axi is the scalar multiplication in Mi and hence these operations are well-defined. It is easy to see that M thereby inherits the properties of an R-module.

• We have just introduced the direct product M of the Mi as an R-module. Let us now define a certain R-submodule of M, called the (exterior) direct sum of the Mi. To do this let us first denote the support of x = (xi) ∈ M to be supp(x) := { i ∈ I | xi ≠ 0 ∈ Mi } ⊆ I, i.e. the set of indices i with non-vanishing xi. Then we let

⊕i∈I Mi := { x ∈ ∏i∈I Mi | #supp(x) < ∞ }

That is ⊕i Mi consists of all the x = (xi) ∈ ∏i Mi that contain finitely many non-zero coefficients xi ≠ 0 only. In particular the direct sum coincides with the direct product if and only if I is finite

⊕i∈I Mi = ∏i∈I Mi ⇐⇒ #I < ∞

And ⊕i Mi becomes an R-module again under the same algebraic operations that we have introduced for the direct product ∏i Mi. Formally that is: the direct sum is an R-submodule of the direct product

⊕i∈I Mi ≤m ∏i∈I Mi

• (♦) Let now N be another R-module and for any i ∈ I consider an R-module homomorphism ϕi : Mi → N. Then we obtain a well-defined R-module-homomorphism from the direct sum to N by virtue of

⊕i∈I ϕi : ⊕i∈I Mi → N : (xi) ↦ ∑i∈I ϕi(xi)

This is well-defined, as only finitely many xi ∈ Mi are non-zero (as (xi) is contained in the direct sum of the Mi) and hence the sum only contains finitely many non-zero summands. And the properties of an R-module-homomorphism are trivially inherited from the ϕi.

• Now consider any R-module M and for any i ∈ I an R-submodule Pi ≤m M. And for any j ∈ I let us denote by P̂j the R-submodule generated by all Pi except Pj, formally that is

P̂j := ∑i≠j Pi = ⟨⋃i∈I\{j} Pi⟩m

Then M is said to be the (inner) direct sum of the Pi iff the Pi satisfy one of the following two equivalent statements concerning M:

(a) The Pi build up M but for any index j ∈ I the intersection of Pj and its complement P̂j is trivial, formally that is

M = ∑i∈I Pi and ∀ j ∈ I : Pj ∩ P̂j = 0

(b) Any x ∈ M has a uniquely determined representation as a (finite) sum of elements of the Pi, formally that is

∀ x ∈ M ∃! (xi) ∈ ⊕i∈I Pi : x = ∑i∈I xi

And in case that M is the inner direct sum of the Pi we will write (beware the similarity to the exterior direct sum above)

M = ⊕i∈I Pi

• Let M be any R-module and P ≤m M an R-submodule again. Then P is said to be complemented or a direct summand of M, iff there is another submodule P̂ of M, such that M is the inner direct sum of P and P̂. Formally that is: ∃ P̂ ≤m M such that

(1) M = P + P̂

(2) P ∩ P̂ = 0

(3.22) Remark: (viz. 350)
Recall that in Cartesian products (and the direct product has been defined as such) for any j ∈ I there is a canonical projection πj of the product ∏i Mi onto Mj. Thereby the projection πj is defined to be

πj : ∏i∈I Mi → Mj : (xi) ↦ xj

But in contrast to arbitrary Cartesian products we also find (for any j ∈ I) a canonical embedding ιj into the direct sum (and product), by letting

ιj : Mj → ⊕i∈I Mi : xj ↦ (δi,j xj)

where δi,j xj := xj for i = j and δi,j xj := 0 ∈ Mi for i ≠ j. Then it is clear that πj ιj = 11 is the identity on Mj and in particular πj is surjective and ιj is injective. Further these maps are tightly interwoven with the structure of the direct product and sum respectively. To be precise, consider any x = (xi) ∈ ∏i Mi, then we get

x = (πi(x))

On the other hand for any x = (xi) ∈ ⊕i Mi we obtain another identity (note that the sum thereby is well-defined, as only finitely many xi are non-zero, such that in truth the sum is a finite one only)

x = ∑i∈I ιi(xi)

(♦) By definition of the operations on the direct product/sum it is also clear that both projection and embedding are homomorphisms of R-modules.

(3.23) Example:
Consider any ring (R,+,·), the R-module M := R^2 and the submodules P1 := R(1,0) = { (a,0) | a ∈ R } and P2 := R(0,1) = { (0,b) | b ∈ R }. Then clearly M is the inner direct sum of P1 and P2

M = P1 ⊕ P2

To check whether M can be decomposed as a direct sum of some Pi it does not suffice to verify that the intersection of the Pi is pairwise trivial. In the example above let P3 := R(1,1) = { (a,a) | a ∈ R }. Then it is clear that M = P1 + P2 + P3 and Pi ∩ Pj = 0 for any i ≠ j ∈ {1...3}, but M is not the inner direct sum of P1, P2 and P3. In fact x = (1,1) ∈ M has a non-unique representation as x = (1,0) + (0,1) ∈ P1 + P2 and x = (1,1) ∈ P3.
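The failure in (3.23) can be counted by brute force once we take R to be the finite ring Z/2 (a stand-in of my own choosing, so that the modules become finite sets): the three lines still have pairwise trivial intersection, yet (1,1) admits two distinct representations p1 + p2 + p3:

```python
# Counting representations x = p1 + p2 + p3 in (Z/2)^2 for the lines of (3.23).
def line(v):
    """The cyclic submodule {0, v} of (Z/2)^2."""
    return {(0, 0), v}

P1, P2, P3 = line((1, 0)), line((0, 1)), line((1, 1))
assert P1 & P2 == P1 & P3 == P2 & P3 == {(0, 0)}  # pairwise trivial

x = (1, 1)
reps = [(p1, p2, p3)
        for p1 in P1 for p2 in P2 for p3 in P3
        if ((p1[0] + p2[0] + p3[0]) % 2, (p1[1] + p2[1] + p3[1]) % 2) == x]
# exactly two: (1,0)+(0,1)+(0,0) and (0,0)+(0,0)+(1,1)
assert len(reps) == 2
```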

(3.24) Example:
The most common example of a direct sum is a simple multiple of some module: to be precise, let (R,+,·) be some ring and consider an R-module M. Then for any non-empty set I we denote the product (resp. sum) of a copy of M for any i ∈ I by

M^I := ∏i∈I M = { x : I → M : i ↦ xi }
M^⊕I := ⊕i∈I M = { (xi) ∈ M^I | { i ∈ I | xi ≠ 0 } is finite }


The most important example is the case M = R, i.e. R^⊕I consists of I identical copies of R. By definition the operations on M^I (and M^⊕I) are component-wise, that is for any (xi), (yi) ∈ M^I and any a ∈ R we have

(xi) + (yi) := (xi + yi)
a(xi) := (axi)

A special case of this is M^n - here we take n identical copies of M, where 1 ≤ n ∈ N is any natural number. In the formal terms above this can be put as M^I, where the index set is I = {1 . . . n}. As I is finite there is no difference between the direct sum and the product

M^n := M^{1...n} = ∏i=1..n M = ⊕i=1..n M

(3.25) Remark:
There is a reason why we didn't bother to strictly separate exterior and inner direct products. The reason simply is: there is not much of a difference, any inner direct product already is an exterior direct product (up to isomorphy) and any exterior direct product can be reformulated as an interior direct product. We will give a precise formulation of this fact in the proposition below. Thus the distinction between inner and exterior is a purely set-theoretic one, it doesn't affect the algebra. This is why we do not bother that our formalism doesn't really distinguish between the interior and exterior direct sums - the difference is no structural one. (In general group theory there will be a difference.)

(3.26) Proposition: (viz. 350)

(i) (♦) Let (R,+,·) be any ring, M be an R-module and Pi ≤m M be an arbitrary (i ∈ I) collection of R-submodules of M. If now M is the inner direct sum M = ⊕i Pi of the Pi, then M is already isomorphic (as an R-module) to the exterior direct sum of the Pi under

⊕i∈I Pi ≅m M : (xi) ↦ ∑i∈I xi

(ii) Let (R,+,·) be any ring and Mi an arbitrary (i ∈ I) collection of R-modules. Now let Pi := ιi(Mi) ≤m ⊕i Mi, then the exterior direct sum of the Mi is just the interior direct sum of the Pi, put formally

⊕i∈I Mi = ⊕i∈I Pi


(3.27) Proposition: (viz. 350) (♦)
Let (R,+,·) be an arbitrary ring and let I and J be arbitrary index sets. For any i ∈ I resp. j ∈ J let Mi and Nj be R-modules. Then we obtain the following isomorphy of R-modules

mhom(⊕i∈I Mi, ∏j∈J Nj) ≅m ∏(i,j)∈I×J mhom(Mi, Nj)

We can even give the above isomorphism explicitly: let us denote the direct sum of the Mi by M := ⊕i Mi and the direct product of the Nj by N := ∏j Nj. Further denote the canonical embedding ιi : Mi → M and the canonical projection πj : N → Nj. Then the isomorphy explicitly reads as

mhom(M, N) ≅m ∏(i,j)∈I×J mhom(Mi, Nj)
ϕ ↦ (πj ϕ ιi)
(⊕i∈I ϕi,j) ←[ (ϕi,j)

Let us explain this compact notation a bit: for any (I × J)-tuple of homomorphisms ϕi,j : Mi → Nj the corresponding homomorphism ϕ := (⊕i ϕi,j) is explicitly given to be the following: ϕ : M → N : (xi) ↦ (∑i ϕi,j(xi))j.

(3.28) Remark:
Suppose we even have (ϕi,j) ∈ ∏i ⊕j mhom(Mi, Nj), that is for any i ∈ I the set { j ∈ J | ϕi,j ≠ 0 } is finite. Then the induced map ϕ even lies in (⊕i ϕi,j) ∈ mhom(⊕i Mi, ⊕j Nj). The converse need not be true however; as an example regard I = {0}, J = N, M = R^⊕N and Nj = R. As a homomorphism ϕ we choose the identity ϕ = 11 : R^⊕N → R^⊕N. Then ϕ induces the tuple 11 ↦ (πj), which has no finite support.


3.4 Ideal-induced Modules

In this short section we will introduce the concept of the submodule aM generated by an ideal a of R. This is a way of passing on from a module M over a ring R to a related module over the quotient ring R/a. So as always with going to quotients, one tries to single out unnecessary complications, solve the problem in the reduced case and later lift it up again, solving the original problem. This is particularly useful in case the quotient ring R/m is a field, i.e. m is a maximal ideal.

While the construction is elementary it yields some very useful results - primarily the lemma of Nakayama. Though these results are of little value in their own right, they are powerful tools in providing proofs of more substantial theorems. Hence one might as well skip this section and postpone it, until one encounters the urge to pass on using a quotient construction.

(3.29) Definition:
Consider an arbitrary ring (R,+,·), an ideal a ⊴i R and an R-module M. Let further P ≤m M be a submodule of M, then we denote the submodule of M generated by elements of the form ap, where a ∈ a and p ∈ P, by

aP := ⟨{ ap | a ∈ a, p ∈ P }⟩m

(3.30) Proposition: (viz. 332)
Let (R,+,·) be an arbitrary ring, a ⊴i R an ideal and consider a submodule P ≤m M of the R-module M. Then we get:

(i) The submodule aP of M can be given explicitly to be the following set

aP = { a1p1 + · · · + anpn | 1 ≤ n ∈ N, ai ∈ a, pi ∈ P }

(ii) If P is generated by { pi | i ∈ I } ⊆ P, then for any x ∈ aP there are some a1, . . . , an ∈ a and p1, . . . , pn ∈ { pi | i ∈ I } such that

x = a1p1 + · · · + anpn

(iii) It is clear that aP is a submodule of P and hence we obtain the following chain of inclusions

aP ≤m P ≤m M

(iv) The assignment P ↦ aP is order preserving, that is given any two submodules P and Q ≤m M we obtain the implication

P ⊆ Q =⇒ aP ⊆ aQ


(v) Going to residues modulo P is compatible with inducing modules by ideals, that is given P ≤m M and a ⊴i R we get the equality

a(M/P) = (P + aM)/P

(vi) The quotient module M/aM becomes an (R/a)-module under the following operations: for any a ∈ R and x, y ∈ M we let

(x + aM) + (y + aM) := (x + y) + aM
(a + a)(x + aM) := (ax) + aM

(vii) (♦) If M and N are R-modules and ϕ : M → N is an R-module-homomorphism, then we obtain a well-defined (R/a)-module-homomorphism, by letting

ϕ/a : M/aM → N/aN : x + aM ↦ ϕ(x) + aN

If ϕ is surjective or bijective, then so is the induced homomorphism ϕ/a. That is if ? abbreviates any of the words surjective or bijective then we obtain the implication

ϕ is ? =⇒ ϕ/a is ?

Nota the injectivity of ϕ need not be preserved. E.g. consider R = Z and a = 3Z. Then Z → Q : k ↦ k clearly is injective, but aQ = Q and hence the induced map Z/aZ → Q/aQ cannot be injective.

(viii) Thereby the linear hull of any subset Z ⊆ M/aM of the quotient module does not depend on whether we take R or R/a as a base ring

lhR(Z) = lhR/a(Z)

(ix) Hence a subset Q ⊆ M/aM of the quotient module is an R-submodule of M/aM if and only if it is an (R/a)-submodule of M/aM. Therefore the (R/a)-submodules of M/aM are given to be

submR/a(M/aM) = { P/aM | P ≤m M, aM ⊆ P }

(3.31) Example: (viz. 371)
Let (R,+,·) be a commutative ring and fix any ideal a ⊴i R. Then for any non-empty set I, the induced submodule aR^⊕I is nothing but a^⊕I

aR^⊕I = a^⊕I

And thereby we obtain an isomorphy of the (R/a)-modules R^⊕I/aR^⊕I and (R/a)^⊕I which can be given explicitly, to be the following

(R/a)^⊕I ≅ R^⊕I/aR^⊕I : (bi + a) ↦ (bi) + aR^⊕I
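A toy instance of the constructions (3.29)-(3.31) with data of my own choosing: R = Z, M = Z/8 and a = 2Z. Then aM = {0, 2, 4, 6}, the quotient M/aM has exactly two cosets, and the scalar action of R/a = Z/2 from (3.30)(vi) is independent of the chosen representative of a + 2Z:

```python
# Probe of (3.30)(vi): the (Z/2)-module structure on (Z/8)/2(Z/8).
n = 8
aM = {(2 * k) % n for k in range(n)}
assert aM == {0, 2, 4, 6}

cosets = {frozenset((x + p) % n for p in aM) for x in range(n)}
assert len(cosets) == 2  # M/aM has two elements, i.e. M/aM ≅ Z/2

# (a + 2Z)(x + aM) := ax + aM does not depend on the representative a,
# since ((a+2) - a)x = 2x lies in aM
for a in range(4):
    for x in range(n):
        lhs = frozenset((a * x + p) % n for p in aM)
        rhs = frozenset(((a + 2) * x + p) % n for p in aM)
        assert lhs == rhs
```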


(3.32) Lemma: (viz. 397) (♦)
Let (R,+,·) be a commutative ring, a ⊴i R be an ideal and consider an R-module M. Then the following statements are true:

(i) If M is a free R-module with basis { mi | i ∈ I }, then the quotient module M/aM is a free (R/a)-module with basis

{ mi + aM | i ∈ I, mi ∉ aM }

And thereby the elements of this basis are pairwise distinct. To be precise: if the mi are pairwise distinct (i.e. for any i, j ∈ I we get mi = mj =⇒ i = j) then the mi + aM are pairwise distinct too, that is: for any i, j ∈ I with mi and mj ∉ aM we get

mi + aM = mj + aM =⇒ i = j

(ii) Lemma of Nakayama:
Given any ideal a of R the following two statements are equivalent:

(a) a ⊆ jac R

(b) For any finitely generated R-module M we get the implication

aM = M =⇒ M = 0

(iii) If P ≤m M is a submodule of M, such that the quotient M/P is finitely generated, and a ⊆ jac R, then we obtain the implication

M = P + aM =⇒ P = M

(iv) If N is another R-module and ϕ : M → N is an R-module-homomorphism such that N/im ϕ is finitely generated, and again a ⊆ jac R, then we get the following equivalence

ϕ is surjective ⇐⇒ ϕ/a is surjective

(v) If M is finitely generated, ϕ : M → M even is an R-module-endomorphism, and again a ⊆ jac R, then we get the following equivalence

ϕ is bijective ⇐⇒ ϕ/a is bijective

(vi) Lemma of Dedekind:
If M is a finitely generated R-module such that aM = M, then there is some element a ∈ a satisfying

(1 − a)M = 0
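The Lemma of Dedekind can be witnessed in a small instance with data chosen by me: R = Z, M = Z/5 and a = 2Z. Since 2 is invertible mod 5 we have aM = M, and indeed a = 6 ∈ 2Z satisfies (1 − 6)M = −5M = 0 in Z/5:

```python
# A small witness for the Lemma of Dedekind (3.32)(vi): R = Z, M = Z/5, a = 2Z.
n = 5
M = set(range(n))
aM = {(2 * x) % n for x in M}
assert aM == M  # aM = M, the hypothesis of the lemma

# search the even numbers for some a in 2Z with (1 - a)M = 0 in Z/5
a = next(a for a in range(0, 100, 2)
         if all(((1 - a) * x) % n == 0 for x in M))
assert a == 6 and a % 2 == 0  # (1 - 6) = -5 annihilates Z/5
```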


(3.33) Proposition: (viz. 402)
Let (R,+,·) be a commutative ring and consider an R-module M. Then we obtain all of the following statements:

(i) Suppose that M is finitely generated, that is there are some x1, . . . , xn ∈ M such that M = lhR{ x1, . . . , xn }. Then we get the implication

(∀ m ∈ smax R : M = mM) =⇒ M = 0

(ii) Suppose P ≤m M is a cofinite submodule, that is there are some x1, . . . , xn ∈ M such that M = P + lhR{ x1, . . . , xn }. Then we get the implication

(∀ m ∈ smax R : M = P + mM) =⇒ P = M

(iii) If N is another R-module and ϕ : M → N is an R-module-homomorphism such that im ϕ ≤m N is cofinite, then we get the equivalence

ϕ is surjective ⇐⇒ ∀ m ∈ smax R : ϕ/m is surjective

(iv) If M is finitely generated and ϕ : M → M is an R-module-endomorphism of M, then we get the following equivalence

ϕ is bijective ⇐⇒ ∀ m ∈ smax R : ϕ/m is bijective


3.5 Block Decompositions

(Sketch: direct sums and products of algebras; block decomposition of algebras. Compare the author's lecture notes on representation theory.)

Proposition: if e1, . . . , en ∈ A satisfy e1 + · · · + en = 1 and eiej = δi,jei (a decomposition of 1 into pairwise orthogonal idempotents), then

A = Ae1 ⊕ · · · ⊕ Aen

Proof sketch: (existence) any a ∈ A can be written as a = a · 1 = a(e1 + · · · + en) = ae1 + · · · + aen with aei ∈ Aei. (uniqueness) if a1e1 + · · · + anen = 0, then for any j we get 0 = (a1e1 + · · · + anen)ej = ajej , so every component vanishes.

Example: Rn with the Euclidean basis e1, e2, . . . , en (under the component-wise multiplication) - matn(R) does not work this way!
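The decomposition by orthogonal idempotents sketched above can be checked numerically in a small commutative example. The following Python sketch (an illustration, not part of the text) takes A = Z³ with component-wise multiplication and the Euclidean vectors as the orthogonal idempotents:

```python
n = 3

def mul(x, y):
    # component-wise product in A = Z^n
    return tuple(a * b for a, b in zip(x, y))

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

one = (1,) * n
e = [tuple(1 if i == j else 0 for i in range(n)) for j in range(n)]

# e_1 + ... + e_n = 1 and e_i e_j = delta_ij e_i (orthogonal idempotents)
total = e[0]
for ei in e[1:]:
    total = add(total, ei)
assert total == one
for i in range(n):
    for j in range(n):
        assert mul(e[i], e[j]) == (e[i] if i == j else (0,) * n)

# existence: a = sum of its components a*e_i, each lying in A e_i
a = (5, -2, 7)
components = [mul(a, ei) for ei in e]
s = components[0]
for c in components[1:]:
    s = add(s, c)
assert s == a
```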


3.6 Dependence Relations

In algebra there are two separate notions of dependence: linear dependence and algebraic dependence. A subset S of a module is linearly dependent, if we can find a non-trivial linear equation among elements of S that evaluates to 0. A subset S of an algebra is algebraically dependent, if we can find a polynomial equation among elements of S that evaluates to 0.

Linear dependence yields the notion of a basis of a module and we will see that the module's structure is determined by its basis. The same is true with algebraic dependence: this yields the notion of a transcendence basis, which determines the structure of the algebra.

It is customary to approach these problems directly and hence separately. Yet there is the underlying concept of dependence. So we will try a slightly more abstract approach: first we will introduce the general notion of a dependence relation. Then we will prove (among other things) that any dependence relation admits a basis. This will be accomplished in this section.

Consequently it suffices to show that linear dependence (over a skew-field) is a dependence relation, respectively that algebraic dependence (over a field) is a dependence relation. This will be discussed in the sections to come. This approach features the advantage that it neatly separates set-theoretic and algebraic concepts, which yields beautiful clarity and pinpoints precisely where the (skew-)field is involved.

(3.34) Definition: Let M be an arbitrary set; in the following we will regard relations ` of the form ` ⊆ P(M)×M . And for any such relation, elements x ∈ M and subsets S, T ⊆ M we will use the following notations

S ` x :⇐⇒ (S, x) ∈ `
S 0 x :⇐⇒ ¬(S ` x)
〈S 〉 := {x ∈ M | S ` x }
sub(M) := { 〈S 〉 | S ⊆ M }
T ` S :⇐⇒ S ⊆ 〈T 〉 ⇐⇒ ∀x ∈ S : T ` x

Now ` is said to be a dependence relation on M , iff it is of the form ` ⊆ P(M)×M and for any elements x ∈ M and subsets S, T ⊆ M it satisfies the following four properties:

(D1) any element x of S already depends on S, formally this can be put as

x ∈ S =⇒ S ` x

(D2) if x depends on S and S depends on T , then x already depends on T

T ` S, S ` x =⇒ T ` x

(D3) if x depends on S, then it already depends on a finite subset Sx of S

S ` x =⇒ ∃Sx ⊆ S : #Sx <∞ and Sx ` x


(D4) if x depends on S but not on S \ { s } (for some s ∈ S), then we can interchange the dependencies of x and s, formally again

S ` x, s ∈ S, S \ { s } 0 x =⇒ ((S \ { s }) ∪ {x }) ` s

And in case that ` is a dependence relation on M we further introduce the following notions: a subset S ⊆ M is said to be independent (or more precisely `independent), iff it satisfies

∀ s ∈ S : S \ { s } 0 s

Clearly S is said to be dependent (or more precisely `dependent), iff it is not independent (that is S \ { s } ` s for some s ∈ S). And a subset B ⊆ M is said to be a basis (or more precisely a `basis) iff it is independent and generates M . Formally B is a basis, iff it satisfies

(B1) B is independent

(B2) 〈B 〉 = M
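As a concrete illustration of these notions (anticipating section 3.7, not part of the original text), take M = Q² and let ` be linear dependence, S ` x :⇐⇒ x is a linear combination of elements of S:

```latex
% In M = \mathbb{Q}^2 with S \vdash x :\iff x \in \operatorname{lh}(S):
% the set \{(1,0),(2,0)\} is dependent, since already \{(1,0)\} \vdash (2,0);
% the set B = \{(1,0),(0,1)\} is independent, and
\langle B \rangle \;=\; \operatorname{lh}\{(1,0),(0,1)\} \;=\; \mathbb{Q}^2 \;=\; M,
% so B satisfies (B1) and (B2), i.e. B is a basis in the above sense.
```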

(3.35) Lemma: (viz. 362) Let M be an arbitrary set, S, T ⊆ M be arbitrary subsets and ` be a dependence relation on M . Then we obtain all of the following statements

(i) The generation of subsets under ` preserves the inclusion of sets, i.e.

S ⊆ T =⇒ 〈S 〉 ⊆ 〈T 〉

(ii) Recall that we denoted by sub(M) the set of all 〈T 〉 for some subset T ⊆ M . Then for any subset S ⊆ M we obtain the identities

〈 〈S 〉 〉 = 〈S 〉 = ⋂{P ∈ sub(M) | S ⊆ P }

(iii) For any subset S ⊆ M the following three statements are equivalent

(a) S is `dependent

(b) ∀T ⊆ M : S ⊆ T =⇒ T is `dependent

(c) ∃S0 ⊆ S such that #S0 <∞ and S0 is `dependent

(iv) For any subset T ⊆ M the following three statements are equivalent

(a) T is `independent

(b) ∀S ⊆ M : S ⊆ T =⇒ S is `independent

(c) ∀T0 ⊆ T we get #T0 <∞ =⇒ T0 is `independent

(v) For any S ⊆ M and x ∈M the following statements are equivalent

(a) S ∪ {x } is `independent and x ∉ S

(b) S is `independent and S 0 x


(vi) Let Si ⊆ M (where i ∈ I) be a chain of `independent subsets of M , then the union of the Si is `independent again. Formally that is

(∀ i ∈ I : Si is `independent) and (∀ i, j ∈ I : Si ⊆ Sj or Sj ⊆ Si) =⇒ ⋃i∈I Si is `independent

(vii) Let S ⊆ M be an `independent subset and T ⊆ M be a generating subset, M = 〈T 〉. If now S ⊆ T , then there is some B ⊆ M such that

(1) S ⊆ B ⊆ T and

(2) B is a `basis of M

(viii) Let B ⊆ M be a `basis of M , b ∈ B and x ∈ M be arbitrary. Then we denote by B′ := (B \ { b }) ∪ {x } the set B with b replaced by x. If now B′ ` b, then B′ is a `basis of M , as well.

(ix) A subset B ⊆ M is a `basis of M if and only if it is a minimal `generating subset of M . And this again is equivalent to being a maximal `independent subset. Formally that is the equivalence of

(a) B is a `basis of M

(b) B generates M (that is M = 〈B 〉) and for any S ⊆ M we obtain

S ⊂ B =⇒ M ≠ 〈S 〉

(c) B is `independent and for any S ⊆ M we obtain the implication

B ⊂ S =⇒ S is not `independent

(x) All `bases of M have the same number of elements, formally that is

A,B ⊆ M `bases of M =⇒ |A| = |B|

(xi) If S ⊆ M is `independent and B ⊆ M is a `basis of M , then S contains no more elements than B, that is |S| ≤ |B|.

(xii) Consider an `independent subset S ⊆ M and a `basis B ⊆ M of M . Further assume that #B < ∞ is finite, then the following two statements are equivalent

(a) |S| = |B|
(b) S is a `basis of M

Nota that in the proof of this lemma we did not always require all the properties of a dependence relation. Namely (i), (ii), (iii), (iv), the implication (a) =⇒ (b) in (v), (vi) and the equivalence (a) ⇐⇒ (b) in (ix) require the relation ` to satisfy properties (D1), (D2) and (D3) only.


(3.36) Remark: In item (vii) we may choose S = ∅, as this always is `independent, respectively T = M , as always M = 〈M 〉. These special choices yield the so-called basis selection respectively basis completion theorems (the converse implications follow immediately from (i) and (ii) respectively). And as these are used frequently let us append them here. Thus suppose M is an arbitrary set and ` is a dependence relation on M again. Then we get

• Basis Selection Theorem: a subset T ⊆ M contains a `basis of M if and only if it generates M , that is, equivalent are

(a) 〈T 〉 = M

(b) ∃B ⊆ T : B is a `basis of M

• Basis Completion Theorem: a subset S ⊆ M can be extended to a `basis of M if and only if S is `independent, that is, equivalent are

(a) S is `independent

(b) ∃B ⊆ M : S ⊆ B and B is a `basis of M


3.7 Linear Dependence

The primary task of any algebraic theory is to classify the objects involved. The same is true for module theory: what types of modules exist? Surprisingly the answer will strongly depend on the module's base ring. A first complete answer can be given in the case that the base ring is a skew-field S: then all modules over S are of the form S⊕I for some index set I. (We will later also find complete answers in case the base ring is a PID or DKD. But this is how far the theory carries.)

In order to achieve this result we will have to define the notion of a basis of a module. Modules with a basis will be said to be free. However most of the work is done already - all we have to do is prove that linear dependence truly is a dependence relation, as we have introduced it in the section before.

(3.37) Definition: Let (R,+, ·) be an arbitrary ring and M be an R-module. Further consider an arbitrary subset X ⊆ M . Then we introduce the following notions

(i) A subset X ⊆ M is said to generate M as an R-module (or simply to R-generate M), iff the R-module generated by X is M . Formally that is iff (where we also recalled proposition (3.11))

M = 〈X 〉m = { a1x1 + · · ·+ anxn | 1 ≤ n ∈ N, ai ∈ R, xi ∈ X }

Now X is said to be a minimal R-generating set, iff X R-generates M but no proper subset of X R-generates M , formally iff

∀W ⊆ M : W ⊂ X =⇒ 〈W 〉m 6= M

(ii) We thereby define the rank - denoted by rank(M) - to be the minimal cardinality of an R-generating subset X of M . Formally that is

rank(M) := min{ |X| | X ⊆ M, 〈X 〉m = M }

Nota that this is well defined, as the cardinal numbers are well-ordered - that is, any non-empty set of cardinal numbers has a minimal element. Also the defining set above is non-empty, as M ⊆ M and M = 〈M 〉m. In particular we have rank(M) ≤ |M |. Hence this definition yields the following properties: if I = rank(M) then we get

(1) there is some X = {xi | i ∈ I } ⊆ M such that M = 〈X 〉m, and

(2) for any X ⊆ M with M = 〈X 〉m there is some injective map ι : I → X from I into X.

(iii) M is said to be finitely generated, iff it is R-generated by some finite subset X ⊆ M . That is iff it satisfies one of the equivalent conditions

(a) rank(M) < ∞
(b) ∃X ⊆ M : #X < ∞ and M = 〈X 〉m


(iv) A submodule P ≤m M is said to be cofinite (in M), iff it satisfies one of the following two equivalent conditions

(a) M/P is finitely generated

(b) ∃X ⊆ M : #X <∞ and M = P + 〈X 〉m

Nota if X = {x1, . . . , xn } is such that M = P + 〈X 〉m, then it is clear that M/P is generated by the set {xi + P | i ∈ 1 . . . n }. Vice versa, if {xi + P | i ∈ 1 . . . n } generates M/P , then we get M = P + 〈X 〉m, by taking X = {x1, . . . , xn }. Let us prove this: given any x ∈ M there are ai ∈ R such that x + P = a1x1 + · · ·+ anxn + P and hence x = p + a1x1 + · · ·+ anxn ∈ P + 〈X 〉m for some p ∈ P .

(v) Let us now introduce a relation of the form ` ⊆ P(M)×M called the relation of R-linear dependence. For any subset X ⊆ M and any element x ∈ M let us define

X ` x :⇐⇒ ∃n ∈ N, ∃x1, . . . , xn ∈ X, ∃ a1, . . . , an ∈ R such that x = a1x1 + · · ·+ anxn

Nota that we allowed n = 0 in this definition. And by convention the empty sum is set to be 0. Thus we get X ` 0 for any subset X ⊆ M . In fact we even get ∅ ` x ⇐⇒ x = 0 for any x ∈ M .

(vi) A subset X ⊆ M is said to be R-linearly independent iff for any 1 ≤ n ∈ N, any x1, . . . , xn ∈ X and any a1, . . . , an ∈ R we get

(a1x1 + · · ·+ anxn = 0 and x1, . . . , xn pairwise distinct) =⇒ a1 = · · · = an = 0

If this is not the case (i.e. there are some a1, . . . , an ∈ R \ { 0 } and pairwise distinct x1, . . . , xn ∈ X such that a1x1 + · · ·+ anxn = 0), then X is said to be R-linearly dependent. Finally X is said to be a maximal R-linearly independent set, iff X is R-linearly independent, but any proper superset of X is R-linearly dependent. Formally

∀Y ⊆ M : X ⊂ Y =⇒ Y is R-linearly dependent

(vii) A subset B ⊆ M is said to be an R-basis of M , iff it is R-linearly independent (that is (vi)) and R-generates M (that is (i)). And M is said to be free, iff it has a basis. That is

M free :⇐⇒ ∃B ⊆ M : B is an R-basis of M

(viii) Finally an ordered tuple (xi) ⊆ M (where i ∈ I) is said to be an ordered R-basis of M , iff it satisfies the following two properties

(1) {xi | i ∈ I } is an R-basis of M

(2) ∀ i, j ∈ I : xi = xj =⇒ i = j


(3.38) Example: (viz. 366)

(i) A very special case in the above definitions is the zero-module { 0 }. It is a free module, as ∅ ⊆ { 0 } is an R-basis. And thereby for an arbitrary R-module M we obtain

rank(M) = 0 ⇐⇒ M = { 0 }

(ii) From the above definitions it is clear, that M is finitely generated if and only if the zero-submodule { 0 } ≤m M is cofinite (in M). Also, if P and Q ≤m M are submodules of M such that P is cofinite and P ⊆ Q, then Q is cofinite as well.

(iii) Consider an integral domain (R,+, ·) and an R-submodule (that is an ideal) a ≤m R. Then a is a free R-module, if and only if it is principal

a free ⇐⇒ ∃ a ∈ R : a = aR

(iv) The standard example of a free module is the following: let (R,+, ·) be an arbitrary ring and I ≠ ∅ be any non-empty set. Then we consider

R⊕I = ⊕i∈I R = { (xi) ∈ RI | #{ i ∈ I | xi ≠ 0 } < ∞ }

Recall that this is an R-module under the component-wise operations (xi) + (yi) := (xi + yi) and a(xi) := (axi) - where a, xi and yi ∈ R. For any j ∈ I let us now denote the j-th Euclidean vector

ej := (δi,j)i∈I ∈ R⊕I

That is the i-th component of ej is 1 if and only if i = j and 0 otherwise. Then R⊕I is a free R-module having the Euclidean basis

E = { ej | j ∈ I }

(♦) In particular - if (S,+, ·) is a skew-field - we find that for any set I ≠ ∅ the S-vector-space S⊕I is of dimension

dim(S⊕I) = |I|
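The free module R⊕I can be modelled directly. The following Python sketch (an illustration, not part of the text; R = Z) represents elements as finite-support dictionaries, so the index set I may be arbitrary while every element still has only finitely many non-zero components:

```python
# Elements of R^{(+)I} as finite-support dicts {index: coefficient}, R = Z.
def add(x, y):
    out = dict(x)
    for i, c in y.items():
        out[i] = out.get(i, 0) + c
    return {i: c for i, c in out.items() if c != 0}

def smul(a, x):
    # scalar multiplication a(x_i) := (a x_i), dropping zero components
    return {i: a * c for i, c in x.items() if a * c != 0}

def e(j):
    # j-th Euclidean vector e_j = (delta_ij)
    return {j: 1}

# x = 3*e_0 + (-1)*e_5 has finite support even if I were infinite
x = add(smul(3, e(0)), smul(-1, e(5)))
assert x == {0: 3, 5: -1}
```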

(3.39) Proposition: (viz. 366) Let (R,+, ·) be an arbitrary ring and M be an R-module. Let us denote the relation of R-linear dependence by ` again (refer to (3.37.(v))). Then the following statements hold true

(i) The relation ` satisfies the properties (D1), (D2) and (D3) of a dependence relation (refer to (3.34) for the explicit definitions).

(ii) If X ⊆ M is any subset and we denote 〈X 〉 := { y ∈ M | X ` y } again, then we obtain the following identities

〈X 〉 = 〈X 〉m = lhR(X)


(iii) For any subset X ⊆ M the following three statements are equivalent

(a) X is `dependent

(b) ∀Y ⊆ M : X ⊆ Y =⇒ Y is `dependent

(c) ∃X0 ⊆ X such that #X0 <∞ and X0 is `dependent

(iv) For any subset Y ⊆ M the following three statements are equivalent

(a) Y is `independent

(b) ∀X ⊆ M : X ⊆ Y =⇒ X is `independent

(c) ∀Y0 ⊆ Y we get #Y0 <∞ =⇒ Y0 is `independent

(v) A subset B ⊆ M is a `basis of M if and only if it is a minimal `generating subset of M . Formally that is the equivalence of

(a) B is a `basis of M

(b) M = 〈B 〉 and for any S ⊆ M : S ⊂ B =⇒ M ≠ 〈S 〉

(vi) A subset B ⊆ M is an R-basis of M if and only if any x ∈ M has a unique representation as an R-linear combination over B. Formally that is the equivalence of

(a) B is an R-basis of M

(b) for any element x ∈ M there is a uniquely determined tuple (xb) ∈ R⊕B such that x can be represented in the form

x = ∑b∈B xbb

Nota that the sum occurring in (b) is well-defined, as only finitely many of the xb are non-zero. Further note that in fact the existence of such a representation is equivalent to M = 〈B 〉m and the uniqueness of the representation is equivalent to the R-linear independence of B.

(3.40) Proposition: (viz. 367) Let (S,+, ·) be a skew-field and M be an S-module. Let us denote the relation of S-linear dependence by ` again (refer to (3.37.(v))). Then we obtain the following statements in addition to (3.39)

(i) ` is a dependence relation (in the sense of (3.34)), that even satisfies the following property (for any X ⊆ M and any x, y ∈ M)

x ∈ X, X ` y, y ≠ 0 =⇒ ((X \ {x }) ∪ { y }) ` x

(ii) Let X ⊆ M be an arbitrary subset, then the following two statements are equivalent (for arbitrary rings we only get (b) =⇒ (a))

(a) X is `independent

(b) X is S-linearly independent


(iii) For any X ⊆ M and y ∈M the following statements are equivalent

(a) X ∪ { y } is S-linearly independent and y ∉ X

(b) X is S-linearly independent and X 0 y

(iv) For any subset B ⊆ M all of the following statements are equivalent

(a) B is a `basis of M

(b) B is an S-basis of M

(c) B is a minimal S-generating subset of M , that is B generates M (i.e. M = lhS(B)) and for any X ⊆ M we obtain

X ⊂ B =⇒ M ≠ lhS(X)

(d) B is a maximal S-linearly independent subset, that is B is S-linearly independent and for any X ⊆ M we obtain

B ⊂ X =⇒ X is not S-linearly independent

(e) for any element x ∈ M there is a uniquely determined tuple (xb) ∈ S⊕B such that x can be represented in the form

x = ∑b∈B xbb

(v) Let X ⊆ M be an S-linearly independent subset and Y ⊆ M be a generating subset, M = lhS(Y ). If now X ⊆ Y , then there is some B ⊆ M such that

(1) X ⊆ B ⊆ Y and

(2) B is an S-basis of M

(vi) All S-bases of M have the same number of elements, formally that is

A,B ⊆ M S-bases of M =⇒ |A| = |B|

Thus by (v) M has an S-basis (it is a free module) and by (vi) all S-bases of M share the same cardinality. Hence we may define the dimension of M over S to be the cardinality of any S-basis of M . That is we let

dimS(M) := |B| where B is an S-basis of M

(vii) A set of S-linearly independent elements contains at most dimS(M) many elements

X ⊆ M S-linearly independent =⇒ |X| ≤ dimS(M)

(viii) Suppose M is finite-dimensional (i.e. dimS(M) < ∞) and consider an S-linearly independent subset X ⊆ M . Then the following two statements are equivalent

(a) |X| = dimS(M)

(b) X is an S-basis of M


(ix) In particular if M is finite-dimensional and P ≤m M is a submodule of equal dimension dim(P ) = dim(M) < ∞, then already P = M .

(3.41) Lemma: (viz. 368) Let (S,+, ·) be a skew-field and consider two S-modules V and W . Then we obtain the following formulae concerning their dimensions

(i) If P and Q ≤m V are any two S-submodules of V , then the dimension of the sum P + Q satisfies the following relation

dim(P +Q) = dim(P ) + dim(Q)− dim(P ∩Q)

(ii) If P ≤m V is an S-submodule of V then the dimension of the quotient is just the difference of the dimensions of V and P

dim(V/P ) = dim(V )− dim(P )

(iii) If ϕ : V → W is any S-linear map, then we obtain the following dimension formula for the dimension of the kernel and image of ϕ

dim(V ) = dim(kn(ϕ)) + dim(im(ϕ))
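Formula (iii) can be checked on a concrete example. The Python sketch below (an illustration with a hypothetical matrix over Q) computes dim im(ϕ) as the rank of the matrix via Gaussian elimination and exhibits a basis of the kernel:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# phi : Q^3 -> Q^2 given by a matrix whose second row is twice the first
A = [[1, 2, 3],
     [2, 4, 6]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

dim_image = rank(A)                 # dim im(phi) = 1
# two independent kernel vectors exhibit dim kn(phi) = 2
assert matvec(A, [2, -1, 0]) == [0, 0]
assert matvec(A, [3, 0, -1]) == [0, 0]
assert 3 == 2 + dim_image           # dim V = dim kn(phi) + dim im(phi)
```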


3.8 Homomorphisms

Homomorphisms of modules - better known as linear maps - are one of the most common concepts of algebra. Examples are abundant: differentiation and integration of functions both are linear, the arithmetical mean is linear, matrices are just another way to define linear maps, etc. The advantage of linear maps is that they are both well-understood and well-behaved. In fact this concept is so successful, that many theories (e.g. chaos theory) thrive on finding linear substitutes for non-linear problems.

(3.42) Definition:

(i) Let (R,+, ·) be any ring and consider two R-modules M and N . Then a map ϕ : M → N is said to be an R-module-homomorphism or simply an R-linear map, iff for any a ∈ R and any x, y ∈ M we get

ϕ(ax) = aϕ(x)

ϕ(x+ y) = ϕ(x) + ϕ(y)

We will denote the set of all R-module-homomorphisms from M to N by mhom(M,N) or, in order to emphasize the base ring R, by mhomR(M,N). In linear algebra (i.e. primarily in case of a base field R) it is also customary to simply write L(M,N):

L(M,N) := {ϕ : M → N | ϕ is an R-module-homomorphism }

mhom(M,N) := mhomR(M,N) := L(M,N)

(ii) Consider any two rings (R,+, ·) and (S,+, ·) and fix a ring-homomorphism α : R → S. Further let M be an R-module and N be an S-module. Then a map ϕ : M → N is said to be an α-module-homomorphism or simply an α-linear map, iff for any a ∈ R and any x, y ∈ M we get

ϕ(ax) = α(a)ϕ(x)

ϕ(x+ y) = ϕ(x) + ϕ(y)

We will denote the set of all α-module-homomorphisms from M to N by mhomα(M,N) or, in case α is understood, simply by mhom(M,N). In linear algebra (i.e. primarily in case of a base field R) it is also customary to write Lα(M,N):

Lα(M,N) := {ϕ : M → N | ϕ is an α-module-homomorphism }

mhom(M,N) := mhomα(M,N) := Lα(M,N)

(iii) In any case (i) and (ii) we define the kernel knϕ and the image imϕ to be the following subsets of M or N respectively

kn(ϕ) := {x ∈ M | ϕ(x) = 0 } ⊆ M

im(ϕ) := {ϕ(x) | x ∈ M } ⊆ N


And we establish the usual nomenclature: ϕ is said to be a mono-, epi- or isomorphism, iff it is injective, surjective or bijective respectively.

ϕ is called          iff ϕ is
monomorphism         injective
epimorphism          surjective
isomorphism          bijective

(iv) In case (i), an R-module-homomorphism ϕ : M → M acting solely on the R-module M is called an R-module-endomorphism. And it is called an R-module-automorphism, iff it also is bijective

ϕ is called          iff
endomorphism         M = N
automorphism         M = N and ϕ bijective

(v) If M and N are any two R-modules, then M and N are said to be isomorphic, iff there is an isomorphism of R-modules from M to N , formally we write

M ∼=m N :⇐⇒ ∃Φ : M → N R-module isomorphism

(3.43) Example:

(i) The easiest and completely trivial example is the zero-homomorphism. If M and N are any two R-modules, then we obtain an R-module-homomorphism 0 by letting

0 : M → N : x 7→ 0

(ii) Another trivial example of an R-module-homomorphism is the identity map 11 on some R-module M . This simply is

11 : M →M : x 7→ x

(iii) A slight generalisation is the map of set-theoretical inclusion, which (by abuse of notation) is written as ⊆ again. If P ≤m M is a submodule of the R-module M , then we get an R-module-homomorphism by

⊆ : P →M : x 7→ x

(3.44) Remark: (viz. 344)

(i) The notion (ii) of α-linearity clearly includes the case (i) of R-linearity - just take α = 11 : R → R, then an α-linear map ϕ : M → N is nothing but an R-linear map.

(ii) If ϕ : M → N is an R-module-homomorphism, then the kernel and image of ϕ are submodules of M and N respectively:

kn(ϕ) = {x ∈ M | ϕ(x) = 0 } ≤m M

im(ϕ) = {ϕ(x) | x ∈ M } ≤m N


More generally: if ϕ : M → N is an α-module-homomorphism then knϕ is an R-submodule of M . And if α is surjective, then imϕ also is an S-submodule of N .

(iii) If ϕ : M → N is an R-module-homomorphism, then ϕ is surjective iff it has image N and ϕ is injective iff it has kernel { 0 }, formally that is

ϕ is surjective ⇐⇒ imϕ = N

ϕ is injective ⇐⇒ knϕ = { 0 }

(iv) If L, M and N all are R-modules and the maps ϕ : L → M and ψ : M → N both are homomorphisms of R-modules then the composition ψϕ is a homomorphism of R-modules as well

ψϕ : L→ N : x 7→ ψ(ϕ(x))

More generally: suppose L is an R-module, M is an S-module and N is a T -module, where (R,+, ·), (S,+, ·) and (T,+, ·) are arbitrary rings. Further we fix ring homomorphisms α : R → S and β : S → T . If now ϕ : L → M is an α-module-homomorphism and ψ : M → N is a β-module-homomorphism, then the composition ψϕ : L → N is a (βα)-module-homomorphism.

(v) If M and N are R-modules, where (R,+, ·) is a commutative ring, then the set mhomR(M,N) of all R-module-homomorphisms from M to N becomes an R-module in itself. Thereby for any ϕ, ψ ∈ mhomR(M,N) and any a ∈ R we take the following operations

ϕ+ ψ : M → N : x 7→ ϕ(x) + ψ(x)

aϕ : M → N : x 7→ aϕ(x)

(vi) More generally: suppose (R,+, ·) is an arbitrary ring and (S,+, ·) even is a commutative ring, M is an R-module and N is an S-module. Further fix a homomorphism α : R → S of rings. Then mhomα(M,N) becomes an S-module under the above operations of (v).

(vii) Consider any non-empty subset ∅ ≠ X ⊆ M of M . For now we will examine the canonical R-module-homomorphism

ϱ : R⊕X →M : (ax) 7→ ∑x∈X axx

Note that ϱ is well-defined, as in any (ax) ∈ R⊕X only finitely many ax are non-zero. Also - as the operations on R⊕X are component-wise - it is clear, that ϱ truly is a homomorphism of R-modules. Yet ϱ also satisfies a number of interesting properties

imϱ = lhR(X)

ϱ is surjective ⇐⇒ X generates M

ϱ is injective ⇐⇒ X is R-linearly independent

ϱ is bijective ⇐⇒ X is an R-basis of M


In particular, if B ⊆ M is a basis, then ϱ is an isomorphism of R-modules ϱ : R⊕B ∼→ M . That is, all free R-modules are of the form R⊕B for some set B.

(3.45) Proposition: (viz. 346) Let (R,+, ·) be an arbitrary ring and consider any two R-modules M and N . Then we get the following two statements:

(i) If X ⊆ M generates M , that is M = lhR(X), and ϕ, ψ : M → N are any two R-linear maps, then we find the equivalence

ϕ|X = ψ|X ⇐⇒ ϕ = ψ

(ii) And if B ⊆ M even is an R-basis of M and n : B → N is an arbitrary map from B to N , then there is a uniquely defined R-linear map ϕ : M → N satisfying ϕ|B = n. (It is customary to say that ϕ is generated by linear expansion of the map n.) And this is given to be

ϕ( ∑x∈B axx ) := ∑x∈B axn(x)
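Linear expansion can be made concrete. In the sketch below (hypothetical values; R = Z, M = Z², N = Z³) the map n prescribes images for the Euclidean basis e0, e1 and ϕ is the induced linear map:

```python
# n : B -> N prescribes images of the Euclidean basis e_0, e_1 of M = Z^2
images = {0: (1, 1, 0), 1: (0, 2, 5)}

def phi(v):
    """phi(sum a_i e_i) := sum a_i n(e_i) -- the linear expansion of n."""
    out = (0, 0, 0)
    for i, a in enumerate(v):
        out = tuple(o + a * c for o, c in zip(out, images[i]))
    return out

assert phi((1, 0)) == (1, 1, 0)    # phi restricted to B agrees with n
assert phi((0, 1)) == (0, 2, 5)
assert phi((2, 3)) == (2, 8, 15)   # and is thereby determined on all of M
```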

(3.46) Definition: (viz. 346)

(i) Fix any ring (R,+, ·) and consider the three R-modules L, M and N . If ψ : M → N is an R-module-homomorphism, then it induces another R-module-homomorphism ψ∗ called the push-forward of ψ by letting

ψ∗ : mhomR(L,M)→ mhomR(L,N) : ϕ 7→ ψϕ

(ii) Fix any ring (R,+, ·) and consider the three R-modules L, M and N . If ϕ : L → M is an R-module-homomorphism, then it induces another R-module-homomorphism ϕ∗ called the pull-back of ϕ by letting

ϕ∗ : mhomR(M,N)→ mhomR(L,N) : ψ 7→ ψϕ

(iii) More generally: suppose L is an R-module, M is an S-module and N is a T -module, where (R,+, ·), (S,+, ·) and (T,+, ·) are arbitrary rings. Further we fix ring homomorphisms α : R → S and β : S → T . If now ψ : M → N is a β-module-homomorphism, then it induces another homomorphism ψ∗ called the push-forward of ψ by letting

ψ∗ : mhomα(L,M)→ mhomβα(L,N) : ϕ 7→ ψϕ

(iv) More generally: suppose L is an R-module, M is an S-module and N is a T -module, where (R,+, ·), (S,+, ·) and (T,+, ·) are arbitrary rings. Further we fix ring homomorphisms α : R → S and β : S → T . If now ϕ : L → M is an α-module-homomorphism, then it induces a T -module-homomorphism ϕ∗ called the pull-back of ϕ by letting

ϕ∗ : mhomβ(M,N)→ mhomβα(L,N) : ψ 7→ ψϕ


The following statement seems to be a mere example - barely worth mentioning: fixing an endomorphism ϕ of a module M over a commutative ring R we can turn M into a module over the polynomial ring R[t]. Yet this probably is the most important construction of linear algebra. Analyzing the structure of this module will grant deep insights into the homomorphism ϕ. In fact the entire theory of canonical forms of an endomorphism (e.g. Jordan canonical form) will arise by applying the theorem of Prüfer to this module.

(3.47) Lemma: (viz. 348) Let (R,+, ·) be a commutative ring, consider an R-module (M,+, ·) and fix an R-module-endomorphism ϕ : M → M . Then the set M can be turned into a module (M,+,⊙) over the polynomial ring R[t] by retaining the addition + of M and establishing a new scalar multiplication

⊙ : R[t]×M →M : (f, x) 7→ f(ϕ)(x)

Again we write fx := f ⊙ x = f(ϕ)(x). More explicitly: if f ∈ R[t] is any polynomial over R then f can be written in the form f = antn + · · ·+ a1t+ a0 ∈ R[t]. If now x ∈ M is any element of M , then the scalar multiplication is set to be

fx := f(ϕ)(x) = ∑∞k=0 f [k]ϕk(x) = anϕn(x) + · · ·+ a1ϕ(x) + a0x
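The new scalar multiplication can be computed explicitly. A small Python sketch (hypothetical data: R = Z, M = Z², ϕ the coordinate swap), where a polynomial f is encoded by its coefficient list f[0], f[1], . . .:

```python
PHI = [[0, 1],
       [1, 0]]        # phi : Z^2 -> Z^2 swaps the coordinates, so phi^2 = id

def apply(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def act(f, x):
    """f * x := f(phi)(x) = sum_k f[k] * phi^k(x)."""
    out = [0, 0]
    power = list(x)                  # phi^0(x) = x
    for c in f:
        out = [o + c * p for o, p in zip(out, power)]
        power = apply(PHI, power)    # advance to phi^{k+1}(x)
    return out

# f = t^2 + 3: since phi^2 = id we get f * x = x + 3x = 4x
assert act([3, 0, 1], [5, -2]) == [20, -8]
```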

(3.48) Lemma: (viz. 396) (♦) Cayley-Hamilton Theorem

(i) Let (R,+, ·) be a commutative ring and A ∈ matn(R) be an n× n matrix over R. Pick up the characteristic polynomial of A, that is

c(t) := det(t 11n −A) ∈ R[t]

By construction c(t) is a normed polynomial of degree n that satisfies c(0) = (−1)n det(A). Then the matrix A satisfies the characteristic polynomial, that is we get

c(A) = ∑nk=0 c[k]Ak = 0 ∈ matn(R)

(ii) If (R,+, ·) is a commutative ring again, M is a finitely generated R-module and ϕ : M → M is an R-module-endomorphism, then there is a polynomial f(t) ∈ R[t] of deg(f) = rank(M) such that ϕ satisfies f

f(ϕ) = ∑deg(f)k=0 f [k]ϕk = 0 ∈ end(M)
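For a 2×2 matrix the characteristic polynomial is c(t) = t² − tr(A)t + det(A), and (i) can be verified directly. The Python sketch below (an illustration with a hypothetical integer matrix) checks c(A) = 0:

```python
# Verify c(A) = A^2 - tr(A)*A + det(A)*I = 0 for a 2x2 integer matrix
A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                      # trace = 5
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # determinant = -2

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
cA = [[A2[i][j] - tr*A[i][j] + det*(1 if i == j else 0) for j in range(2)]
      for i in range(2)]
assert cA == [[0, 0], [0, 0]]               # Cayley-Hamilton for this A
```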


(3.49) Definition: Let (R,+, ·) be any ring and consider a family (Mi) (where i ∈ I) of R-modules. Let finally M be a submodule of the direct product

M ≤m ∏i∈I Mi

(i) Let N be another R-module, then a map ϕ : M → N is said to be R-multilinear, iff for any i ∈ I the composition with the canonical embedding ϕιi : Mi → N is R-linear. That is for any i ∈ I, any a ∈ R and any xi, yi ∈ Mi we get

ϕ(. . . , axi, . . . ) = aϕ(. . . , xi, . . . )

ϕ(. . . , xi + yi, . . . ) = ϕ(. . . , xi, . . . ) + ϕ(. . . , yi, . . . )

(ii) Let (S,+, ·) be another ring, let N be an S-module and fix a ring-homomorphism α : R → S. Then a map ϕ : M → N is said to be α-multilinear, iff for any i ∈ I the composition with the canonical embedding ϕιi : Mi → N is α-linear. That is for any i ∈ I, any a ∈ R and any xi, yi ∈ Mi we get

ϕ(. . . , axi, . . . ) = α(a)ϕ(. . . , xi, . . . )

ϕ(. . . , xi + yi, . . . ) = ϕ(. . . , xi, . . . ) + ϕ(. . . , yi, . . . )

(3.50) Example: (viz. 346) Let (R,+, ·) be any commutative ring, then we can simultaneously regard R2 = R⊕R as an R-module, or as a product of the R-module R with itself, twice. In particular we can consider both R-linear and R-bilinear maps on R2. Our examples now are:

(i) Consider the addition map α on R, that is α : R2 → R : (a, b) 7→ a+b.Then α is R-linear but need not be R-multilinear.

(ii) Consider the multiplication map µ on R, that is µ : R2 → R : (a, b) 7→ab. Then µ is R-multilinear but need not be R-linear.
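The contrast between (i) and (ii) can be checked on sample values. The Python sketch below (an illustration over R = Z) probes linearity of the whole map on R² against additivity in each slot separately:

```python
# alpha(a,b) = a+b and mu(a,b) = a*b over R = Z
alpha = lambda a, b: a + b
mu = lambda a, b: a * b

# alpha is R-linear on R^2: alpha((a,b)+(c,d)) = alpha(a,b) + alpha(c,d)
assert alpha(1 + 3, 2 + 4) == alpha(1, 2) + alpha(3, 4)
# ... but not bilinear: fixing b, a -> alpha(a,b) is not additive in a alone
assert alpha(1 + 3, 2) != alpha(1, 2) + alpha(3, 2)

# mu is bilinear: additive in each slot separately
assert mu(1 + 3, 2) == mu(1, 2) + mu(3, 2)
# ... but not linear on R^2: mu((1,1)+(1,1)) != mu(1,1) + mu(1,1)
assert mu(1 + 1, 1 + 1) != mu(1, 1) + mu(1, 1)
```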


3.9 Isomorphism Theorems

In this section we will study several basic properties of R-module-homomorphisms. First of all we will encounter the fundamental isomorphism theorems. Later on we will get some more advanced criteria whether an endomorphism is an isomorphism. A more general way to look at a homomorphism is to study a chain of homomorphisms. This notion will also be introduced here.

(3.51) Theorem: (viz. 347)Let (R,+, ·) be any ring and consider any two R-modules M and N . Letfurther ϕ : M → N be an R-module-homomorphism, then we get

(i) If P ≤m M and Q ≤m N are submodules of M and N respectively,such that P ⊆ ϕ−1(Q), then we obtain a well-defined homomorphismof R-modules, by letting

ϕ : M/P → N/Q : x + P ↦ ϕ(x) + Q

(ii) 1st Isomorphism Theorem: ϕ induces an isomorphy of R-modules from M/knϕ to imϕ by virtue of

ϕ : M/knϕ ∼→ imϕ : x + knϕ ↦ ϕ(x)

(iii) 2nd Isomorphism Theorem: if P and Q ≤m M are any two submodules of M, we obtain another isomorphy of R-modules, by virtue of

Q/(P ∩ Q) ∼→ (P + Q)/P : x + P ∩ Q ↦ x + P

(iv) 3rd Isomorphism Theorem: if P and Q ≤m M are submodules of M again, such that P ⊆ Q, then we obtain one more isomorphy of R-modules, by virtue of

M/Q ∼→ (M/P)/(Q/P) : x + Q ↦ (x + P) + Q/P
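As an illustration of the 1st isomorphism theorem, the following sketch (my own finite example, not the book's) takes ϕ : Z12 → Z12, x ↦ 3x, and verifies numerically that the induced map on cosets is well-defined and bijective onto the image:

```python
# 1st isomorphism theorem for ϕ : Z12 → Z12, x ↦ 3x, computed with plain
# integer arithmetic mod 12.

n = 12
phi = lambda x: (3 * x) % n

kernel = {x for x in range(n) if phi(x) == 0}   # kn(ϕ) = {0, 4, 8}
image = {phi(x) for x in range(n)}              # im(ϕ) = {0, 3, 6, 9}

# The cosets x + kn(ϕ) forming the quotient M/kn(ϕ):
cosets = {frozenset((x + k) % n for k in kernel) for x in range(n)}

# The induced map x + kn(ϕ) ↦ ϕ(x) is well-defined (constant on cosets) ...
for c in cosets:
    assert len({phi(x) for x in c}) == 1
# ... and a bijection M/kn(ϕ) → im(ϕ):
induced = {c: phi(next(iter(c))) for c in cosets}
assert set(induced.values()) == image
assert len(cosets) == len(image) == n // len(kernel)
```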

(3.52) Corollary: (viz. 400)

(i) Consider a commutative ring (R,+, ·) and a finitely-generated R-module M. If now ϕ : M → M is an R-module-endomorphism, then the following two statements are equivalent:

(a) ϕ is surjective

(b) ϕ is bijective

(ii) Consider a skew-field (S,+, ·) and a finitely generated S-module V. If now ϕ : V → V is an S-module-endomorphism, then the following three statements are equivalent:

(a) ϕ is injective

(b) ϕ is surjective

(c) ϕ is bijective


(3.53) Definition:
Fix any number 1 ≤ n ∈ N and a ring (R,+, ·). Further consider the R-modules Mi (where i ∈ 1 . . . n + 1) and the R-module-homomorphisms ϕi (where i ∈ 1 . . . n). Let us denote M = (Mi) and ϕ = (ϕi), then the ordered pair (M,ϕ) is called a (finite) sequence of R-modules, iff we get

(1) for any i ∈ 1 . . . (n+ 1): Mi is an R-module

(2) for any i ∈ 1 . . . n: ϕi : Mi → Mi+1 is an R-module homomorphism

(3) for any i ∈ 1 . . . (n − 1): we have ϕi+1ϕi = 0. Nota an equivalent formulation of the latter property is: im(ϕi) ⊆ kn(ϕi+1)

In this situation we will use a rather intuitive notation for the ordered pair (M,ϕ), namely we write:

M1 −ϕ1→ M2 −ϕ2→ M3 → · · · → Mn −ϕn→ Mn+1

And the finite sequence (M,ϕ) of R-modules is said to be exact, iff apart from properties (1) and (2) we even have the following, stronger property (3') for any i ∈ 1 . . . (n − 1):

im(ϕi) = kn(ϕi+1)

(3.54) Remark:

(i) If Mi = 0, then we necessarily have ϕi = 0 and ϕi−1 = 0, that is ϕi : 0 → Mi+1 : 0 ↦ 0 and ϕi−1 : Mi−1 → 0 : x ↦ 0. As these maps are already determined, we don't have to denote them in the sequence

Mi−2 −ϕi−2→ Mi−1 → 0 → Mi+1 −ϕi+1→ Mi+2

If this sequence is exact, then it even follows that ϕi+1 is injective and ϕi−2 is surjective, formally put

Mi = 0 =⇒ ϕi+1 is injective

Mi = 0 =⇒ ϕi−2 is surjective

Prob due to the exactness of the chain we get im(ϕi−2) = kn(ϕi−1) = kn(0) = Mi−1 and hence ϕi−2 is surjective. Likewise kn(ϕi+1) = im(ϕi) = im(0) = 0 and hence ϕi+1 is injective.

(ii) In particular for any R-module homomorphism ϕ : M → N we get a chain of R-modules, simply by taking

0 → M −ϕ→ N → 0

And this chain is exact if and only if ϕ is an isomorphism. Prob due to (i) the chain is exact, iff ϕ is injective and surjective, that is iff ϕ is bijective. And that's when ϕ is called an isomorphism.


(iii) In a similar fashion we find a chain-version of the first isomorphism theorem: given an exact sequence of R-modules

0 → L −ϕ→ M −ψ→ N → 0

the first isomorphism theorem of R-modules yields the following isomorphy (Prob as im(ψ) = N and kn(ψ) = im(ϕ))

M/im(ϕ) ∼→ N : x + im(ϕ) ↦ ψ(x)

(3.55) Example:

(i) Any R-module-homomorphism ϕ : M → N induces a canonical short exact sequence of R-modules, simply by taking

0 → kn(ϕ) −⊆→ M −ϕ→ im(ϕ) → 0

(ii) But ϕ also induces another chain of R-modules. Let us denote the canonical projection π : N → N/im(ϕ) : y ↦ y + im(ϕ), then we find another exact chain of R-modules, by taking

0 → kn(ϕ) −⊆→ M −ϕ→ N −π→ N/im(ϕ) → 0

(iii) Another, most important example is the following: consider any two R-modules M and N. Let us denote the canonical embedding of M by ι : M → M ⊕ N : x ↦ (x, 0) and the canonical projection onto N by π : M ⊕ N → N : (x, y) ↦ y. Then it is clear that im(ι) = M ⊕ 0 = kn(π), such that we get a short exact chain of R-modules, by

0 → M −ι→ M ⊕ N −π→ N → 0

(iv) A special case of (iii) is the following: consider any two distinct prime numbers 2 ≤ p ≠ q ∈ Z. By the chinese remainder theorem we get Zp ⊕ Zq = Zpq. Thereby the canonical maps become ι : Zp → Zpq : a + pZ ↦ aq + pqZ and π : Zpq → Zq : b + pqZ ↦ b + qZ. And hence we obtain the short exact sequence of Z-modules

0 → Zp −ι→ Zpq −π→ Zq → 0
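The exactness in example (iv) can be checked by brute force for small primes; the sketch below (my own, with p = 3 and q = 5) verifies that ι is injective, π is surjective, and im(ι) = kn(π):

```python
# Short exact sequence 0 → Z3 → Z15 → Z5 → 0 from the chinese remainder
# theorem, with ι(a) = a·q mod pq and π(b) = b mod q.

p, q = 3, 5
iota = lambda a: (a * q) % (p * q)
pi = lambda b: b % q

im_iota = {iota(a) for a in range(p)}                 # {0, 5, 10}
kn_pi = {b for b in range(p * q) if pi(b) == 0}

assert len(im_iota) == p                              # ι is injective
assert {pi(b) for b in range(p * q)} == set(range(q)) # π is surjective
assert im_iota == kn_pi                               # exactness in the middle
```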

(3.56) Lemma: (viz. 403)
Let (R,+, ·) be any commutative ring, then the following statements are true

(i) If R ≠ 0 is non-zero and I, J ≠ ∅ are any two non-empty sets, then R⊕I is isomorphic (as an R-module) to R⊕J iff I and J have the same cardinality, formally that is

R⊕I ≅m R⊕J ⇐⇒ |I| = |J|


(ii) Let a, b be two ideals in R; if now R/a and R/b are isomorphic as R-modules, then a and b already are equal

R/a ≅m R/b =⇒ a = b

(3.57) Remark:

• The statement of (i) need not be true for non-commutative rings R. As an example fix any ring R, let M := R⊕N and take S to be the ring of R-module endomorphisms of M

S := end(M)

Then for any m, n ∈ N we find that Sm ≅m Sn are isomorphic as S-modules. In fact it suffices to give a basis of S consisting of two elements. And these can be constructed using the fact that S has infinite dimension - for details refer to [Scheja, Storch, Algebra 1, Kapitel 25 Beispiel 3 und Kapitel 36 Beispiel 2]. We would also like to sketch a less constructive approach: Let again M := Z⊕N and S := end(M). It is easy to see M ≅m M ⊕ M and hence

S = mhom(M,M) ≅m mhom(M,M ⊕M) ≅m mhom(M,M) ⊕ mhom(M,M) = S2

• A ring R with the property of (i) is also said to have IBN (invariant basis number). Thus (i) may be expressed as: commutative rings have IBN. Of course there is a multitude of non-commutative rings that have IBN as well. An important example is: if ϕ : R ↠ S is a ring-epimorphism and S has IBN, then R has IBN as well. For more comments on this, please refer to the first chapter of [Lam].

• Likewise the converse of (ii) is false in general - e.g. regard S := R[s, t]. If we now define the ideals a := tS and b := stS, then a and b clearly are isomorphic as R-modules, but we find

S/a ≅a R[s]

S/b ≅a R[s] ⊕ tR[t]

• In (3.62.(ii)) we will present a much stronger statement. In fact any R-basis of M has cardinality rank(M) - provided R is commutative. And as the isomorphism R⊕I ≅m R⊕J transfers bases we could conclude

|I| = rank(R⊕I) = rank(R⊕J) = |J|

Yet the proof of this statement is far longer and uses substantially more tools (the theory of noetherian modules), hence we have chosen to give a more direct proof in advance.


3.10 Rank of Modules

Remember that in section 3.7 we defined the rank of an R-module M to be the minimal cardinality of a generating set of M, formally

rank(M) := min { |X| | X ⊆ M, M = lh(X) }

That is: if I = rank(M) then there is some generating subset {mi | i ∈ I} ⊆ M such that M = lh(mi | i ∈ I). And conversely, if X ⊆ M generates M as an R-module M = lh(X), then |I| ≤ |X|.

Be careful however - a minimal set of generators need not have minimal cardinality! E.g. the set {2, 3} ⊆ Z is a minimal set of generators of the Z-module Z. That is {2, 3} generates Z = 〈2, 3 〉m and of course neither {2} nor {3} generates Z. However we have rank(Z) = 1, as Z = 〈1 〉m is clear. As we have seen in (3.40.(iv)) this is quite different in the case of modules over skew-fields, but even the base ring Z (and what better ring is there to wish for?) gives rise to such peculiarities.
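For the {2, 3} example the relevant facts boil down to a Bézout identity; here is a quick check, assuming nothing beyond gcd arithmetic:

```python
# {2, 3} generates Z since gcd(2, 3) = 1 — Bézout gives 1 = (-1)·2 + 1·3 —
# while neither 2 nor 3 alone does (2Z and 3Z are proper subgroups).

from math import gcd

assert gcd(2, 3) == 1              # hence lh(2, 3) = Z
assert (-1) * 2 + 1 * 3 == 1       # explicit Bézout combination producing 1
assert 1 % 2 != 0 and 1 % 3 != 0   # 1 ∉ 2Z and 1 ∉ 3Z: neither generates alone
```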

Let us now study the basic properties of the rank of a module, but be ready for some nasty surprises, as the theory will take some twists and turns. Many properties one would think of as evident will be untrue unless we deal with nice rings and modules. Others will hold true in astounding generality.

(3.58) Example: (viz. 404)

(i) The ring Z is a UFD (in fact even a PID) that contains infinitely many prime elements. That is the following set is infinite

P := { p ∈ Z | 2 ≤ p is prime }

(ii) The set Q regarded as a Z-module has no finite set of generators. Hence it is of countably infinite rank: a generating set is given by

{ 1/pk | p ∈ P, k ∈ N }

(3.59) Proposition: (viz. 405)
Let (R,+, ·) be an arbitrary ring and consider any two R-modules M and N. If now ϕ : M ↠ N is a surjective R-linear map (an R-module epimorphism), then

rank(N) ≤ rank(M)

Likewise, if P ≤m M is an R-submodule then we get the following inequality

rank(M) ≤ rank(P) + rank(M/P)


(3.60) Proposition: (viz. 409)
Let (R,+, ·) be a commutative, non-zero ring R ≠ 0. If now M and N are free R-modules, then M and N are isomorphic as R-modules, if and only if they share the same rank, formally that is

M ≅m N ⇐⇒ rank(M) = rank(N)

In particular, if M is free of rank I = rank(M), then M is isomorphic to the euclidean R-module M ≅m R⊕I presented in (3.38).

(3.61) Remark:
The above proposition (3.59) enables us to control the rank of M by controlling the rank of a submodule P ≤m M (and of the quotient M/P). On the other hand one is tempted to expect that the rank of a submodule P of M cannot exceed the rank of the containing module M. However nothing is farther from the truth. Consider the ring R = Q[ti | i ∈ I] where I is an arbitrary index set. If I = 1 . . . n then we have the usual polynomial ring Q[t1, . . . , tn] that is a very tame ring - a noetherian integral domain. If we now regard the ring M = R as an R-module itself, it is clear that rank(M) = 1. However, hell breaks loose once we take a look at the ideal m := 〈ti | i ∈ I 〉i of R. If we take m as an R-submodule of M = R, then the rank of m is no less than |I|

rank(M) = 1 ≤ |I| = rank(m)

By primary decomposition it is easy to see that T := {ti | i ∈ I} is a minimal generating set m = lhR T and in fact it hits the rank. That is the submodule m of M can have arbitrarily large rank, even though M is of rank 1! Modules can have astoundingly vast space within themselves! The reason is that all those ti do not take up much space; they are all linearly dependent: tjti − titj = 0.

So the question arises how to control the space inside a module. One of the best answers one can give refers to free submodules of M - these cannot be larger than M itself - refer to (3.62.(iii)). Also note that as long as I is finite, R is noetherian. And in fact submodules of finitely generated modules over noetherian rings are again finitely generated. This will be the topic of section 3.11. Finally note that in case of |I| = 1 we even had rank(m) = rank(M). But in this case R even is a PID and in fact finitely generated modules over PIDs are very well-behaved.

(3.62) Lemma: (viz. 406)
Consider a commutative ring (R,+, ·) and an R-module M. Further suppose that R ≠ 0 is non-zero. Then the following statements are true

(i) If {x1, . . . , xn} ⊆ M is an R-generating set of M (where n ∈ N), then any n + 1 elements of M are R-linearly dependent. That is: given y1, . . . , yn, yn+1 ∈ M there are some a1, . . . , an, an+1 ∈ R such that ak ≠ 0 for some k ∈ 1 . . . n + 1 and:

a1y1 + · · · + anyn + an+1yn+1 = 0


(ii) If B ⊆ M is an R-basis of M then the cardinality of B equals the rank of M, formally that is

rank(M) = |B|

(iii) If F ≤m M is a free submodule of M, then its rank does not exceed the rank of M, formally that is

rank(F) ≤ rank(M)

(3.63) Remark: Dimension
In general, for an arbitrary ring (R,+, ·) and a free R-module M the dimension is set to be the minimal cardinality of a basis of M

dim(M) := min { |B| | B ⊆ M is a basis of M }

From this definition it is clear that the rank of M does not exceed the dimension of M [because any basis B of M already generates M = lh(B)]

rank(M) ≤ dim(M)

In this light lemma (3.62.(ii)) tells us that in case of a commutative ring R all bases of a free R-module M share the same cardinality, namely the rank of M. In particular - if R is commutative and M is free - we even have

dim(M) = rank(M)

Thus - for commutative rings - the dimension is not a new invariant. However it is customary to refer to the dimension, rather than the rank, when operating with free modules. Also we have seen in (3.40.(vi)) that for any skew-field (S,+, ·) and any S-vector-space V all bases share the same cardinality, as well. It is easy to see that this is the rank again

dim(V) = rank(V)

Prob as rank(V) ≤ dim(V) is always true, it remains to prove dim(V) ≤ rank(V). Thus let X ⊆ V be a generating set of V with |X| = rank(V). Then by the basis selection theorem (3.36) there is a subset B ⊆ X such that B is a basis of V. In particular dim(V) = |B| ≤ |X| = rank(V).

After these two important cases, commutative rings and skew-fields, one might be tempted to think that generally (over arbitrary rings) all bases share the same cardinality, preferably the rank of M. This is untrue however. There are some modules over non-commutative rings that have bases of different size. We have given one such example in (3.57).


(3.64) Definition:
Let (R,+, ·) be an arbitrary ring and M be an R-module. Then a subset L ⊆ M is said to be a maximal linear independent set of M, iff it is R-linearly independent and for any other R-linearly independent set L′ ⊆ M we get the implication

L ⊆ L′ =⇒ L = L′

As we will see in (3.65.(vi)), if R is an integral domain then all maximal linear independent subsets of M are of the same cardinality. Hence in this case we may define the linear freedom of M to be

free(M) := |L| for some maximal linear independent set L ⊆ M

(3.65) Corollary: (viz. 409)
Let (R,+, ·) be an arbitrary ring and M be an R-module. Then we find the following statements concerning maximal linear independent subsets of M

(i) Any linear independent set in M is contained in a maximal linear independent set. That is: if L ⊆ M is an R-linearly independent set, then there is a maximal linear independent set L∗ ⊆ M such that

L ⊆ L∗

(ii) If R ≠ 0 and L is a maximal linear independent set and x ∈ M is any element of M, then there is some a ∈ R such that

a ≠ 0 and ax ∈ lh(L)

(iii) If L ⊆ M is a linear independent set, then the submodule F := lh(L) generated by L is a free R-module with basis L. In particular if R ≠ 0 is a non-zero commutative ring then the cardinality of any linear independent set L in M is bounded by the rank of M. That is

|L| ≤ rank(M)

(iv) In a more fancy language (ii) can be put as: If R ≠ 0 is a non-zero ring and L is a maximal linear independent set, with F := lh(L), then the quotient M/F is a torsion R-module. That is

∀ z ∈ M/F ∃ a ∈ R such that a ≠ 0 and az = 0

Conversely if F ≤m M is a free submodule of M such that the quotient M/F is a torsion module, then any basis B ⊆ F is a maximal linear independent set B ⊆ M in M.

(v) Let now R even be an integral domain and consider a linearly independent set L ⊆ M. Further for any x ∈ L let ax ∈ R be any non-zero element ax ≠ 0 of R. Then the set L′ := {ax x | x ∈ L} is linearly independent again and |L| = |L′|.


(vi) If R even is an integral domain, then any two maximal linear independent sets have the same cardinality. That is: if K and L ⊆ M are any two maximal linear independent sets, then

|K| = |L| ≤ rank(M)

(vii) If R is an integral domain and P ≤m M is any submodule of M, then the linear freedom of M equals the sum of the linear freedoms of P and the quotient M/P, formally that is

free(M) = free(P) + free(M/P)

(viii) (♦) Let R be an integral domain and denote its quotient field by E := quotR. Then the linear freedom of M is just the dimension of the scalar extension of M to E, formally that is

freeR(M) = dimE(E ⊗R M)

(3.66) Remark:

(i) It is clear that in the case of a free R-module M over an integral domain R the linear freedom equals the dimension, as any basis of M already is a maximal linear independent subset

M free =⇒ free(M) = dim(M)

However, if M is not free, then the rank of M may well be larger than the linear freedom. The following two examples (ii) and (iii) should enlighten the situation

generally: free(M) ≤ rank(M)

(ii) Consider the Z-module M := Zf ⊕ (Zn)k for some arbitrary numbers f, k ∈ N. As F := Zf ⊕ 0 ≤m M is free and M/F = 0 ⊕ (Zn)k is a torsion module, (3.65.(iv)) implies that the linear freedom of M is precisely f. However (Zn)k contributes to the rank of M and hence

f = free(M) ≤ rank(M) = f + k

This is to be interpreted in the following way: the rank yields a more accurate description of the module, hence (3.60) - it regards every part of M. The linear freedom only is the dimension of the largest free module inside M; it neglects components with torsion. This makes it easier to control, whereas the rank can explode uncontrollably by picking up torsion. Just compare (3.65.(vii)) to (3.59).


(iii) Consider the integral domain R := Z[t] and regard the (maximal) ideal m := 〈2, t 〉i of R as an R-module. Then m has a linear freedom of 1, as {t} is a maximal linear independent set (any other f ∈ R would yield a linearly dependent set). The rank is 2 however (m is not a principal ideal). This can happen, as R/m ≅m Z2 has torsion. Also note that m is not a free module, just because it is not a principal ideal.


3.11 Noetherian Modules

In the previous section we have seen that the size (in terms of the rank) of a submodule P of M can be very hard to control. However there are those well-behaved modules for which the rank at least remains finite. As it turns out these modules have a whole bunch of nice properties that we will explore in this section:

(3.67) Definition: (viz. 382)
Let (R,+, ·) be a commutative ring and M be an R-module. Then M is said to be a noetherian module, iff it satisfies one of the following three equivalent conditions:

(a) M satisfies the ascending chain condition of submodules - that is every ascending chain of submodules of M becomes stationary. I.e. any sequence (Pk) (where k ∈ N) of submodules Pk ≤m M that satisfies

P0 ⊆ P1 ⊆ . . . ⊆ Pk ⊆ Pk+1 ⊆ . . .

becomes eventually constant - that is there is some s ∈ N such that

P0 ⊆ P1 ⊆ . . . ⊆ Ps = Ps+1 = . . .

(b) Every non-empty family P ≠ ∅, P ⊆ subm(M) of submodules of M contains a maximal element. I.e. there is some P∗ ∈ P such that

∀ P ∈ P : P∗ ⊆ P =⇒ P = P∗

(c) Every submodule P of M is finitely generated. That is for any submodule P ≤m M there are some x1, . . . , xn ∈ P such that

P = lh(x1, . . . , xn)

(3.68) Definition:
Let (R,+, ·) be a commutative ring and M be an R-module. Then M is said to be an artinian module, iff it satisfies one of the following two equivalent conditions:

(a) M satisfies the descending chain condition of submodules - that is every descending chain of submodules of M becomes stationary. I.e. any sequence (Pk) (where k ∈ N) of submodules Pk ≤m M that satisfies

P0 ⊇ P1 ⊇ . . . ⊇ Pk ⊇ Pk+1 ⊇ . . .

becomes eventually constant - that is there is some s ∈ N such that

P0 ⊇ P1 ⊇ . . . ⊇ Ps = Ps+1 = . . .

(b) Every non-empty family P ≠ ∅, P ⊆ subm(M) of submodules of M contains a minimal element. I.e. there is some P∗ ∈ P such that

∀ P ∈ P : P ⊆ P∗ =⇒ P = P∗


Prob the equivalence of these statements is no specialty of modules, but a general truism for partial orders. In this case the order in question is P ≤ Q defined by Q ⊆ P. Then this is just an application of (2.28).

(3.69) Remark:
The notion of a noetherian (resp. artinian) module is closely related to the notion of a noetherian (resp. artinian) ring. In fact if we regard a commutative ring R as a module over itself, then the ideals of R are precisely the submodules a ≤m R of R. Hence by property (a) we see that R is noetherian (resp. artinian) as a module, if and only if it is noetherian (resp. artinian) as a ring.

(3.70) Example:
Clearly Z is a noetherian ring (as it even is a PID) and hence a noetherian Z-module. However Z is not artinian, as there is an infinitely descending chain of ideals that does not stabilize, given by

Z ⊃ 2Z ⊃ 4Z ⊃ . . . ⊃ 2kZ ⊃ . . .

Conversely pick any prime p ∈ Z and regard the subring Z[1/p] ≤r Q of Q generated by 1/p ∈ Q. Clearly Z ≤r Z[1/p] is a subring as well, and hence a submodule. Therefore we may regard the quotient module

I(p) := Z[1/p]/Z

as a Z-module. By the way: this is just the p-primary component of Q/Z, that is I(p) = { x + Z | ∃ k ∈ N : pkx + Z = 0 + Z }. Then I(p) is not noetherian, as there is an infinitely ascending chain of submodules of I(p) that does not stabilize

Z ⊂ Z(1/p + Z) ⊂ Z(1/p2 + Z) ⊂ Z(1/p3 + Z) ⊂ . . .

However it can be shown that I(p) is an artinian module. Recall that in the case of commutative rings, any artinian ring already is noetherian. This is not the case for modules over commutative rings however! And I(p) is just an example for this.
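The ascending chain inside I(p) can be made tangible for p = 2 by realising each cyclic submodule Z(1/pk + Z) as a finite set of fractions mod 1; a sketch (my own) using Python's Fraction type:

```python
# Cyclic submodules of I(p) ⊆ Q/Z for p = 2, realised as finite subgroups
# of Q/Z via exact fractions reduced mod 1 — each step is strictly larger.

from fractions import Fraction

p = 2

def cyclic(k):
    """Subgroup of Q/Z generated by 1/p^k + Z (all its multiples mod 1)."""
    g = Fraction(1, p ** k)
    return {(a * g) % 1 for a in range(p ** k)}

chain = [cyclic(k) for k in range(5)]
for small, big in zip(chain, chain[1:]):
    assert small < big              # strict inclusion: the chain never stabilises
assert [len(s) for s in chain] == [1, 2, 4, 8, 16]
```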

(3.71) Theorem: (viz. 383)
Let (R,+, ·) be a commutative ring and M, N be R-modules. We also regard two submodules P, Q ≤m M of M. As the following statements will be true for noetherian and artinian modules alike, let us abbreviate

? ∈ { noetherian, artinian }

in order to avoid giving two completely analogous lists of theorems. Then the following statements hold true:


(i) M is a ? module if and only if both P and the quotient M/P are ? modules. Formally that is

M is ? ⇐⇒ P and M/P are ?

(ii) If both quotient modules M/P and M/Q are ? modules, then the quotient M/(P ∩ Q) is a ? module again. Formally that is

M/P and M/Q are ? =⇒ M/(P ∩ Q) is ?

(iii) If both P and Q are ? modules, then so is their sum P + Q ≤m M. Formally that is the implication

P and Q are ? =⇒ P + Q is ?

(iv) If both M and N are ? modules, then the direct sum M ⊕ N of M and N is a ? module again and vice versa. Formally that is

M and N are ? ⇐⇒ M ⊕ N is ?

(v) If ϕ : M → N is a surjective homomorphism of R-modules and M is a ? module, then so is its image N. Formally that is

M ↠ N and M is ? =⇒ N is ?

(vi) If M is a finitely generated R-module and R is a ? ring, then M already is a ? module. Formally that is

M finitely generated and R is ? =⇒ M is ?

(vii) If L, M and N are R-modules and 0 → L → M → N → 0 is an exact sequence of R-module homomorphisms, then we get the implication

M is ? ⇐⇒ L and N are ?

(3.72) Corollary: (viz. 391)
Let (R,+, ·) be a commutative ring and M be an R-module, then we obtain the following statements

(i) M is noetherian, if and only if M is finitely generated and R/ann(M) is a noetherian ring, formally that is

M noetherian ⇐⇒ rank(M) < ∞ and R/ann(M) noetherian

(ii) Under the assumption that M is finitely generated: M is artinian, if and only if R/ann(M) is an artinian ring, formally that is

M artinian ⇐⇒ R/ann(M) artinian


(iii) If M is a finitely generated, artinian R-module, then M already is a noetherian R-module, as well. Formally again

M artinian and finitely generated =⇒ M noetherian

(iv) If M is noetherian and ϕ : M → M is an R-module-endomorphism, then if ϕ is surjective, it already is bijective, that is

ϕ is surjective ⇐⇒ ϕ is bijective

Nota in fact we have already proved this statement for any finitely generated module M over a commutative ring R in (3.52). However the proof of that requires much more effort than the proof of this statement here.

(v) If M is artinian and ϕ : M → M is an R-module-endomorphism, then if ϕ is injective, it already is bijective, that is

ϕ is injective ⇐⇒ ϕ is bijective

(vi) (♦) M is both artinian and noetherian, if and only if M is a module of finite length, formally that is

M artinian and noetherian ⇐⇒ ℓ(M) < ∞

(vii) (♦) If M is a semi-simple R-module (e.g. a vector space over a field), then the following five statements are equivalent:

(a) M is artinian

(b) M is noetherian

(c) M is finitely generated

(d) M has finite length ℓ(M) < ∞

(e) M is a finite direct sum of simple R-modules, that is there are finitely many (i ∈ 1 . . . r) simple submodules Mi ≤m M of M such that M = M1 ⊕ · · · ⊕ Mr

In case M even is a vector-space, that is R even is a field, then we obtain the following sixth equivalent statement:

(f) M is finite-dimensional, dim(M) < ∞

(viii) Suppose M is a finitely generated R-module, such that every ascending chain of ideal-induced submodules of M stabilizes; then M is noetherian. That is: suppose for every sequence of ideals (ak) of R (where k ∈ N) that satisfies

a0M ⊆ a1M ⊆ a2M ⊆ . . .

there is some s ∈ N such that for any i ∈ N we get as+iM = asM; then M already is a noetherian R-module.


(3.73) Definition: (viz. 385)
Let (R,+, ·) be an arbitrary ring and M be an R-module. Then M is said to be a simple R-module if M ≠ 0 and M has no proper submodules. That is M ≠ 0 and it satisfies one of the following two equivalent properties:

(a) For any submodule P ≤m M of M we either have P = 0 or P = M

(b) Any non-zero element of M generates M as an R-module, that is for any x ∈ M with x ≠ 0 we already get M = Rx.

And if R is commutative, we obtain the following third equivalent statement

(c) There is some maximal ideal m of R such that M is isomorphic to the quotient ring R/m as an R-module

∃ m ∈ smaxR such that M ≅m R/m

Let (R,+, ·) be an arbitrary ring and M be an R-module. Then M is said to be semi-simple, iff M can be decomposed into a direct sum of simple submodules. That is there is an arbitrary family Mi ≤m M of submodules (where i ∈ I), such that Mi is a simple R-module (for any i ∈ I) and M is the inner direct sum of the Mi

M = ⊕i∈I Mi

(3.74) Definition:
Let (R,+, ·) be a commutative ring and M be an R-module. Then the tuple (M0,M1, . . . ,Mr) is said to be a composition series of M, iff the Mi form an ascending chain of submodules Mi ≤m M (for any i ∈ 0 . . . r)

0 = M0 ⊂ M1 ⊂ . . . ⊂ Mr = M

such that for any i ∈ 1 . . . r the quotient module Mi/Mi−1 is simple. And by (3.73) this can be put as: for any i ∈ 1 . . . r and any submodule P ≤m M of M we get the following implication

Mi−1 ⊆ P ⊆ Mi =⇒ P = Mi−1 or P = Mi

If M admits a composition series (M0,M1, . . . ,Mr) then M is said to be of finite length. And we will see in (3.76) that in this case the length r of the series is uniquely determined. Hence this number will be called the length of the module M, denoted by

ℓ(M) := r ∈ N

And in case M admits no composition series we say that M is of infinite length, and we denote this situation by writing

ℓ(M) := ∞
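As a concrete instance of this definition (my own example, not from the text), the chain 0 ⊂ 〈6〉 ⊂ 〈2〉 ⊂ Z12 is a composition series of the Z-module Z12, giving ℓ(Z12) = 3, the number of prime factors of 12 counted with multiplicity. A brute-force check:

```python
# Composition series 0 ⊂ ⟨6⟩ ⊂ ⟨2⟩ ⊂ Z12 with simple quotients of
# orders 2, 3, 2 — so ℓ(Z12) = 3.

n = 12
subgroup = lambda g: {(a * g) % n for a in range(n)}
M0, M1, M2, M3 = subgroup(0), subgroup(6), subgroup(2), subgroup(1)

series = [M0, M1, M2, M3]
for small, big in zip(series, series[1:]):
    assert small < big                      # strictly increasing chain 0 ⊂ ... ⊂ M

# Each quotient Mi/Mi-1 is simple: its order is prime.
orders = [len(b) // len(s) for s, b in zip(series, series[1:])]
assert orders == [2, 3, 2]                  # prime orders ⇒ simple quotients
assert len(series) - 1 == 3                 # ℓ(Z12) = 3
```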


(3.75) Remark:

(i) If ℓ(M) = 0 then 0 = M0 = M, that is M is the zero-module. And clearly if M = 0 is zero, then (0) is the only available composition series. Hence there is the equivalency

ℓ(M) = 0 ⇐⇒ M = 0

(ii) If ℓ(M) = 1 then 0 = M0 ⊂ M1 = M, that is M = M/0 = M1/M0 is a simple R-module. Conversely if M is simple then (0,M) already is the one and only composition series of M, such that

ℓ(M) = 1 ⇐⇒ M is simple

(iii) We will often use the following trick: if N is a simple R-module and ι : M → N is an R-module monomorphism, then already M = 0 or ι : M ∼→ N even is an isomorphism.

Prob let Q := im(ι) ≤m N be the image of M in N. Then, as N is simple, we either have Q = 0 or Q = N. But as ι is injective, im(ι) = 0 implies M = ι−1(0) = 0, and im(ι) = N means that ι is surjective and hence an isomorphism ι : M ∼→ N.

(iv) Another trick is the following observation: if M is a simple R-module and ϱ : M ↠ N is an R-module-epimorphism, then already N = 0 or ϱ : M ∼→ N even is an isomorphism.

Prob let P := kn(ϱ) ≤m M be the kernel of ϱ in M. Then, as M is simple, we either have P = 0 or P = M. But kn(ϱ) = 0 implies that ϱ is injective (and hence an isomorphism ϱ : M ∼→ N) and kn(ϱ) = M implies N = im(ϱ) = 0.

(v) If V is a finite-dimensional vector space over the field E, then the dimension of V is precisely the length of V, that is

ℓ(V) = dim(V)

Prob let {v1, . . . , vn} ⊆ V be a basis of V, then we denote the subspaces Vi := lh(v1, . . . , vi) ≤v V for i ∈ 0 . . . n. Then we see E ≅v Vi/Vi−1 : a ↦ avi + Vi−1 such that Vi/Vi−1 is simple. Hence (V0, V1, . . . , Vn) is a composition series of V and in particular ℓ(V) = n.

(vii) If V is a vector-space over the skew-field (S,+, ·) and x ∈ V is any non-zero element x ≠ 0 of V, then clearly Sx ⊆ V is a simple submodule of V. In particular V already is semi-simple. To see this simply choose a basis B ⊆ V of V; then V can be decomposed into

V = ⊕b∈B Sb


(3.76) Theorem: (viz. 385) Jordan-Hölder Theorem
Let (R,+, ·) be a commutative ring and M be an R-module of finite length ℓ(M) < ∞. Then we obtain the following statements

(i) If P ≤m M is a submodule of M with P ≠ M, then the length of M exceeds the length of P, formally that is

ℓ(P) < ℓ(M)

(ii) If M is an arbitrary R-module and P ≤m M is any submodule of M, then M has finite length if and only if both P and the quotient M/P are of finite length, formally that is

M has finite length ⇐⇒ P and M/P have finite length

And in this case we even obtain the following equality for the length of the modules involved

ℓ(M) = ℓ(P) + ℓ(M/P)

(iii) Any two composition series of M are equivalent: That is, given any two composition series (M0,M1, . . . ,Mr) and (N0, N1, . . . , Ns) of M, then we get r = s and there is a permutation σ ∈ Sr such that for any i ∈ 1 . . . r there is an isomorphy

Mi/Mi−1 ≅m Nσ(i)/Nσ(i)−1

(iv) Let us denote the set of all strictly increasing chains of submodules ofM , that start in 0 and terminate in M by C(0,M), that is

C(0,M) := { (P0, P1, . . . , Pr) | ∀ i ∈ 0 . . . r : Pi ≤m M, 0 = P0 ⊂ P1 ⊂ . . . ⊂ Pr = M }

Then we obtain a partial order ≤ on C(0,M) by virtue of the following definition: given two chains (P0, P1, . . . , Pr) and (Q0, Q1, . . . , Qs) we say that (P0, P1, . . . , Pr) ≤ (Q0, Q1, . . . , Qs) iff r ≤ s and there is an injective map α : 0 . . . r → 0 . . . s with α(0) < α(1) < · · · < α(r) such that for any i ∈ 0 . . . r we get

Qα(i) = Pi

And thereby, if (M0, M1, . . . , Mr) ∈ C(0,M) is a series of submodules of M, the following statements are equivalent

(a) r = ℓ(M)

(b) (M0,M1, . . . ,Mr) is maximal

(c) (M0,M1, . . . ,Mr) is a composition series


(v) Any series of submodules of M can be refined into a composition series: That is, if P0 ⊂ P1 ⊂ . . . ⊂ Pk is a strictly increasing chain of submodules Pi ≤m M, then there is a composition series (M0, M1, . . . , Mr) of M such that (in the sense of (iv) above) we get

(P0, P1, . . . , Pk) ≤ (M0,M1, . . . ,Mr)

(vi) Consider a finite sequence of R-modules Mi where i ∈ 0 . . . n. That is for any i ∈ 0 . . . n we have ϕi+1ϕi = 0

0 --ϕ0--> M0 --ϕ1--> M1 --ϕ2--> · · · --ϕn--> Mn --ϕn+1--> 0

If for any i ∈ 0 . . . n the module Mi is of finite length and Hi denotes the i-th homology module Hi := kn(ϕi+1)/im(ϕi), then we get

∑i=0..n (−1)^i ℓ(Mi) = ∑i=0..n (−1)^i ℓ(Hi)
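The alternating-sum formula in (vi) can be spot-checked numerically for Q-vector spaces, where length equals dimension by (v) above. A minimal Python sketch (the helpers `rank` and `hom_dims` are mine, not from the text); it verifies the formula for the complex 0 → Q → Q² → Q → 0:

```python
from fractions import Fraction

def rank(M):
    """rank of a matrix over Q, computed via Gaussian elimination"""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def hom_dims(dims, maps):
    """dim H_i for a complex 0 -> M_0 -> ... -> M_n -> 0 of Q-vector spaces;
    maps[i] is the matrix of phi_{i+1} : M_i -> M_{i+1}, phi_0 and phi_{n+1}
    are the zero maps, and H_i = ker(phi_{i+1}) / im(phi_i)."""
    rk = [rank(m) for m in maps]
    out_rk, in_rk = rk + [0], [0] + rk
    # dim ker(phi_{i+1}) = dims[i] - out_rk[i], dim im(phi_i) = in_rk[i]
    return [dims[i] - out_rk[i] - in_rk[i] for i in range(len(dims))]

phi1 = [[1], [0]]   # matrix of phi_1 : Q -> Q^2
phi2 = [[0, 1]]     # matrix of phi_2 : Q^2 -> Q, and phi_2 phi_1 = 0
dims = [1, 2, 1]
h = hom_dims(dims, [phi1, phi2])
assert sum((-1) ** i * d for i, d in enumerate(dims)) \
    == sum((-1) ** i * d for i, d in enumerate(h))
```

Here both alternating sums are 0, since the chosen complex is exact.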


3.12 Localisation of Modules

We have already studied the localisation of rings in section 2.9, now we would like to transfer this concept to modules. It turns out that the localisation of modules combines neatly with the localisation of rings. Also localisation will preserve all the nice properties a module might have. Hence localisation of modules is as powerful as the localisation of rings and almost as useful.

Recall definition (2.93) which introduces the notion of a multiplicatively closed subset U ⊆ R of a commutative ring (R,+, ·). This is a subset U, such that 1 ∈ U and for any u, v ∈ U we have uv ∈ U, as well. We have already studied these sets in detail in section 2.9, so the reader is asked to refer to this section, in case of doubt. Let us now redo the construction of localisation for modules in precisely the same manner we used for rings:

(3.77) Definition: (viz. 222)
Let (R,+, ·) be any commutative ring, U ⊆ R be a multiplicatively closed subset and M be an R-module. Then we obtain an equivalence relation ∼ on the set U × M by virtue of (with x, y ∈ M and u, v ∈ U)

(u, x) ∼ (v, y) :⇐⇒ ∃w ∈ U : vwx = uwy

And we denote the quotient of U × M modulo ∼ by U−1M and the equivalence class of (u, x) is denoted by x/u, formally this is

x/u := { (v, y) ∈ U × M | (u, x) ∼ (v, y) }

U−1M := (U × M)/∼

Now U−1M becomes an R-module again, even a U−1R-module, under the following addition and scalar multiplications (where a ∈ R)

x/u + y/v := (vx + uy)/uv

a · (x/u) := (ax)/u

(a/u) · (y/v) := (ay)/uv

It is clear that under these operations the zero-element of U−1M is given to be 0/1 and we obtain a canonical R-module homomorphism from M to U−1M by letting

κ : M → U−1M : x 7→ x/1

In general this canonical homomorphism need not be an embedding (i.e. injective). Yet we do obtain the following implications

κ injective ⇐⇒ U ⊆ nzd(M)

U ⊆ R∗ =⇒ κ bijective


Proof of (3.77):

• We will first prove that ∼ is an equivalence relation: the reflexivity is due to 1 ∈ U, as u1a = u1a implies (u, a) ∼ (u, a). For the symmetry suppose (u, a) ∼ (v, b), i.e. there is some w ∈ U such that vwa = uwb. As = is symmetric we get uwb = vwa and this is (v, b) ∼ (u, a) again. To finally prove the transitivity we regard (u, a) ∼ (v, b) and (v, b) ∼ (w, c). That is there are p, q ∈ U such that pva = pub and qwb = qvc. Since p, q and v ∈ U we also have pqv ∈ U. Now compute (pqv)wa = qw(pva) = qw(pub) = pu(qwb) = pu(qvc) = (pqv)uc, this yields that truly (u, a) ∼ (w, c) are equivalent, too.

• Next we need to check that the operations + and · are well-defined. Suppose a/u = a′/u′ and b/v = b′/v′, that is there are p and q ∈ U such that pu′a = pua′ and qv′b = qvb′. Then we let w := pq ∈ U and compute

u′v′w(av + bu) = vv′q(pu′a) + uu′p(qv′b)
              = vv′q(pua′) + uu′p(qvb′)
              = uvw(a′v′ + b′u′)

u′v′w(ab) = (pu′a)(qv′b)
          = (pua′)(qvb′)
          = uvw(a′b′)

That is (a/u) + (b/v) = (av + bu)/uv = (a′v′ + b′u′)/u′v′ = (a′/u′) + (b′/v′) and (a/u)(b/v) = ab/uv = a′b′/u′v′ = (a′/u′)(b′/v′) respectively. And this is just the well-definedness of sum and product.

• Next it would be due to verify that U−1R thereby becomes a commutative ring. That is we have to check that + and · are associative, commutative and satisfy the distributivity law. Further that for any a/u ∈ U−1R we have a/u + 0/1 = a/u and a/u + (−a)/u = 0/1 and finally that a/u · 1/1 = a/u. But this can all be done in straightforward computations, which we wish to omit here.

• It is obvious that thereby κ becomes a homomorphism of rings, κ(0) = 0/1 is the zero-element and κ(1) = 1/1 is the unit element. Further κ(a + b) = (a + b)/1 = (a1 + b1)/(1 · 1) = a/1 + b/1 = κ(a) + κ(b) and κ(ab) = ab/(1 · 1) = (a/1)(b/1) = κ(a)κ(b).

• Let us now prove that κ is injective if and only if U ⊆ nzd(M): note that κ is injective, iff kn(κ) = 0, that is if x/1 = 0/1 implies that x = 0. But by definition x/1 = 0/1 means wx = w0 = 0 for some w ∈ U. That is κ is injective if and only if U and the set R \ nzd(M) = { a ∈ R | ∃ x ∈ M such that x ≠ 0 and ax = 0 } are disjoint. In other words that is U ⊆ nzd(M).

• Let us finally assume that U ⊆ R∗. Then it is clear that U ⊆ nzd(M) (a unit cannot annihilate a non-zero element) such that κ already is injective. It remains to prove the surjectivity. Hence consider any y/u ∈ U−1M. Then we choose w = 1 and x = u−1y, from which we find wux = uu−1y = y = wy such that κ(x) = x/1 = y/u. Together we derived the bijectivity of κ.


□
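The failure of injectivity of κ can be made concrete with a tiny finite example. A Python sketch (the helper `equiv` is mine, not from the text), taking R = M = Z/6Z and the multiplicatively closed subset U = {1, 2, 4}: here 3 ≠ 0 is annihilated by 2 ∈ U, so U ⊄ nzd(M) and κ(3) = 0/1.

```python
# localisation of the module M = R = Z/6Z at U = {1, 2, 4} (arithmetic mod 6)
U = {1, 2, 4}
# multiplicatively closed: contains 1 and is closed under products
assert 1 in U and all((u * v) % 6 in U for u in U for v in U)

def equiv(p, q):
    """(u, x) ~ (v, y)  iff  there is some w in U with v*w*x = u*w*y  (mod 6)"""
    (u, x), (v, y) = p, q
    return any((v * w * x) % 6 == (u * w * y) % 6 for w in U)

# kappa(x) = x/1: kappa(3) = 0/1 because 2*3 = 0 in Z/6Z and 2 lies in U,
# so kappa is NOT injective, while 1 does not map to 0/1
assert equiv((1, 3), (1, 0))
assert not equiv((1, 1), (1, 0))
```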


Chapter 4

Linear Algebra

4.1 Matrix Algebra

Matrices serve two purposes: first of all they can represent systems of linear equations. And secondly (by fixing ordered bases) they can be used to represent homomorphisms between free modules. Using matrices these concepts are interlocked: the set of solutions is the kernel of the homomorphism (resp. an affine shift of the kernel in case of an inhomogeneous equation).

Also matrices provide a method of keeping track of the manipulations of a system of linear equations - the elementary matrices. Hence the algorithm for solving a system of linear equations (the Gauss algorithm) can be expressed in terms of matrices. This ultimately leads to a criterion whether a system of linear equations is (unambiguously) solvable - the determinant.

So, as a first step, let us introduce the notion of a matrix and define the operations on matrices that will be used later on.

(4.1) Definition:
Let (R,+, ·) be any ring and fix a natural number 1 ≤ n ∈ N. As before we will concern ourselves with the set Rn of n-tuples over R

Rn := { (x1, . . . , xn) | x1, . . . , xn ∈ R }

In linear algebra it is customary (and occasionally useful) to write the n-tuple x = (x1, . . . , xn) in vertical order as well

(x1, . . . , xn) =
[ x1 ]
[ .. ]
[ xn ]

We already turned Rn into an R-algebra by establishing the pointwise operations on Rn, let us recall this: given any x = (x1, . . . , xn) and y = (y1, . . . , yn) ∈ Rn and any a ∈ R we have defined the operations:

(x1, . . . , xn) + (y1, . . . , yn) := (x1 + y1, . . . , xn + yn)

(x1, . . . , xn) · (y1, . . . , yn) := (x1 · y1, . . . , xn · yn)

a(x1, . . . , xn) := (ax1, . . . , axn)


As a new operation on Rn we define the scalar product (do not confuse this with the scalar multiplication) Rn × Rn → R : (x, y) 7→ 〈x, y〉 which is defined to be (as always x = (x1, . . . , xn) and y = (y1, . . . , yn))

〈x, y〉 := ∑i=1..n xi yi

Finally we need to fix an easy notation for the canonical projections πi of the product Rn, that is πi : Rn → R : x 7→ xi. Recall that the πi thereby are R-algebra homomorphisms. Given any i ∈ 1 . . . n we denote

[x]i := xi where x = (x1, . . . , xn)

(4.2) Remark:

(i) There are a multitude of ways how to embed R → Rn as a subalgebra. The most reasonable one (due to its symmetry) is to interpret a ∈ R as the tuple (a) = (a, . . . , a) ∈ Rn. The advantage of this is evident: the multiplication then coincides with the scalar multiplication: (a) · x = ax. Hence we fix

R → Rn : a 7→ (a) := (a, . . . , a)

(ii) Trivially Rn is a free module, having the Euclidean R-basis E = {e1, . . . , en}. Thereby the elements ei ∈ Rn are given to be (refer to (3.38) for details)

e1 := (1, 0, 0, . . . , 0, 0)

e2 := (0, 1, 0, . . . , 0, 0)

...

en := (0, 0, 0, . . . , 0, 1)

(iii) Clearly the scalar product on Rn has some neat computational properties: in fact it is a bilinear form on Rn, that is for any x, y, z ∈ Rn and any a ∈ R we get

〈ax + y, z〉 = a〈x, z〉 + 〈y, z〉   R-linearity (in the 1st argument)

〈x, ay + z〉 = a〈x, y〉 + 〈x, z〉   R-linearity (in the 2nd argument)

And if R even is a commutative ring, then the scalar product even is a symmetric bilinear form, that is in this case we get a third property:

〈x, y〉 = 〈y, x〉

Prob all these properties are straightforward applications of the axioms of rings in conjunction with the pointwise operations: 〈ax + y, z〉 = ∑i (axi + yi)zi = ∑i (axizi + yizi) = a ∑i xizi + ∑i yizi = a〈x, z〉 + 〈y, z〉. And the R-linearity in the second argument can be proved in complete analogy. If R is commutative then the symmetry of the scalar product is evident: 〈x, y〉 = ∑i xiyi = ∑i yixi = 〈y, x〉.
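The bilinearity and symmetry just proved can be spot-checked over the commutative ring R = Z. A minimal Python sketch (the helper `scalar` is mine, not from the text):

```python
def scalar(x, y):
    """scalar product <x, y> = sum_i x_i y_i on R^n (here R = Z)"""
    return sum(xi * yi for xi, yi in zip(x, y))

a = 3
x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 9)
ax_plus_y = tuple(a * xi + yi for xi, yi in zip(x, y))
ay_plus_z = tuple(a * yi + zi for yi, zi in zip(y, z))
# R-linearity in the first and second argument, and symmetry (Z is commutative)
assert scalar(ax_plus_y, z) == a * scalar(x, z) + scalar(y, z)
assert scalar(x, ay_plus_z) == a * scalar(x, y) + scalar(x, z)
assert scalar(x, y) == scalar(y, x)
```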

(iv) Note that in general there is a difference between the scalar product 〈x, y〉 and an inner product 〈x | y〉. The latter is defined in the case of R = R or R = C only. And in the case R = C there is a subtle distinction between the scalar and inner products - the inner product uses complex conjugates. However this difference has dire consequences: the scalar product over C is not positive definite, in fact we see that

〈(1, i), (1, i)〉 = 1^2 + i^2 = 1 − 1 = 0
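This computation is easy to reproduce in Python, where complex numbers model the case R = C (variable names are mine):

```python
# the (bilinear) scalar product over C is not positive definite:
# <(1, i), (1, i)> = 1^2 + i^2 = 0, while the inner product conjugates
v = (1, 1j)
scalar = sum(a * b for a, b in zip(v, v))               # no conjugation
inner = sum(a * b.conjugate() for a, b in zip(v, v))    # conjugate 2nd argument
assert scalar == 0
assert inner == 2
```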

(4.3) Definition:
Let (R,+, ·) be any ring again and fix two natural numbers 1 ≤ m, n ∈ N. Then an (m × n)-matrix is defined to be an n-tuple of m-tuples over R. Formally we will denote this set by

matm,n(R) := (Rm)n

A matrix usually is written as an m times n block of elements of R. That is if we have the m-tuples aj = (a1,j , . . . , am,j) ∈ Rm and j ∈ 1 . . . n then we write the matrix A = (a1, . . . , an) in the following form

[ a1,1  a1,2  · · ·  a1,n ]
[ a2,1  a2,2  · · ·  a2,n ]
[  ..    ..           ..  ]
[ am,1  am,2  · · ·  am,n ]   :=  (ai,j)  :=  A

That is the second index j of A = (ai,j) designates the jth m-tuple aj of A and this is written as the jth column of A. Complementary the first index i of A = (ai,j) designates the ith component within the m-tuple aj and these are written as the ith row of A.

To introduce a formal notation let us fix the (m × n)-matrix A = (ai,j) as above. Then - for any i ∈ 1 . . .m and any j ∈ 1 . . . n - we denote the (i, j)th entry of A, the ith row of A and the jth column of A respectively by

enti,j(A) := ai,j ∈ R

rowi(A) := (ai,1, . . . , ai,n) ∈ Rn

colj(A) := (a1,j , . . . , am,j) ∈ Rm

The transpose A∗ of an (m × n)-matrix A = (ai,j) is defined to be the (n × m)-matrix in which columns and rows are interchanged, that is

A∗ :=
[ a1,1  a2,1  · · ·  am,1 ]
[ a1,2  a2,2  · · ·  am,2 ]
[  ..    ..           ..  ]
[ a1,n  a2,n  · · ·  am,n ]   ∈  matn,m(R)


If m = n then the rectangular block A of elements becomes a square. As this is an important special case we abbreviate our notation somewhat:

matn(R) := matn,n(R) = (Rn)n

A special case of such a matrix is a diagonal matrix. That is a square matrix A = (ai,j) in which all entries off the diagonal are zero, i.e. ai,j = 0 for any i ≠ j ∈ 1 . . . n. And if we are given n elements a1, . . . , an ∈ R then the diagonal matrix with the entries ai on its diagonal is denoted by

dia(a1, . . . , an) :=
[ a1  0   · · ·  0  ]
[ 0   a2         0  ]
[ ..       ..       ]
[ 0   0          an ]   ∈  matn(R)

(4.4) Remark:

(i) By definition of an (m × n)-matrix, every n m-tuples aj = (ai,j) ∈ Rm (where j ∈ 1 . . . n) determine a matrix A := (a1, . . . , an). To stress that the aj build up the columns of A we write

[ a1 · · · an ] := (ai,j)

Conversely, if we are given m n-tuples ai = (ai,j) ∈ Rn (where i ∈ 1 . . .m) we can build up an (m × n)-matrix A = (ai,j) by taking the ai to be the rows of A. In this case we write

[ a1 ]
[ .. ]
[ am ]   := (ai,j)

(ii) Recall that an (m × n)-matrix A = (ai,j) is defined to be an n-tuple of m-tuples of R. And the (i, j)th entry ai,j of A is the ith component of the jth component of A. So in a somewhat formal way that is

[A]i,j := ai,j = [[A]j ]i

(4.5) Definition:
Let (R,+, ·) be any ring again and consider the natural numbers 1 ≤ m and n ∈ N. By construction the set matm,n(R) of (m × n)-matrices of R forms an R-module under the pointwise operations. That is: consider a ∈ R and two (m × n)-matrices A = (ai,j) and B = (bi,j) ∈ matm,n(R). Then we define the addition and scalar-multiplication

A+B := (ai,j + bi,j)

aB := (a bi,j)


Now consider another natural number 1 ≤ l ∈ N and suppose that A ∈ matl,m(R) is an (l × m)-matrix, whereas B ∈ matm,n(R) is an (m × n)-matrix. Then we can define the matrix-multiplication AB of A and B to be the following (l × n)-matrix

AB := (〈rowi(A), colk(B)〉) = ( ∑j=1..m ai,j bj,k ) ∈ matl,n(R)

A special case of this is the matrix-vector multiplication: given an (m × n)-matrix A ∈ matm,n(R) and an n-tuple x ∈ Rn over R we obtain an m-tuple Ax ∈ Rm by virtue of

Ax := (〈row1(A), x〉, . . . , 〈rowm(A), x〉) ∈ Rm
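These definitions translate directly into code. A minimal Python sketch over R = Z, with matrices as lists of rows (the helpers `matmul`, `matvec`, `row` and `col` are mine, not from the text); the (i, k) entry of AB is exactly the scalar product of the ith row of A with the kth column of B:

```python
def row(A, i):
    return A[i]

def col(B, k):
    return [b[k] for b in B]

def matmul(A, B):
    """entry (i, k) of AB is the scalar product <row_i(A), col_k(B)>"""
    return [[sum(a * b for a, b in zip(row(A, i), col(B, k)))
             for k in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    """Ax = (<row_1(A), x>, ..., <row_m(A), x>)"""
    return [sum(a * xi for a, xi in zip(r, x)) for r in A]

A = [[1, 2], [3, 4], [5, 6]]      # a (3 x 2)-matrix
B = [[1, 0, 2], [0, 1, 3]]        # a (2 x 3)-matrix
assert matmul(A, B) == [[1, 2, 8], [3, 4, 18], [5, 6, 28]]
assert matvec(A, [1, 1]) == [3, 7, 11]
```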

(4.6) Remark:

(i) Let us reformulate the above definitions of addition, scalar multiplication and vector multiplication in terms of the entries of the resulting matrices. That is we take A, B ∈ matm,n(R), a ∈ R and x ∈ Rn. Then for any i ∈ 1 . . .m and any j ∈ 1 . . . n we have

enti,j(aA+B) = a enti,j(A) + enti,j(B)

[Ax]i = ∑j=1..n enti,j(A) [x]j

And if A ∈ matl,m(R) and B ∈ matm,n(R) we can also reformulate the matrix multiplication for all the entries i ∈ 1 . . . l and k ∈ 1 . . . n

enti,k(AB) = ∑j=1..m enti,j(A) entj,k(B)

(ii) Note that the matrix-matrix multiplication AB is nothing but a matrix-vector multiplication Abk for all the column vectors bk of B. That is given A ∈ matl,m(R) and bk ∈ Rm (where k ∈ 1 . . . n) we have

A [ b1 · · · bn ] = [ Ab1 · · · Abn ]

(iii) Let us also reformulate the transposition A 7→ A∗ of matrices in terms of the rows, columns and entries. Then it is clear that for any A ∈ matm,n(R) and any i ∈ 1 . . .m, j ∈ 1 . . . n we get

entj,i(A∗) = enti,j(A)

coli(A∗) = rowi(A)

rowj(A∗) = colj(A)

Prob recall that entj,i(A∗) = enti,j(A) is just the definition of the transpose A∗. And thus we find coli(A∗) = (ent1,i(A∗), . . . , entn,i(A∗)) = (enti,1(A), . . . , enti,n(A)) = rowi(A). Likewise we compute rowj(A∗) = (entj,1(A∗), . . . , entj,m(A∗)) = (ent1,j(A), . . . , entm,j(A)) = colj(A).


(4.7) Proposition: (viz. 311)
Consider any ring (R,+, ·) and fix the natural numbers 1 ≤ l, m, n, p and q ∈ N. Then all of the following statements hold true

(i) The set matm,n(R) of (m × n)-matrices over R is an R-module under the addition and scalar multiplication of matrices, as defined in (4.5). The zero-element thereby is the zero-matrix

0 = (0) =
[ 0  0  · · ·  0 ]
[ 0  0  · · ·  0 ]
[ ..            ..]
[ 0  0  · · ·  0 ]

In fact matm,n(R) is a free R-module of rank mn. We can easily provide the Euclidean R-basis E := { Er,s | r ∈ 1 . . .m, s ∈ 1 . . . n } of matm,n(R). Thereby Er,s is the (m × n)-matrix that is entirely 0, except its (r, s)th entry, which is 1. Formally

Er,s := (δi,r δj,s) ∈ matm,n(R)

(ii) The multiplication of matrices is associative and distributive (but in general not commutative). Suppose that A, B and C are of compatible sizes, then

(AB)C = A(BC)

A(B + C) = AB +AC

(A+B)C = AC +BC

To be of compatible sizes means A ∈ matm,n(R), B ∈ matn,p(R) and C ∈ matp,q(R) for the associativity, A ∈ matl,m(R), B and C ∈ matm,n(R) for the left-distributivity, and A, B ∈ matl,m(R) and C ∈ matm,n(R) for the right-distributivity.

(iii) In particular the set matn(R) of (n × n)-matrices over R is an R-algebra under the addition, scalar-multiplication and matrix-multiplication of matrices as defined in (4.5). Its unit-element is the identity-matrix

11 = (δi,j) =
[ 1  0  · · ·  0 ]
[ 0  1         0 ]
[ ..     ..      ]
[ 0  0         1 ]

(iv) Recall the definition of the center cen(R) of a ring R to be the subring of all elements that commute with all other elements, that is cen(R) = { a ∈ R | ∀ b ∈ R : ab = ba }. Then the center of matn(R) is just the center of R embedded as diagonal matrices:

cen(matn(R)) = cen(R) 11 = { dia(a, . . . , a) | a ∈ cen(R) }


(v) The transposition A 7→ A∗ of matrices is an isomorphism of R-modules matm,n(R) ∼→ matn,m(R). In particular for any a ∈ R and any A, B ∈ matm,n(R) we have

(aA+B)∗ = aA∗ +B∗

(vi) If R is commutative, then the transposition of matrices swaps the order of matrix-multiplications. That is given any matrices A ∈ matl,m(R) and B ∈ matm,n(R) we find that

(AB)∗ = B∗A∗

(vii) If R is commutative then taking inverses in matn(R) commutes with the transposition. To be precise: if A ∈ matn(R) is a square matrix, then A is invertible, if and only if A∗ is invertible. And in this case the inverse of A∗ is given to be

(A∗)^-1 = (A^-1)∗

(viii) If R is a commutative ring, then for any vectors x ∈ Rn, y ∈ Rm and matrix A ∈ matm,n(R) we get the following identity in R

〈Ax, y〉 = 〈x,A∗y〉
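Statements (vi) and (viii) can be spot-checked over the commutative ring R = Z. A Python sketch (the helpers `transpose`, `matmul`, `matvec` and `scalar` are mine, not from the text):

```python
def transpose(A):
    """A* : interchange rows and columns"""
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(ra, cb)) for cb in zip(*B)] for ra in A]

def matvec(A, v):
    return [sum(a * vi for a, vi in zip(r, v)) for r in A]

def scalar(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[1, 2, 3], [4, 5, 6]]       # a (2 x 3)-matrix over Z
B = [[1, 0], [2, 1], [0, 3]]     # a (3 x 2)-matrix over Z
x, y = [1, 1, 1], [1, 2]
# (vi): (AB)* = B* A*
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
# (viii): <Ax, y> = <x, A* y>
assert scalar(matvec(A, x), y) == scalar(x, matvec(transpose(A), y))
```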


4.2 Elementary Matrices

(4.8) Definition:
Let (R,+, ·) be any ring and 1 ≤ n ∈ N. Then we will define the set gln(R) of invertible matrices, the set trin(R) of (upper) triangular matrices and the set dian(R) of diagonal matrices to be the following subsets of matn(R)

gln(R) := { A ∈ matn(R) | ∃ B ∈ matn(R) : AB = 11n = BA }

trin(R) := { A ∈ matn(R) | ∀ 1 ≤ j < i ≤ n : enti,j(A) = 0 }

dian(R) := { dia(a1, . . . , an) | a1, . . . , an ∈ R }

(4.9) Proposition: (viz. 314)
Let (R,+, ·) be any ring and 1 ≤ n ∈ N. Then the invertible matrices gln(R) is precisely the multiplicative group of the ring matn(R). In particular gln(R) is a (non-commutative) group under the multiplication of matrices

gln(R) = (matn(R))∗

And the upper triangular matrices form a subring of the ring matn(R), in particular trin(R) is closed under the addition and multiplication of matrices

trin(R) ≤r matn(R)

(4.10) Definition:
Let (R,+, ·) be any ring and 1 ≤ n ∈ N. Then we will introduce a number of special (n × n)-matrices. First of all, for any r, s ∈ 1 . . . n we recall the definition of the Euclidean matrices

Er,s := (δi,r δj,s) ∈ matn(R)

And if a1, . . . , an ∈ R are any scalars we can thereby introduce the diagonal matrix having the entries ai to be the following

dia(a1, . . . , an) := ∑i=1..n ai Ei,i =
[ a1  0   · · ·  0  ]
[ 0   a2         0  ]
[ ..       ..       ]
[ 0   0          an ]

Another important matrix will simply be denoted by Nn; it is an upper triangular matrix whose only non-zero entries are 1s, one step above the diagonal

Nn := (δi+1,j) =
[ 0  1          ]
[    0  ..      ]
[        ..   1 ]
[            0  ]   ∈ trin(R)

Now, if a ∈ R and i, j ∈ 1 . . . n, we define a dilatation matrix Di(a) and a translocation matrix Ti,j(a) to be (n × n)-matrices of the following form

Ti,j(a) := 11 n + aEi,j

Di(a) := 11 n + (a− 1)Ei,i

Consider any i, j ∈ 1 . . . n again, then we define a transposition matrix Pi,j to be an (n × n)-matrix of the following form

Pi,j = 11 n − Ei,i − Ej,j + Ei,j + Ej,i

More generally, let us denote the Euclidean basis of Rn by ej = (δi,j) again. That is e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0) and so on until en = (0, . . . , 0, 1). Then for any function σ : 1 . . . n → 1 . . . n we define the permutation matrix Pσ of σ to be the following

Pσ := (δσ(i),j) =
[ eσ(1) ]
[  ..   ]
[ eσ(n) ]   ∈ matn(R)

The set of elementary matrices finally consists of all dilatation matrices Di(a), all translocation matrices Ti,j(a) for i ≠ j and all transposition matrices Pi,j with i ≠ j. It will be denoted by

elmn(R) := { Di(a) | i ∈ 1 . . . n, a ∈ R } ∪ { Ti,j(a) | i ≠ j ∈ 1 . . . n, a ∈ R } ∪ { Pi,j | i ≠ j ∈ 1 . . . n }
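The defining formulas Ti,j(a) = 11n + aEi,j, Di(a) = 11n + (a − 1)Ei,i and Pi,j can be built explicitly in code. A Python sketch over Z with 0-indexed rows and columns (the helpers `identity`, `E`, `T`, `D` and `P` are mine, not from the text):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def E(n, r, s):
    """Euclidean matrix E_{r,s}: a single 1 at entry (r, s)"""
    M = [[0] * n for _ in range(n)]
    M[r][s] = 1
    return M

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * a for a in r] for r in A]

def T(n, i, j, a):
    """translocation matrix T_{i,j}(a) = 1 + a E_{i,j}"""
    return madd(identity(n), smul(a, E(n, i, j)))

def D(n, i, a):
    """dilatation matrix D_i(a) = 1 + (a - 1) E_{i,i}"""
    return madd(identity(n), smul(a - 1, E(n, i, i)))

def P(n, i, j):
    """transposition matrix P_{i,j} = 1 - E_{i,i} - E_{j,j} + E_{i,j} + E_{j,i}"""
    M = identity(n)
    M[i][i] = M[j][j] = 0
    M[i][j] = M[j][i] = 1
    return M

assert T(3, 0, 2, 5) == [[1, 0, 5], [0, 1, 0], [0, 0, 1]]
assert D(3, 1, 7) == [[1, 0, 0], [0, 7, 0], [0, 0, 1]]
assert P(3, 0, 1) == [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
```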

(4.11) Remark:
The above definitions are formally correct but some of them are a little bit obfuscated. Hence we wish to enlighten them by rewriting the respective formulas in matrix notation:

• First of all the dilatation matrix Di(a) is nothing but a diagonal matrix - start with the identity matrix 11n but replace the (i, i)-th coefficient with a. That is

Di(a) = dia(1, . . . , a, . . . , 1)   (with a at the i-th position)


• Next - for i ≠ j - the translocation matrix Ti,j(a) is just the identity matrix 11n with the (i, j)-th entry being a instead of 0.

• By definition the transposition matrix Pi,j is the identity matrix in which the (i, i)-th entry 1 is moved to (j, i) and the (j, j)-th entry 1 is moved to (i, j). In that we find that Pi,j is the permutation matrix that belongs to (i j), i.e. the permutation that interchanges i and j

Pi,j = P(i j)

(4.12) Proposition: (viz. 314)
Let (R,+, ·) be any ring and 1 ≤ m, n ∈ N. Then we obtain the following formulae when multiplying one of the above matrices with another matrix

(i) For any r, s ∈ 1 . . . n let Er,s ∈ matn(R) denote the respective Euclidean matrix. Then for any (m × n)-matrix A ∈ matm,n(R) and any (n × m)-matrix B ∈ matn,m(R) we get

colk(AEr,s) = colr(A) if k = s,   colk(AEr,s) = 0 if k ≠ s

rowi(Er,sB) = rows(B) if i = r,   rowi(Er,sB) = 0 if i ≠ r

(ii) Consider the square matrix A = (ai,j) ∈ matn(R), then for any four indices i, j, k and l ∈ 1 . . . n we have the following identity

Ei,j AEk,l = aj,k Ei,l

Using this equation on the identity matrix A = 11 n = (δi,j) we also get

Ei,j Ek,l = δj,k Ei,l

(iii) For any r, s ∈ 1 . . . n let Er,s ∈ matn(R) denote the respective Euclidean matrix again. Also let a ∈ R, A ∈ matm,n(R) and B ∈ matn,m(R).


Multiplying with the translocation matrix Tr,s(a) from the left adds a-times the sth row to the rth row of B. Likewise, multiplying with Tr,s(a) from the right adds a-times the rth column to the sth column of A. That is for any i, k ∈ 1 . . . n we get

colk(ATr,s(a)) = colk(A) + colr(A) a if k = s,   colk(ATr,s(a)) = colk(A) if k ≠ s

rowi(Tr,s(a)B) = rowi(B) + a rows(B) if i = r,   rowi(Tr,s(a)B) = rowi(B) if i ≠ r
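The row/column interpretation of translocation matrices can be verified directly. A Python sketch over Z, 0-indexed (the helpers `matmul` and `T` are mine, not from the text):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(ra, cb)) for cb in zip(*B)] for ra in A]

def T(n, r, s, a):
    """translocation matrix T_{r,s}(a): identity with an extra a at entry (r, s)"""
    M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    M[r][s] = a
    return M

B = [[1, 2], [3, 4], [5, 6]]                 # a (3 x 2)-matrix
# T_{0,2}(10) B adds 10 times row 2 to row 0
assert matmul(T(3, 0, 2, 10), B) == [[51, 62], [3, 4], [5, 6]]

A = [[1, 2, 3], [4, 5, 6]]                   # a (2 x 3)-matrix
# A T_{0,2}(10) adds 10 times column 0 to column 2
assert matmul(A, T(3, 0, 2, 10)) == [[1, 2, 13], [4, 5, 46]]
```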


(iv) Consider any a1, . . . , an ∈ R and any x1, . . . , xn ∈ Rm, then the multiplication with the diagonal matrix of the aj is nothing but the scalar multiplication with these elements, formally

dia(a1, . . . , an) · (x1; . . . ; xn) = (a1x1; . . . ; anxn)   (the xi as rows)

(x1 · · · xn) · dia(a1, . . . , an) = (x1a1 · · · xnan)   (the xi as columns)

(v) Multiplying with Nn from the left shifts the rows up one step, multiplying with Nn from the right shifts the columns right one step. Formally put: consider any x1, . . . , xn ∈ Rm then we get

Nn · (x1; x2; . . . ; xn) = (x2; . . . ; xn; 0)   (the xi as rows)

(x1 x2 · · · xn) · Nn = (0 x1 · · · xn−1)   (the xi as columns)

(vi) If x1, . . . , xn ∈ Rm again and σ : 1 . . . n → 1 . . . n is an arbitrary function, then multiplying with the permutation matrix Pσ from the left permutes the rows with σ, formally

Pσ · (x1; . . . ; xn) = (xσ(1); . . . ; xσ(n))   (the xi as rows)

Now suppose that σ : 1 . . . n → 1 . . . n even is bijective. Then multiplying with the permutation matrix Pσ from the right permutes the columns with σ^-1, formally

(x1 · · · xn) · Pσ = (xσ^-1(1) · · · xσ^-1(n))   (the xi as columns)

(4.13) Corollary: (viz. 316)
Let (R,+, ·) be any ring and 1 ≤ n ∈ N, then we summarize our results concerning the inverse matrices of the various elementary matrices

(i) The set of diagonal matrices is an R-subalgebra of matn(R), that is isomorphic to the R-algebra Rn. The isomorphism thereby is given by

Rn ∼→ dian(R) : (a1, . . . , an) 7→ dia(a1, . . . , an)

In particular the matrix dia(a1, . . . , an) is invertible if and only if all the ai are invertible in R, i.e. ai ∈ R∗. And in this case the inverse matrix is given to be

dia(a1, . . . , an)^-1 = dia(a1^-1, . . . , an^-1)

(ii) In particular for any i ∈ 1 . . . n and any a, b ∈ R we have the identity

Di(a)Di(b) = Di(ab)

And Di(a) is invertible if and only if a ∈ R∗ is a unit in R. And in this case the inverse of Di(a) is given to be

Di(a)^-1 = Di(a^-1)

(iii) Consider any i ≠ j ∈ 1 . . . n and any a, b ∈ R. Then the matrix multiplication of Ti,j(a) and Ti,j(b) simply is

Ti,j(a)Ti,j(b) = Ti,j(a+ b)

In particular any translocation matrix Ti,j(a) with i ≠ j is invertible and the inverse matrix of Ti,j(a) is given to be

Ti,j(a)^-1 = Ti,j(−a)

(iv) Let us denote the identity map by id : 1 . . . n → 1 . . . n : i 7→ i. Then clearly the permutation matrix of id is the identity matrix Pid = 11 n. And if σ and τ : 1 . . . n → 1 . . . n are any two functions on 1 . . . n, then

Pσ Pτ = Pσ τ

In particular the permutation matrix Pσ is invertible, if and only if σ is bijective, and in this case the inverse is given to be

(Pσ)^-1 = P(σ^-1)
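The inverse formulas for translocation and dilatation matrices can be verified over Q. A Python sketch, 0-indexed, using `Fraction` for the inverse a^-1 (the helpers `matmul`, `identity`, `T` and `D` are mine, not from the text):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(ra, cb)) for cb in zip(*B)] for ra in A]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def T(n, i, j, a):
    """translocation T_{i,j}(a): identity plus an entry a at (i, j), i != j"""
    M = identity(n); M[i][j] = a; return M

def D(n, i, a):
    """dilatation D_i(a): identity with the (i, i) entry replaced by a"""
    M = identity(n); M[i][i] = a; return M

n, a = 3, Fraction(5)
# T_{i,j}(a)^{-1} = T_{i,j}(-a)  and  D_i(a)^{-1} = D_i(a^{-1})
assert matmul(T(n, 0, 2, a), T(n, 0, 2, -a)) == identity(n)
assert matmul(D(n, 1, a), D(n, 1, 1 / a)) == identity(n)
```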


4.3 Linear Equations

A system of equations is a (finite) collection of equations for several variables x1, . . . , xn over some ring R. That is we are given some functions fi : Rn → R and some bi ∈ R (where i ∈ 1 . . .m) and consider the equations

f1(x1, . . . , xn) = b1
f2(x1, . . . , xn) = b2
. . .
fm(x1, . . . , xn) = bm

A solution of this system of equations is an n-tuple (x1, . . . , xn) ∈ Rn that satisfies the equations above. The set of solutions is sometimes denoted by

L = { x ∈ Rn | ∀ i ∈ 1 . . .m : fi(x) = bi }

However the question remains how to solve such a system of equations. In this generality the problem is absolutely unsolvable. In fact very little can be said for sure: in the case R = R or R = C there is the local inversion theorem which (under certain conditions) guarantees the existence of a solution. To truly find one there are a multitude of numerical methods (e.g. the Newton algorithm). If all the fi are polynomial and R is a(n algebraically closed) field, then the vast theory of algebraic geometry studies the structure of the set of solutions.

The one and only case in which a complete theory of solving a system of equations exists is the case of linear equations. Thereby the system is said to be linear, iff all the fi are multi-linear, i.e. any xk 7→ fi(. . . , xk, . . . ) is R-linear. In this case the system of equations is of the form

a1,1 x1 + . . . + a1,n xn = b1
a2,1 x1 + . . . + a2,n xn = b2
. . .
am,1 x1 + . . . + am,n xn = bm

for some fixed coefficients ai,k ∈ R and bi ∈ R (where i ∈ 1 . . .m and k ∈ 1 . . . n). Note that matrices provide a very compact notation for linear systems of equations. Just let A = (ai,k) ∈ matm,n(R) and b = (b1, . . . , bm) ∈ Rm, then the above system boils down to a single equation for vectors (that is finite tuples over R)

Ax = b

Still a notation does not provide an algorithm for solving the equations. Now this is the purpose of this section and we start by giving an example:

(1) 2x1 + 2x2 − 3x3 + 2x4 = 8
(2) 2x1 + 4x2 + 8x3 − 4x4 = 6
(3) −3x1 − 4x2 + x4 = 1

This is a (3 × 4)-system and we seek all 4-tuples (x1, x2, x3, x4) satisfying all 3 equations above. The problem is that these equations are interlocked with each other, so let us try to disentangle these equations. We see that subtracting equation (1) from equation (2) the coefficient of x1 vanishes, so let's try (2) − (1) → (2). With equation (3) the case is not so simple, but it works out fine with 3(1) + 2(3) → (3)

(1) 2x1 + 2x2 − 3x3 + 2x4 = 8
(2) 2x2 + 11x3 − 6x4 = −2
(3) −2x2 − 9x3 + 8x4 = 26

So we have reduced the problem by the variable x1 at the cost of equation (1) - if we continue using (1), then x1 will reappear. So we restrict ourselves to (2) and (3). To eliminate x2 let us add (2) and (3), i.e. (3) + (2) → (3)

(1) 2x1 + 2x2 − 3x3 + 2x4 = 8
(2) 2x2 + 11x3 − 6x4 = −2
(3) 2x3 + 2x4 = 24

Let us suppose R = Q, then dividing by 2 we find x3 + x4 = 12 - this is a multitude of solutions, every x4 gives rise to another x3. Let us denote s := x4, then x3 = 12 − s due to equation (3). But by equation (2) this now means 2x2 = −2 − 11(12 − s) + 6s = −134 + 17s and hence x2 = −67 + 17s/2. This can be inserted into equation (1): 2x1 = 8 − (−134 + 17s) + 3(12 − s) − 2s = 178 − 22s and hence x1 = 89 − 11s. Altogether any solution x of this system of equations is of the form

(x1; x2; x3; x4) ∈ (89; −67; 12; 0) + Q · (−11; 17/2; −1; 1)   (written as columns)
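The solution family just derived can be verified against the original system. A Python sketch using `Fraction` for the entry 17/2 (the helper `matvec` and the variable names are mine, not from the text):

```python
from fractions import Fraction

A = [[2, 2, -3, 2], [2, 4, 8, -4], [-3, -4, 0, 1]]   # coefficient matrix
b = [8, 6, 1]
u = [89, -67, 12, 0]                                  # special solution (s = 0)
d = [-11, Fraction(17, 2), -1, 1]                     # kernel direction

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(r, x)) for r in A]

# u solves Ax = b, d lies in kn(A), hence every u + s d solves Ax = b
assert matvec(A, u) == b
assert matvec(A, d) == [0, 0, 0]
s = Fraction(3)
x = [ui + s * di for ui, di in zip(u, d)]
assert matvec(A, x) == b
```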

(4.14) Proposition: (viz. 359)
Let (R,+, ·) be an arbitrary ring, 1 ≤ m, n ∈ N and consider any matrix A ∈ matm,n(R) over R. Finally let x ∈ Rn and b ∈ Rm, then we obtain the following statements

(i) Let us denote L(Ax = b) := x ∈ Rn | Ax = b the set of solutions ofthe linear system Ax = b of equations. If b 6∈ im(A) then there is nosolution L(Ax = b) = ∅. Conversely if b ∈ im(A) then there is somespecial solution u ∈ Rn with Au = b. And the set of solutions is thengiven to be

L(Ax = b) = u+ kn(A)

(ii) If S ∈ glm(R) and T ∈ gln(R) are any two invertible matrices, let us denote D := SAT ∈ matm,n(R). Thereby the following two statements are equivalent:

(a) Ax = b

(b) there is some z ∈ Rn such that x = Tz and Dz = Sb

(iii) If we use the same notation as in (i) and (ii), then the set of solutionsof Ax = b can be reformulated as

L(Ax = b) = T L(Dx = Sb)


(iv) Let us continue with (iii): if Sb ∉ im(D) then there is no solution, L(Ax = b) = ∅. Conversely if Sb ∈ im(D) then there is some u ∈ Rn with Du = Sb, and the set of solutions is then given by

L(Ax = b) = Tu + T kn(D)

This proposition provides a strategy for solving linear systems Ax = b of equations: find invertible matrices S and T such that D = SAT becomes an easy matrix. Well, D is easy if the equation Dx = Sb can be solved easily. This of course is the case if D is something like a diagonal matrix - one entry per column - as in this case we do not have an interlocked system of equations, but simply separate equations of the form ai xi = bi.

So we will have to define when the matrix D is easy (this will be called the Gauss-form) and we have to provide a way of finding S and T - this will be the Gauss-algorithm. Then we only have to solve Du = Sb (supposing such a u exists), as then L(Ax = b) = Tu + T kn(D).

(4.15) Definition:
Let (R,+, ·) be an arbitrary ring and 1 ≤ m, n ∈ N; then for any n-tuple a = (a1, . . . , an) ∈ Rn with a ≠ 0 we denote

µ(a) := min{ k ∈ 1 . . . n | ak ≠ 0 } ∈ 1 . . . n

and specifically for a = 0 we let µ(a) := ∞. That is, µ(a) is the index of the first non-zero entry of a. Analogously, for an (m × n)-matrix A ∈ matm,n(R) with A = (ai,k) ≠ 0 we define µ(A) to be the index of the first non-zero entry of any of its rows, that is

µ(A) := min{ µ(rowiA) | i ∈ 1 . . .m } = min{ k ∈ 1 . . . n | ∃ i ∈ 1 . . .m : ai,k ≠ 0 }

and specifically for A = 0 we let µ(A) := ∞ again. Thus if A ≠ 0 we trivially have µ(A) ∈ 1 . . . n, as before.

(4.16) Definition:
Let (R,+, ·) be an arbitrary ring and 1 ≤ m, n ∈ N again; then a matrix A ∈ matm,n(R) is said to be in echelon-form, iff either A = 0 is zero, or A ≠ 0 and there is some r ∈ 1 . . .m such that:

(1) µ(row1A) < · · · < µ(rowrA) and

(2) ∀ i ∈ N with r < i ≤ m we have rowiA = 0

That is, the non-zero entries of A form a descending (from top left to bottom right) stair. In particular the zero-rows of A are at the bottom of A. Now A is said to be in Gauss-form, iff A is in echelon-form and any non-zero row begins with a 1 ∈ R (not just any non-zero element of R). That is: let r ∈ N be as before, r := max{ i ∈ 1 . . .m | rowiA ≠ 0 }, and let us abbreviate k(i) := µ(rowiA) for any i ∈ 1 . . . r; then A is in Gauss-form, iff also

(3) ∀ i ∈ 1 . . . r we get ai,k(i) = 1
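Conditions (1) - (3) are easy to test mechanically. A small sketch (function names are mine, not the book's) checking a matrix, given as a list of rows, for echelon-form and for Gauss-form:

```python
import math

def mu(row):
    """mu(a): 1-based index of the first non-zero entry, infinity for a = 0."""
    return next((k for k, a in enumerate(row, start=1) if a != 0), math.inf)

def is_echelon(A):
    """(1) strictly increasing pivot indices, (2) zero rows at the bottom."""
    pivots = [mu(row) for row in A]
    finite = [p for p in pivots if p != math.inf]
    return pivots == sorted(pivots) and len(set(finite)) == len(finite)

def is_gauss(A):
    """Echelon-form and every pivot entry equal to 1, condition (3)."""
    return is_echelon(A) and all(
        row[mu(row) - 1] == 1 for row in A if mu(row) != math.inf)

# the two matrices of example (4.17) below
A = [[1, 0, -1, 4], [0, 0, 2, 0], [0, 0, 0, 0]]
B = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 1, 1, 1], [0, 0, 0, 1]]
assert is_echelon(A) and not is_gauss(A)
assert not is_echelon(B)
```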

(4.17) Example:
The following (3 × 4)-matrix features r = 2, µ(row1A) = 1, µ(row2A) = 3 and µ(row3A) = ∞ and hence is in echelon-form. However it is not in Gauss-form, as ent2,3A = 2 ≠ 1

$$A = \begin{pmatrix} 1 & 0 & -1 & 4 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

The next example of a (4 × 4)-matrix is not in echelon-form, as the second column features a double-step. Formally that is µ(row2B) = 2 and µ(row3B) = 2, which contradicts the strict inequality µ(row2B) < µ(row3B)

$$B = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

(4.18) Example:
If a system of linear equations is in Gauss-form, then it is truly possible to solve this system. As an example, let us consider the following system

$$\begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 0 & 1 & -4 & 5 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} x = \begin{pmatrix} -5 \\ 7 \\ 2 \\ 0 \end{pmatrix}$$

Written explicitly - where we denote x = (x1, x2, x3, x4, x5) as usual - this system abbreviates the following set of equations

(1) x2 + 2x3 − 3x4 + x5 = −5
(2)       x3 − 4x4 + 5x5 = 7
(3)                   x5 = 2
(4)                    0 = 0

First of all it is clear that equation (4) is a void condition; the set L of solutions is solely determined by equations (1) to (3). This would have been entirely different if we had the equation 0 = 1: in this case we would have L = ∅, as there would be no specific solution u. Also the variable x1 does not occur in the system at all - due to col1A = 0. Consequently the set L of solutions will be of the form L = R × L′ for some subset L′ ⊆ R4.

Let us now turn our attention to solving this system: in this case we already know x5 = 2 and this allows us to deduce x3 = 7 + 4x4 − 5x5 = −3 + 4x4 from equation (2). This again can be used in equation (1) to obtain x2 = −5 − 2x3 + 3x4 − x5 = −7 − 2(−3 + 4x4) + 3x4 = −1 − 5x4. Altogether the solutions x are of the form

$$x = \begin{pmatrix} x_1 \\ -1 - 5x_4 \\ -3 + 4x_4 \\ x_4 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \\ -3 \\ 0 \\ 2 \end{pmatrix} + x_1 \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + x_4 \begin{pmatrix} 0 \\ -5 \\ 4 \\ 1 \\ 0 \end{pmatrix}$$


So, as could have been expected from (4.14), the set of solutions is of the form L = u + K, where u = (0,−1,−3, 0, 2) is a specific solution to the system and K is the kernel of A; here K is a rank-2 submodule of R5.
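The back-substitution carried out above can be mechanised for any matrix in Gauss-form. A small sketch (my own naming; exact arithmetic over Q via `fractions`) that recovers the special solution u and one further solution of example (4.18):

```python
from fractions import Fraction

G = [[0, 1, 2, -3, 1],
     [0, 0, 1, -4, 5],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]
b = [-5, 7, 2, 0]

def back_substitute(G, b, free):
    """Solve Gx = b for G in Gauss-form, with prescribed values for the
    free variables (0-based column indices without a pivot)."""
    n = len(G[0])
    x = [Fraction(free.get(j, 0)) for j in range(n)]
    pivots = [(i, next(j for j, g in enumerate(G[i]) if g != 0))
              for i in range(len(G)) if any(G[i])]
    for i, k in reversed(pivots):        # from the last pivot row upwards
        x[k] = b[i] - sum(G[i][j] * x[j] for j in range(k + 1, n))
    return x

# free variables are x1 and x4 (0-based indices 0 and 3)
assert back_substitute(G, b, {0: 0, 3: 0}) == [0, -1, -3, 0, 2]
assert back_substitute(G, b, {0: 1, 3: 1}) == [1, -6, 1, 1, 2]
```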


(4.19) Lemma: (viz. 359) Gaussian Algorithm
Let (R,+, ·) be a commutative ring, 1 ≤ m, n ∈ N and consider any (m × n)-matrix A = (ai,j) ∈ matm,n(R) over R. First recall the definition of the elementary matrices (4.10), then proceed:

There is an (m × m)-matrix S ∈ matm(R) such that S is a product of elementary matrices, S = S1 · · · Sℓ, where Sk ∈ elmmR, and

E := S A ∈ matm,n(R)

is in echelon-form. And this matrix S can be found using the following Gaussian algorithm, which iterates on the number m of rows of A:

(1) Initialisation: Pick up a working matrix E = (ei,j) := A and beginwith S := 11m. The algorithm starts with row number r := 1.

(2) Termination: If the r-th row and all rows below that are zero (that isrowi(E) = 0 for any i ∈ r . . .m), then E is in echelon form and hencewe may terminate. Else we commence with step (3)

(3) Targeting: Among the r-th to last rows of E seek out the row with the first non-zero entry; say this is the s-th row. To be precise we take

u := min{ µ(rowvE) | v ∈ r . . .m }
s := min{ t ∈ r . . .m | µ(rowtE) = u }

(4) The Pivot: Swap the r-th and the s-th row of E, that is we move therow with the first non-zero entry up to the top row of our current step

S := Pr,sS

E := Pr,sE

(5) The Procedure: Now, for any i ∈ (r + 1) . . .m proceed as follows: if ei,u = 0 then ignore this i; else - if ei,u ≠ 0 - replace

S := Ti,r(−ei,u)Di(er,u)S

E := Ti,r(−ei,u)Di(er,u)E

(6) The Recursion: Supposing r < m, consider the next row; that is, replace r := r + 1 and iterate to step (2). If r = m, then terminate the algorithm - E is in echelon-form. Note that step (2) may also terminate the algorithm. In any case we find E = SA, the matrix in echelon-form.

(4.20) Remark:

(i) Let us explain the Gaussian algorithm a bit: the main step (5) is where the actual computations are done. If we decode the operations of the elementary matrices they actually become quite simple: in fact, for any i ∈ (r + 1) . . .m the algorithm simply multiplies row number i by er,u and then subtracts ei,u times row number r, that is

rowiE := er,u(rowiE) − ei,u(rowrE)


(ii) As the algorithm is iterated at most m times and the r-th iterationrequires at most m− r row-operations (as described in (i) above) thealgorithm needs at most m(m− 1)/2 row-operations in total.

(iii) If R is a field, then all the elementary matrices are invertible elmmR ⊆glmR and hence S is invertible, as a product of such matrices.

(iv) If R is a field, the matrix E = (ei,j) in echelon-form can easily be processed into a matrix G in Gauss-form: let r be the number of non-zero rows of E, that is r = #{ i ∈ 1 . . .m | rowiE ≠ 0 }. Further let k(i) := µ(rowiE) ∈ 1 . . . n and ei := ei,k(i), and define

$$D := \operatorname{dia}\left(\frac{1}{e_1}, \dots, \frac{1}{e_r}, 1, \dots, 1\right), \qquad G := D E$$

Then G is in Gauss-form, T := DS still is a product of elementary matrices, and G = TA. And as we will see in (v) this allows us to solve the system of linear equations Ax = b.

(v) Suppose we are given a system of linear equations Ax = b over a field R. We then use the Gaussian algorithm and (iv) to find a matrix T ∈ glmR such that G = TA is in Gauss-form. Then by (4.14) the set of solutions of Ax = b is given by:

L(Ax = b) = L(Gx = Tb)

Hence it remains to solve the equation Gx = Tb. But as G = (gi,j) is in Gauss-form this is feasible: see the example above. For a more formal approach let us denote r = #{ i ∈ 1 . . .m | rowiG ≠ 0 } ∈ 0 . . .m, k(i) := µ(rowiG) ∈ 1 . . . n and gi := gi,k(i) again. Then we can iteratively compute the solution, beginning with i = r down to i = 1

$$x_{k(i)} = [Tb]_i - \sum_{j = k(i)+1}^{n} g_{i,j}\, x_j$$

Thereby all the variables { x1, . . . , xn } \ { xk(1), . . . , xk(r) } remain undetermined. That is, any choice of these variables produces a solution, whereas the variables xk(1), . . . , xk(r) are determined by these. In other words the kernel of G is an (n − r)-dimensional subspace of Rn and the set of solutions is of the form

L(Ax = b) = u + kn(G)

(vi) It might be helpful to reformulate the Gaussian algorithm in pseudo-code, so that's what we want to do now. First of all we will need the following variables: A = (ai,j) and E = (ei,j) ∈ matm,nR, S = (si,j) ∈ matmR, also i, r, s and u ∈ N, and finally a swap buffer for rows (the rows of E lie in Rn, those of S in Rm). Note that the row swap of step (4) has to be performed on S as well, and that the coefficients er,u and ei,u of the row operations are taken from E for both updates:

S ← 11m
E ← A
r ← 1
while (r < m) and (∃ s ∈ r . . .m : rowsE ≠ 0) do begin
    u ← min{ µ(rowvE) | v ∈ r . . .m }
    s ← min{ t ∈ r . . .m | µ(rowtE) = u }
    if (s ≠ r) then begin
        swap ← rowrE, rowrE ← rowsE, rowsE ← swap
        swap ← rowrS, rowrS ← rowsS, rowsS ← swap
    end-if
    for i := r + 1 to m step 1 do begin
        rowiS ← er,u(rowiS) − ei,u(rowrS)
        rowiE ← er,u(rowiE) − ei,u(rowrE)
    end-for
    r ← r + 1
end-while
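The pseudo-code above translates almost literally into Python. Here is a minimal sketch (my own function names; for brevity it returns only the echelon matrix E and does not accumulate the transformation matrix S). Since the update row_i := e_ru·row_i − e_iu·row_r avoids division, it works over any commutative ring, e.g. over Z:

```python
import math

def mu(row):
    """0-based index of the first non-zero entry, infinity for a zero row."""
    return next((k for k, a in enumerate(row) if a != 0), math.inf)

def gauss_echelon(A):
    """Division-free row reduction as in (4.19): returns E in echelon-form."""
    E = [list(row) for row in A]
    m = len(E)
    r = 0
    while r < m and any(any(E[i]) for i in range(r, m)):
        u = min(mu(E[i]) for i in range(r, m))             # target column
        s = min(i for i in range(r, m) if mu(E[i]) == u)   # pivot row
        E[r], E[s] = E[s], E[r]                            # step (4): pivot
        for i in range(r + 1, m):                          # step (5)
            if E[i][u] != 0:
                E[i] = [E[r][u] * a - E[i][u] * b
                        for a, b in zip(E[i], E[r])]
        r += 1
    return E

# the matrix of example (4.21) below
A = [[0, 1, 2, -3, 1],
     [0, 1, 4, -7, 6],
     [0, 2, 4, -6, 2],
     [0, -1, 1, 3, 0]]
assert gauss_echelon(A) == [[0, 1, 2, -3, 1],
                            [0, 0, 2, -4, 5],
                            [0, 0, 0, 12, -13],
                            [0, 0, 0, 0, 0]]
```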

(4.21) Example:
We would like to enlighten things a little by presenting an example of the workings of the Gaussian algorithm, in the case of the following matrix

$$A = \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 1 & 4 & -7 & 6 \\ 0 & 2 & 4 & -6 & 2 \\ 0 & -1 & 1 & 3 & 0 \end{pmatrix}$$

First of all the Gauss-algorithm initializes E := A, S := 11m and starts with row r = 1. Now, as A is non-zero, the algorithm does not terminate right away. As the 2-nd column is the first with a non-zero entry, we get u = 2 and s = 1. Now as r = s there is no need to pivot (and we have e1,2 = 1 ≠ 0).

In the central step, for i = 2, we get S = T2,1(−1)D2(1) and multiply E by T2,1(−1)D2(1) from the left. That is, the 2-nd row is multiplied by 1 (which changes nothing) and then −1 times the 1-st row is added to the 2-nd row. Likewise for i = 3 we multiply by T3,1(−2)D3(1) from the left. That is, the 3-rd row is multiplied by 1 (void again) and then −2 times the 1-st row is added to the 3-rd row. Finally for i = 4 we multiply by T4,1(1)D4(1) from the left, that is, we add the 1-st row to the 4-th row. So far we have got S = T4,1(1)D4(1)T3,1(−2)D3(1)T2,1(−1)D2(1) and E = SA, which amounts to

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 1 & 4 & -7 & 6 \\ 0 & 2 & 4 & -6 & 2 \\ 0 & -1 & 1 & 3 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 0 & 2 & -4 & 5 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 1 \end{pmatrix}$$

Now the Gauss-algorithm iterates with r = 2. The first column with non-zero entries (from row 2 on) is u = 3, found first in row s = 2, so again there is no need to pivot (e2,3 = 2 ≠ 0). However for i = 3 we find e3,3 = 0 and hence we continue with i = 4 already. Hereby we multiply by T4,2(−3)D4(2) from the left, which is: first multiply row 4 by 2 and then add −3 times the second row to row 4. So S has risen to S = T4,2(−3)D4(2)T4,1(1)T3,1(−2)T2,1(−1) (omitting identity matrices) and this step yielded the matrices

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -3 & 0 & 2 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 0 & 2 & -4 & 5 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 0 & 2 & -4 & 5 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 12 & -13 \end{pmatrix}$$

As we have a zero-row row3E = 0, the following step r = 3 will be the last one to take. But let's stick to the letter: here we get u = 4 and s = 4, so we have to swap rows 3 and 4 by multiplying P3,4 from the left. So by now we have found the matrices

$$S = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -3 & 0 & 2 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 5 & -3 & 0 & 2 \\ -2 & 0 & 1 & 0 \end{pmatrix}$$

$$E = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 5 & -3 & 0 & 2 \\ -2 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 1 & 4 & -7 & 6 \\ 0 & 2 & 4 & -6 & 2 \\ 0 & -1 & 1 & 3 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 & -3 & 1 \\ 0 & 0 & 2 & -4 & 5 \\ 0 & 0 & 0 & 12 & -13 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

In step r = 4 we see that row4E = 0 already, hence the termination condition(2) of the Gaussian algorithm is satisfied. And in fact, we see that E is inechelon-form already.

Note that this does not represent the modified linear system of equations that we computed in the introductory example. The reason is that in the manual computation we avoided the common factor of 2 in the first step (2) − (1) → (2). But the Gauss-algorithm in its general form does not check for common factors. So we see that for UFDs a refined algorithm is called for.

LR Decomposition

(4.22) Lemma: (viz. 247)
Let (R,+, ·) be a commutative ring, m, n ∈ N and A, B ∈ matm,n(R). Then the following statements hold true:

(i) For any 1 ≤ r ≤ m, n the r-minors are invariant under elementary equivalence up to sign; formally that is

A ≡ B =⇒ ±minorrA = ±minorrB

where for M ⊆ R we let ±M := { m | m ∈ M } ∪ { −m | m ∈ M }

(ii) If (R, ν) even is a Euclidean domain and m ≤ n, then there are scalar factors d1, . . . , dm ∈ R that form a divisibly ascending chain, i.e. d1 | d2, . . . , dm−1 | dm, such that

$$A \equiv \begin{pmatrix} d_1 & & & 0 & \dots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & d_m & 0 & \dots & 0 \end{pmatrix}$$

And for any size r ∈ 1 . . .m we find that the scalar factors d1, . . . , dm can be expressed in terms of minors of A

d1 . . . dr ∈ gcd(minorrA)

In particular the di are uniquely determined by A up to multiplicationwith units of R.

(iii) If (R, ν) even is a Euclidean domain but n ≤ m, then we can find scalar factors d1, . . . , dn ∈ R that form a divisibly ascending chain, i.e. d1 | d2, . . . , dn−1 | dn, such that

$$A \equiv \begin{pmatrix} d_1 & & \\ & \ddots & \\ & & d_n \\ 0 & \dots & 0 \\ \vdots & & \vdots \\ 0 & \dots & 0 \end{pmatrix}$$

And for any size r ∈ 1 . . . n we find that the scalar factors d1, . . . , dn can be expressed in terms of minors of A

d1 . . . dr ∈ gcd(minorrA)

In particular the dj are uniquely determined by A up to multiplicationwith units of R.
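Over R = Z the statement can be checked by brute force via the minors formula d1 · · · dr ∈ gcd(minorrA) itself. The following sketch (my own naming; a naive implementation, exponential in the matrix size, fine for small examples) recovers the divisibly ascending chain from the gcds of the r-minors, normalised to positive representatives (units of Z are ±1):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors_gcd(A, r):
    """gcd of all r x r minors of A (non-negative representative)."""
    g = 0
    for rows in combinations(range(len(A)), r):
        for cols in combinations(range(len(A[0])), r):
            g = gcd(g, abs(det([[A[i][j] for j in cols] for i in rows])))
    return g

def elementary_divisors(A):
    """Chain d1 | d2 | ... via d1 * ... * dr = gcd(minor_r A)."""
    ds, prev = [], 1
    for r in range(1, min(len(A), len(A[0])) + 1):
        g = minors_gcd(A, r)
        if g == 0:
            break
        ds.append(g // prev)
        prev = g
    return ds

ds = elementary_divisors([[2, 4, 4], [-6, 6, 12], [10, 4, 16]])
assert ds == [2, 2, 156]
assert all(b % a == 0 for a, b in zip(ds, ds[1:]))   # divisibly ascending
```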

Proof of (4.22):
The proof of the first statement is a tedious distinction of several cases, and the second is a non-algorithmic trick computation. However both are non-trivial and hence presented here. Now (iii) is clear: simply apply (ii) to the transpose matrix of A.

(i) The proof will be conducted in several steps - the first two steps arethe technical part, the later parts will be applications of this that willyield the claim by induction

• We will first prove that for any permutation matrix P ∈ matmR

±minorr(P A) ⊆ ±minorr(A)

Thus choose any α ∈ A(r,m) and β ∈ A(r, n) and supposeP = P (σ). Then define α′ ∈ A(r,m) to be the index with therange α′(1 . . . r) = σ(α(1 . . . r)) and define σ′ ∈ Sr to be thepermutation that satisfies α′ σ′ = σ α. Then it is clear that

entα,βP (σ)A = P (σ′) entα′,βA

And hence the minor of P A selected by (α, β) is just the minorof A selected by (α′, β) (up to the sign of P (σ′)).

• Now we will prove that for any transvection matrix T ∈ matmR

minorr(T A) ⊆ minorr(A)


Thus choose any α ∈ A(r,m) and γ ∈ A(r, n); then the Cauchy–Binet formula reads as

$$\det(\operatorname{ent}_{\alpha,\gamma}(TA)) = \sum_{\beta \in A(r,m)} \det(\operatorname{ent}_{\alpha,\beta} T)\, \det(\operatorname{ent}_{\beta,\gamma} A)$$

It is clear that the submatrix entα,βT is a triangular matrix, since T was such. Thus the determinant only depends on the entries on the diagonal. In the case α = β all these diagonal entries are 1 and hence det(entα,αT) = 1. In the case α ≠ β there is (at least) one 0 on the diagonal and hence det(entα,βT) = 0. Thus the sum over β collapses to the case β = α and hence

det(entα,γT A) = det(entα,γA)

In other words the respective minor just remains unchanged.

• In the previous two steps we have seen that for any elementarymatrix E ∈ matmR we get

±minorr(E A) ⊆ ±minorr(A)

But with E also E−1 is an elementary matrix and A = E−1E A.Therefore we can use this inclusion twice to obtain

±minorr(A) ⊆ ±minorr(E A) ⊆ ±minorr(A)

• Thus we have seen that for any elementary matrix E ∈ matmR

±minorr(E A) = ±minorr(A)

As the determinant is invariant under transposition of matriceswe see that for any elementary matrix F ∈ matnR we also get

±minorr(AF ) = ±minorr(A)

• Thus if A and B are elementarily equivalent by virtue of the matrices E1, . . . , Er ∈ matmR and F1, . . . , Fs ∈ matnR

A ≡ B = E1 · · · Er A F1 · · · Fs

then we may easily use induction on the number r + s of elementary matrices, taking the previous part as the induction step.

(ii) We begin by proving the existence of such a representation of A bya divisibly ascending chain of scalar factors. This will be the majorpart of the proof, the representation of the scalar factors in terms ofminors then is simple

• First of all we define - for any matrix 0 ≠ A = (ai,j) ∈ matm,n(R) -

ν(A) := min{ ν(ai,j) | i ∈ 1 . . .m, j ∈ 1 . . . n, ai,j ≠ 0 }

Now we regard the set of all ν(B), where B is a matrix that is elementarily equivalent to A

{ ν(B) | B ∈ matm,n(R), A ≡ B } ⊆ N

As N is well-ordered this set has a minimum, and we choose a matrix M for which ν(M) is minimal.

• Suppose that the (i, j)-th entry mi,j of M takes the minimal value ν(M) = ν(mi,j). Then we may rearrange M to shift mi,j to the position (1, 1), and we denote this rearranged matrix by

M′ := P(1 i) M P(1 j)

I.e. ent1,1M′ = enti,jM = mi,j and hence ν(M′) = ν(ent1,1M′). Thus we may equally well assume that ν(M) = ν(m1,1)

• We now claim that m1,1 divides all the entries mi,1 in the firstcolumn of M . To see this we use division with remainder

mi,1 = pim1,1 + ri where 0 ≤ ν(ri) < ν(m1,1)

Suppose ri ≠ 0; then we would obtain a matrix B := Ti,1(−pi)M being elementarily equivalent to M and hence to A, too. Clearly the (i, 1)-entry of B is given by

[rowiB]1 = [rowiM − pirow1M ]1 = mi,1 − pim1,1 = ri

Therefore ν(B) ≤ ν(ri) < ν(m1,1) = ν(M), but M was chosen tobe minimal, so this cannot occur.

• Analogously we see that m1,1 divides all the entries m1,j in thefirst row of M . We use division with remainder again

m1,j = qjm1,1 + sj where 0 ≤ ν(sj) < ν(m1,1)

Suppose sj ≠ 0; then we would obtain a matrix B := M T1,j(−qj) that is elementarily equivalent to A again. Clearly the (1, j)-entry of B is then given by

[coljB]1 = [coljM − qjcol1M ]1 = m1,j − qjm1,1 = sj

Therefore ν(B) ≤ ν(sj) < ν(m1,1) = ν(M), but M was chosen tobe minimal, so this cannot occur.

• We now use elementary transvections to replace the entries of the first column, mi,1 7→ ri = 0, and of the first row, m1,j 7→ sj = 0. And this obviously is accomplished by

N := T2,1(−p2) · · · Tm,1(−pm) M T1,2(−q2) · · · T1,n(−qn)


As we have seen, this matrix N now is of the following form

$$N = \begin{pmatrix} m_{1,1} & 0 & \dots & 0 \\ 0 & * & \dots & * \\ \vdots & \vdots & & \vdots \\ 0 & * & \dots & * \end{pmatrix}$$

Let us abbreviate N = S M T, where S and T denote the respective products of transvection matrices. Then we will have a look at the entries of N for i ∈ 2 . . .m and j ∈ 2 . . . n (using enti,1SM = mi,1 − pim1,1 = 0 and m1,j = qjm1,1)

$$\operatorname{ent}_{i,j} N = [\operatorname{col}_j SMT]_i = [\operatorname{col}_j SM - q_j \operatorname{col}_1 SM]_i = \operatorname{ent}_{i,j} SM - q_j \operatorname{ent}_{i,1} SM = m_{i,j} - p_i m_{1,j} = m_{i,j} - p_i q_j m_{1,1}$$

• We finally prove that m1,1 = n1,1 even divides any other entry ni,j = mi,j − piqjm1,1 (with i ∈ 2 . . .m and j ∈ 2 . . . n) of N as well. First we use division with remainder once more

ni,j = hi,jn1,1 + ki,j where 0 ≤ ν(ki,j) < ν(n1,1)

Now regard the matrix N′ := Ti,1(1)N; clearly the i-th row of N′ assumes the form

rowiN′ = rowiN + row1N = ( n1,1  ni,2  . . .  ni,n )

Thus defining N′′ := N′ T1,j(−hi,j) yields another matrix whose j-th column carries the entry ki,j = ni,j − hi,jn1,1 in row i

$$\operatorname{col}_j N'' = \operatorname{col}_j N' - h_{i,j} \operatorname{col}_1 N' = \begin{pmatrix} \vdots \\ k_{i,j} \\ \vdots \end{pmatrix}$$

Hence, if ki,j ≠ 0, then ν(N′′) ≤ ν(ki,j) < ν(n1,1) = ν(M). But N′′ is elementarily equivalent to N and hence to A, and as M was chosen to be minimal among these matrices, we find ki,j = 0 again.

• Thus we have decomposed N into a scalar block (m1,1) and a scalar multiple of a block A1 of size (m − 1) × (n − 1)

N = (m1,1) ⊕ (m1,1A1)

Thus we let d1 := m1,1 and repeat the entire process starting with A1. Using induction we find after m − 1 steps that A truly is equivalent to a matrix D of the claimed form.


• Until now we have shown that A is elementarily equivalent to D

$$A \equiv D = \begin{pmatrix} d_1 & & & 0 & \dots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & d_m & 0 & \dots & 0 \end{pmatrix}$$

As the determinant of a submatrix of D always is a product of the coefficients of the submatrix (and hence of D) we find

{ 0 } ∪ { dα(1) · · · dα(r) | α ∈ A(r,m) } = minorrD

Since the di form a divisibly ascending chain it is clear that the greatest common divisor of these elements is just d1 · · · dr. But by (i) the gcd of the minors is invariant under elementary equivalence and therefore

d1 · · · dr ∈ gcd(minorrD) = gcd(minorrA)

□


4.4 Multilinear Forms

rem: permutation group Sn; rem: signum, as the number of transpositions, from the cycle decomposition, by counting

def: multilinear form, semi-algebra under pointwise operations, symmetric, alternating; pro: if L is symmetric, then L(xσ(1), . . . , xσ(n)) = L(x1, . . . , xn); if L is alternating, then L(xσ(1), . . . , xσ(n)) = sgn(σ)L(x1, . . . , xn); pro: the symmetric multilinear forms form a sub-semi-algebra of all multilinear forms, the alternating multilinear forms form an R-submodule of all multilinear forms

def: ascending indices; pro: symmetric multilinear forms on (Rm)n, alternating multilinear forms on (Rm)n, uniqueness of the alternating multilinear forms on (Rn)n

4.5 Determinants

4.6 Rank of Matrices

Example: for A = (uivj) we have Ax = 〈x | v〉u, in particular rank(A) = 1

4.7 Canonical Forms


Chapter 5

Spectral Theory

5.1 Metric Spaces

metric, convergence relation, completeness, completion, linear metrics, infinite sums

5.2 Banach Spaces

normed space, sub-multiplicative norm, Banach space, Banach algebra

5.3 Operator Algebras

5.4 Diagonalisation

5.5 Spectral Theorem


Chapter 6

Structure Theorems

6.1 Associated Primes

Lecture notes of Dimitrios and Benjamin
Matsumura, chapter 6 (in part 2)
Bourbaki (Commutative Algebra) IV.1, IV.2
Eisenbud 3.1, 3.2, 3.3, exercises 3.1, 3.2, 3.3
Dummit, Foote: chapter 15.1: exercises 29, 30, 31, 32, 33, 34 and 35; chapter 15.4: exercises 30, 31, 32, 33, 34, 35, 36, 37, 38, 39 and 40; chapter 15.5: exercises 25, 26, 28, 29 (note that assR ⊆ ass(a) is much easier proved directly) and 30

6.2 Primary Decomposition

6.3 The Theorem of Prüfer

6.4 Length of Modules


Chapter 7

Polynomial Rings

7.1 Monomial Orders

We have already introduced the polynomial ring R[t] in one variable (over a commutative ring R) as an example in section 1.4. And we have also used this concept in the chapter on linear algebra (e.g. the characteristic polynomial of a linear mapping). In this chapter we will now introduce a powerful generalisation of this ring - the polynomial ring (also called group ring) R[A]. Thereby R is a commutative ring once more and A will be any commutative monoid. On the other hand polynomial rings are natural examples of graded algebras, so it may be advantageous to first study graded algebras as a general concept. This will be done in the subsequent section.

There are a multitude of special cases of this concept which we will regard in this chapter as well. The most important special case will be the polynomial ring R[t1, . . . , tn] in (finitely many) variables. The polynomials f ∈ R[t1, . . . , tn] are the workhorses of ring theory, as they describe all the compositions (of additions and multiplications) that can be performed in a ring. This makes the study of the polynomial ring a task of vast importance and likewise gives the theory its punch.

The polynomial ring R[t] is included in this general theory as the polynomial ring R[A] where A is chosen to be the natural numbers A = N. Yet the algebraic structure of N is not the sole important property for R[t]: the natural order ≤ on N is of no lesser importance. Therefore we will first study monoids A that carry an adequate order as well.

So first recall the definition of a linear order (also called total order). If you are not familiar with this notion take a look at the introduction (section 0.2) or (re)read the beginning of section 2.1 for even more comments. So let us now introduce the basic concepts of ordered monoids:


(7.1) Definition:Let (A,+) be a commutative monoid and denote its neutral element by 0.Then we introduce the following notions concerning A

• A is said to be integral iff for any α, β, γ ∈ A we get the implication

α+ γ = β + γ =⇒ α = β

• Further, A is said to be solution-finite iff for any fixed γ ∈ A the equation α + β = γ has only finitely many solutions; formally that is

∀ γ ∈ A : #{ (α, β) ∈ A × A | α + β = γ } < ∞

(7.2) Definition:

(i) The triple (A,+,≤) is said to be a positively ordered monoid, iffit satisfies both of the following two properties

(1) (A,+) is a commutative monoid (with neutral element 0)

(2) ≤ is a positive, linear order on A. That is ≤ is a linear order suchthat for any elements α, α′ and β ∈ A we get the implication

α ≤ α′ =⇒ α+ β ≤ α′ + β

(ii) The triple (A,+,≤) is said to be a strictly positively orderedmonoid, iff it satisfies both of the following two properties

(1) (A,+) is a commutative monoid (with neutral element 0)

(2) ≤ is a strictly positive, linear order on A. That is ≤ is a linearorder such that for any elements α, α′ and β ∈ A we get

α < α′ =⇒ α+ β < α′ + β

Note that thereby we used the convention a < b :⇐⇒ a ≤ b and a ≠ b. Thus it is clear that a strictly positively ordered monoid already is a positively ordered monoid (if α = α′, then α + β = α′ + β is clear).

(iii) Finally (A,+,≤) is said to be a monomially ordered monoid and ≤ is said to be a monomial order on (A,+) iff we get

(1) (A,+,≤) is a positively ordered monoid

(2) ≤ is a well-ordering on A, that is, any non-empty subset M of A has a minimal element; formally this can be written as

∀ ∅ ≠ M ⊆ A ∃ µ∗ ∈ M such that ∀ µ ∈ M we get µ∗ ≤ µ

(7.3) Remark:
If now (A,+,≤) is a positively ordered monoid, we extend A to Ā by formally picking up a new element −∞ (i.e. we introduce a new symbol to A)

Ā := A ∪ { −∞ }


We then extend the composition + and the order ≤ of A to Ā by defining the following operations and relations for any α ∈ A

(−∞) ≠ α
(−∞) ≤ α
(−∞) + α := −∞
α + (−∞) := −∞

Nota: clearly (Ā,+) thereby is a commutative monoid again, and if ≤ has been a positive or monomial order on A then the extended order on Ā is a positive resp. monomial order again.

(7.4) Example:
The standard example of a monoid that we will use later is Nn under pointwise addition as the composition. However we will already discuss a slight generalisation of this monoid here. Let I ≠ ∅ be any non-empty index set and let us define the set

N⊕I := { α ∈ NI | #{ i ∈ I | αi ≠ 0 } < ∞ }

If I is finite, say I = 1 . . . n, then it is clear, that N⊕(1...n) = Nn. We nowturn this set into a commutative monoid by defining the composition + tobe the pointwise addition, i.e. for any α = (αi) and β = (βi) we let

(αi) + (βi) := (αi + βi)

And it is clear that this composition turns N⊕I into a commutative monoid that is integral and solution-finite (for any γ ∈ N⊕I the set of solutions (α, β) of α + β = γ is given by { (α, γ − α) | αi ≤ γi }, which is finite, as γ ∈ N⊕I). For any i ∈ I we now define

δi : I → N : j 7→ δi,j

where δi,j denotes the Kronecker symbol (e.g. refer to the notation and symbol list). From the construction it is clear that any α = (αi) ∈ N⊕I is a (uniquely determined) finite sum of these δi, since

$$(\alpha_i) = \sum_{i \in I} \alpha_i\, \delta_i$$

This sum in truth is finite as only finitely many coefficients αi are non-zeroand we can omit those i ∈ I with αi = 0 from the sum. This enables us tointroduce some notation here that will be put to good use later on.

• Let us first define the absolute value and the norm of α = (αi) ∈ N⊕I

$$|\alpha| := \sum_{i \in I} \alpha_i \qquad \|\alpha\| := \max\{\, \alpha_i \mid i \in I \,\}$$

• For a fixed weight ω ∈ NI we introduce the weighted sum of α (note that this is well-defined, as α ∈ N⊕I only has finitely many non-zero entries and hence the sum is finite)

$$|\alpha|_\omega := \sum_{i \in I} \omega_i\, \alpha_i$$

• Another often useful notation (for any α ∈ N⊕I and any k ∈ N with ‖α‖ ≤ k) are the factorial and the binomial coefficients

$$\alpha! := \prod_{i \in I} \alpha_i! \qquad \binom{k}{\alpha} := \prod_{i \in I} \binom{k}{\alpha_i}$$

• And if x = (xi) ∈ RI is a tuple of elements of a commutative ring (R,+, ·) we finally introduce the notations (note that these are well-defined again, as α ∈ N⊕I has only finitely many αi ≠ 0)

$$\alpha x := \sum_{i \in I} \alpha_i\, x_i \qquad x^\alpha := \prod_{i \in I} x_i^{\alpha_i}$$
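For a finite index set I = 1 . . . n these notations are straightforward to compute, with multi-indices represented as tuples. A small illustration (variable names are mine):

```python
from math import comb, factorial, prod

alpha = (2, 0, 3)          # a multi-index in N^3
omega = (1, 2, 5)          # a weight
x = (2, 7, 10)             # a point in R^3 (here R = Z)

absolute = sum(alpha)                                  # |alpha| = 5
norm = max(alpha)                                      # ||alpha|| = 3
weighted = sum(w * a for w, a in zip(omega, alpha))    # |alpha|_omega = 17
fact = prod(factorial(a) for a in alpha)               # alpha! = 2! 0! 3! = 12
binom = prod(comb(4, a) for a in alpha)                # (4 choose alpha), k = 4
monomial = prod(xi ** a for xi, a in zip(x, alpha))    # x^alpha = 2^2 * 10^3

assert (absolute, norm, weighted, fact) == (5, 3, 17, 12)
assert binom == 24
assert monomial == 4000
```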

(7.5) Proposition: (viz. 301)

(i) Let A ≠ ∅ be a non-empty set and ≤ ⊆ A × A be a linear order on A. Then any finite, non-empty subset ∅ ≠ M ⊆ A has a uniquely determined minimal and a uniquely determined maximal element. I.e. for any ∅ ≠ M ⊆ A with #M < ∞ we get

∃! µ∗ ∈ M : ∀ µ ∈ M : µ∗ ≤ µ
∃! µ∗ ∈ M : ∀ µ ∈ M : µ ≤ µ∗

(ii) Let A ≠ ∅ be a non-empty set and ≤ ⊆ A × A be a well-ordering on A. Then any non-empty subset ∅ ≠ M ⊆ A has a uniquely determined minimal element. I.e. for any ∅ ≠ M ⊆ A we get

∃ ! µ∗ ∈M : ∀µ ∈M : µ∗ ≤ µ

Nota we will refer to these minimal and maximal elements of M bywriting minM := µ∗ and maxM := µ∗ respectively.

(iii) Let (A,+) be a commutative monoid and let ≤ ⊆ A × A be a linear order on A. Then the following three statements are equivalent

(a) A is integral and ≤ is a positive order on A

(b) (A,+,≤) is a strictly positively ordered monoid

(c) for any α, β, γ ∈ A we get α < β ⇐⇒ α+ γ < β + γ


(iv) Let (A,+,≤) be a positively ordered monoid, such that (A,+) is integral. Let further M, N ⊆ A be any two subsets and µ_* ∈ M respectively ν_* ∈ N be minimal elements of M and N, i.e.

∀ µ ∈ M : µ_* ≤ µ and ∀ ν ∈ N : ν_* ≤ ν

Then for any elements µ ∈ M and ν ∈ N we obtain the implication

µ + ν = µ_* + ν_* =⇒ µ = µ_* and ν = ν_*

(7.6) Example: (viz. 303)
Let now (I,≤) be a well-ordered set (i.e. I ≠ ∅ is a non-empty set and ≤ is a linear well-ordering on I) and fix any weight ω ∈ NI. Then we introduce two different linear orders on the commutative monoid (N⊕I,+). Of these the ω-graded lexicographic order will be the standard order we will employ in later sections. Thus consider α = (αi) and β = (βi) ∈ N⊕I and define

• Lexicographic Order

α ≤lex β :⇐⇒ (α = β) or (∗)

(∗) := ∃ k ∈ I such that αk < βk and ∀ i < k : αi = βi

It will be proved below that the lexicographic order ≤lex is positive, i.e. the triple (N⊕I,+,≤lex) is a positively ordered monoid. In the case of I = 1 . . . n we will see that ≤lex even is a well-ordering, i.e. the triple (Nn,+,≤lex) is a monomially ordered monoid.

• ω-Graded Lexicographic Order

α ≤ω β :⇐⇒ (|α|ω < |β|ω) or (|α|ω = |β|ω and α ≤lex β)

Analogously to the above, the ω-graded lexicographic order ≤ω is positive, i.e. the triple (N⊕I,+,≤ω) is a positively ordered monoid. And in the case I = 1 . . . n it even is a well-ordering, i.e. the triple (Nn,+,≤ω) is a monomially ordered monoid once more.
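For the case I = 1 . . . n both orders can be sketched in Python, with multi-indices as tuples; the function names are ours, and the comparisons follow the definitions above literally.

```python
def leq_lex(a, b):
    # α ≤lex β iff α = β, or α_k < β_k at the first index k where they differ
    return a == b or next(x < y for x, y in zip(a, b) if x != y)

def leq_w(a, b, w):
    # ω-graded lex: compare the weighted sums |·|_ω first, break ties with ≤lex
    wa = sum(wi * ai for wi, ai in zip(w, a))
    wb = sum(wi * bi for wi, bi in zip(w, b))
    return wa < wb or (wa == wb and leq_lex(a, b))

# positivity: α ≤ω β implies α + γ ≤ω β + γ (spot check)
a, b, g, w = (1, 0), (0, 2), (3, 4), (1, 1)
assert leq_w(a, b, w)
assert leq_w(tuple(x + z for x, z in zip(a, g)),
             tuple(y + z for y, z in zip(b, g)), w)
```

With w = (1, . . . , 1) the order ≤ω is the usual graded lexicographic order: total degree decides first, and ≤lex only breaks ties.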


(7.7) Remark:
We wish to append two further linear orders on (N⊕I,+). However we will not require these any further and hence abstain from proving any details about them. The interested reader is asked to refer to [Cox, Little, O'Shea; Using Algebraic Geometry; ??] for more comments on these.

• Reverse Graded Lexicographic Order

α ≤rgl β :⇐⇒ (α = β) or (|α| < |β|) or (|α| = |β| and (2))

(2) := ∃ k ∈ I such that αk > βk and ∀ i < k : αi = βi

Just like ≤ω the reverse graded lexicographic order ≤rgl is positive and in the case of I = 1 . . . n it even is a well-ordering again.

• Bayer-Stillman Order
For the fourth order we will require I = 1 . . . n already. Then we fix any m ∈ 1 . . . n and obtain another monomial order on Nn

α ≤bs(m) β :⇐⇒ (3) or ((4) and α ≤gl β)

(3) := α1 + · · ·+ αm < β1 + · · ·+ βm

(4) := α1 + · · ·+ αm = β1 + · · ·+ βm
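The reverse graded lexicographic order can be transcribed the same way. Note that this follows the book's definition in (7.7) literally (first differing index, with the larger entry counting as smaller), which differs from the "grevlex" convention of some computer algebra systems, where the last differing entry decides; the function name is ours.

```python
def leq_rgl(a, b):
    # α ≤rgl β iff α = β, or |α| < |β|, or, on equal total degree,
    # the first index where they differ has α_k > β_k
    if a == b:
        return True
    da, db = sum(a), sum(b)
    if da != db:
        return da < db
    return next(x > y for x, y in zip(a, b) if x != y)
```

On total degree 2 in two variables, for instance, this sorts (2, 0) below (1, 1), the opposite of the lexicographic tie-break.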


7.2 Graded Algebras

(7.8) Definition:
Let (D,+) be a commutative monoid and (R,+, ·) be a commutative ring. Then the ordered pair (A, deg) is said to be a D-graded R-(semi)algebra, iff it satisfies all of the following properties

(1) A is a commutative R-(semi)algebra

(2) deg : hom(A) → D is a function from some subset hom(A) ⊆ A to D, where we require 0 ∉ hom(A). Thereby an element h ∈ A is said to be homogeneous, iff h ∈ hom(A) ∪ {0}.

(3) for any d ∈ D let us denote Ad := deg−1(d) ∪ {0} ⊆ A. Then we require that Ad ≤m A is an R-submodule of A. And we assume that A has a decomposition as an inner direct sum of submodules

A = ⊕d∈D Ad

(4) given any two c, d ∈ D we require that the product gh of homogeneous elements g of degree c and h of degree d is a homogeneous element of degree c + d. That is, letting AcAd := { gh | g ∈ Ac, h ∈ Ad }, we assume

AcAd ⊆ Ac+d

(7.9) Remark:

• By property (3) Ad is a submodule of A, that is the sum g + h of two homogeneous elements of degree d again is a homogeneous element of degree d. And likewise the scalar multiple ah is a homogeneous element of degree d as well. Thus for any g, h ∈ hom(A) with deg(g) = deg(h) and any a ∈ R such that ag + h ≠ 0 we get

deg(ag + h) = deg(g) = deg(h)

• Property (4) asserts that the product of homogeneous elements is homogeneous again; in particular hom(A) ∪ {0} is closed under multiplication. And thereby the degree even is additive, that is for any two homogeneous elements g, h ∈ hom(A) we get

gh 6= 0 =⇒ deg(gh) = deg(g) + deg(h)


• According to (3) any element f ∈ A has a unique decomposition in terms of homogeneous elements. To be precise, given any f ∈ A, there is a uniquely determined family of elements (fd) (where d ∈ D) such that fd ∈ Ad (that is fd = 0 or deg(fd) = d), the set { d ∈ D | fd ≠ 0 } is finite and

f = ∑d∈D fd

Thereby fd is called the homogeneous component of degree d of f. And writing f = ∑d fd we will often speak of the decomposition of f into homogeneous components.

• It is unsatisfactory to always distinguish the case h = 0 when considering a homogeneous element h ∈ hom(A) ∪ {0} ⊆ A. Hence we will add a symbol −∞ to D as outlined in (7.3). Then the degree can be extended canonically to all homogeneous elements

deg : hom(A) ∪ {0} → D ∪ {−∞} : h 7→ { deg(h) if h ≠ 0; −∞ if h = 0 }

If A is an integral domain this turns "deg" into a homomorphism of semigroups, from hom(A) ∪ {0} (under the multiplication of A) to D ∪ {−∞}. That is for any g, h ∈ hom(A) ∪ {0} we now get

deg(gh) = deg(g) + deg(h)

• In the literature it is customary to call the decomposition A = ⊕d Ad itself a D-graded R-algebra, supposing AcAd ⊆ Ac+d. But this clearly is equivalent to the definition we gave here. Just let

hom(A) := ⋃d∈D Ad \ {0}

Then we obtain a well-defined function deg : hom(A) → D by letting deg(h) := d, where d ∈ D is (uniquely) determined by h ∈ Ad. And thereby it is clear that (A, deg) becomes a D-graded R-algebra, with deg−1(d) = Ad \ {0} again.
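For the prototypical graded algebra, polynomials graded by total degree, the decomposition into homogeneous components is easy to carry out explicitly. A small sketch (polynomials as sparse dicts {exponent tuple: coefficient}; the encoding and names are ours):

```python
def components(f):
    # split f = Σ_d f_d into its homogeneous components, keyed by total degree
    comps = {}
    for mono, c in f.items():
        comps.setdefault(sum(mono), {})[mono] = c
    return comps

f = {(2, 0): 1, (1, 1): 3, (0, 0): 5}      # x^2 + 3xy + 5
cs = components(f)
assert cs == {2: {(2, 0): 1, (1, 1): 3}, 0: {(0, 0): 5}}
```

Only finitely many components are non-zero, and merging them back recovers f, exactly as the inner direct sum A = ⊕d Ad demands.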

(7.10) Example:
If (R,+, ·) is any commutative ring and (D,+) is any commutative monoid (with neutral element 0), then there is a trivial construction turning R into a D-graded R-algebra. Just let hom(R) := R \ {0} and define the degree by deg : hom(R) → D : h 7→ 0. That is R0 = R, whereas Rd = 0 for any d ≠ 0. Thus examples of graded algebras abound, but of course this construction won't produce any new insight.

(7.11) Remark:
Given some commutative ring (A,+, ·) (and a commutative monoid (D,+)) there are two canonical ways to regard A as an R-algebra

(1) We could fix R := A; in this case the submodules Ad ≤m A occurring in property (3) would be ideals Ad ⊴i A. So this would yield the quite peculiar property AcAd ⊆ Ac+d ∩ Ad = 0 (if c + d ≠ d).


(2) Secondly we could fix R := Z; in this case the submodules Ad ≤m A occurring in property (3) would only have to be subgroups Ad ≤g A. Thus there no longer is a reference to the algebra structure of A. Hence we will also speak of a D-graded ring in this case.

(7.12) Example: Rees Algebra (♦)
Let (R,+, ·) be a commutative ring and a ⊴i R be an ideal of R. Note that for any n ∈ N the quotient a^n/a^{n+1} becomes an R/a-module under the following, well-defined scalar multiplication: (a + a)(f + a^{n+1}) := af + a^{n+1}. Let us now take the exterior direct sum of these R/a-modules

R(a) := ⊕n∈N a^n/a^{n+1}

Then R(a) can be turned into an R/a-algebra (the so-called Rees algebra of R in a) by defining the following multiplication: writing f = (f_n + a^{n+1}) and g = (g_n + a^{n+1}) componentwise, the n-th component of the product is given to be

(fg)_n := ∑i+j=n f_i g_j + a^{n+1}

Let us denote R(a)_n := a^n/a^{n+1}, where we consider R(a)_n as a subset R(a)_n ⊆ R(a) canonically (as in (3.22)). Further we define the homogeneous elements of R(a) to be hom(R(a)) := ⋃n R(a)_n \ {0}. Then (R(a), deg) becomes an N-graded R/a-algebra under the graduation

deg : hom(R(a)) → N : f_n + a^{n+1} 7→ n

(7.13) Definition:
Let (D,+) and (E,+) be commutative monoids, (R,+, ·) be a commutative ring, (A, deg) be a D-graded and (B, deg) be an E-graded R-algebra. Then the ordered pair (ϕ, ε) is said to be a homomorphism of graded algebras from (A, deg) to (B, deg) (or shortly a graded homomorphism) iff it satisfies

(1) ϕ : A → B is a homomorphism of R-algebras

(2) ε : D → E is a homomorphism of monoids

(3) ∀ h ∈ hom(A) we get deg(ϕ(h)) = ε(deg(h))


(7.14) Remark:

• If (A, deg) is a D-graded R-algebra and a ⊴i A is a graded ideal of (A, deg), then (a, deg) clearly becomes a D-graded R-semi-algebra under the graduation inherited from (A, deg)

hom(a) := hom(A) ∩ a        deg := deg|hom(a)

• If both (A, deg) and (B, deg) are D-graded R-algebras, then a homomorphism ϕ : A → B of R-algebras is said to be graded iff (ϕ, 11D) is a homomorphism of graded algebras, that is iff for any h ∈ hom(A) we get

deg(ϕ(h)) = deg(h)

(7.15) Definition: (viz. 352)
Let (D,+) be a commutative monoid, (R,+, ·) be a commutative ring and (A, deg) be a D-graded R-algebra. Then a subset h ⊆ A is said to be a graded ideal (or homogeneous ideal) of (A, deg), iff h ⊴i A is an ideal that satisfies any one of the following three equivalent properties

(a) h can be decomposed as an inner direct sum of the R-submodules h ∩ Ad

h = ⊕d∈D (h ∩ Ad)

(b) h is closed under going to homogeneous components. That is, given any f ∈ A with homogeneous decomposition f = ∑d fd we get

f = ∑d∈D fd ∈ h =⇒ ∀ d ∈ D : fd ∈ h

(c) h is generated by homogeneous elements, that is there is some set H of homogeneous elements of A generating h. Formally that is

∃ H ⊆ hom(A) such that h = 〈H 〉i


(7.16) Proposition: (viz. 353)
Let (D,+) be a commutative monoid (with neutral element 0), (R,+, ·) be a commutative ring and (A, deg) be a D-graded R-algebra. Then the following statements are true

(i) If D is an integral monoid, then 1 ∈ A0

(ii) If (B, deg) is another D-graded R-algebra and ϕ : A → B is a graded homomorphism, then the kernel of ϕ is a graded ideal in A

kn(ϕ) = ⊕d∈D (kn(ϕ) ∩ Ad)

(iii) If a ⊴i A is a graded ideal of (A, deg), then the quotient A/a becomes a D-graded R-algebra again under the induced graduation

hom(A/a) := { h + a | h ∈ hom(A) \ a }

where deg(h + a) := deg(h). And thereby the set of homogeneous elements of degree d ∈ D is precisely given to be

(A/a)_d = (A_d + a)/a

(iv) Let (D,+) even be a commutative group and consider a multiplicatively closed subset U ⊆ A such that U ⊆ hom(A). Then the localisation U−1A (as a ring) is a D-graded R-algebra again. Thereby

hom(U−1A) := { h/u | h ∈ hom(A), u ∈ U } \ {0/1}

Then the graduation is defined to be deg(h/u) := deg(h) − deg(u), and we find the modules of homogeneous elements (for any d ∈ D)

(U−1A)_d = { h/u | u ∈ U, h ∈ A_{d+deg(u)} }

(v) Suppose D = N and consider some ideal a ⊴i A. If now a is a graded ideal, then its radical √a ⊴i A is a graded ideal, too.


(7.17) Remark:
The above statement (iv) has an easy generalisation to the case where (D,+) only is an integral, commutative monoid. In this case U−1A becomes a grD-graded R-algebra, where grD denotes the extension of D to a group. Formally that is grD := (D × D)/∼ where (c, d) ∼ (c′, d′) :⇐⇒ c + d′ = c′ + d. Then grD becomes a well-defined group under (c, d) + (c′, d′) := (c + c′, d + d′) with neutral element (0, 0) and −(c, d) = (d, c). And D is included canonically into grD by virtue of D → grD : c 7→ (c, 0).

(7.18) Definition:
Let (R,+, ·) be a commutative ring, (D,+,≤) be a positively ordered monoid and (A, deg) be a D-graded R-algebra. If now f ∈ A with f ≠ 0 has the homogeneous decomposition f = ∑d fd (where fd ∈ Ad) then we define

deg(f) := max{ d ∈ D | fd ≠ 0 }
ord(f) := min{ d ∈ D | fd ≠ 0 }

Note that this is well-defined, as only finitely many fd are non-zero and ≤ is a linear order on D. Thus we have defined two functions on A

deg : A \ {0} → D        ord : A \ {0} → D

(7.19) Proposition: (viz. 357)
Let (R,+, ·) be a commutative ring, (D,+,≤) be a positively ordered monoid and (A, deg) be a D-graded R-algebra. Then we obtain

(i) The newly introduced function deg : A \ {0} → D is an extension of the original degree function deg : hom(A) → D. Formally that is

h ∈ Ad, h ≠ 0 =⇒ deg(h) = d

(ii) It is clear that the order and degree of some homogeneous element h ∈ hom(A) will agree. But even the converse is true, that is for any non-zero element f ∈ A \ {0} we get the equivalence

f ∈ hom(A) ⇐⇒ ord(f) = deg(f)

(iii) For any elements f, g ∈ A such that fg ≠ 0 we obtain the estimates

ord(f) ≤ deg(f)
deg(fg) ≤ deg(f) + deg(g)
ord(fg) ≥ ord(f) + ord(g)

(iv) And if (D,+,≤) is a strictly positively ordered monoid and A is an integral domain, then for any f, g ∈ A \ {0} we even get the equalities

deg(fg) = deg(f) + deg(g)
ord(fg) = ord(f) + ord(g)


(v) Thus if (D,+,≤) is a strictly positively ordered monoid and A ≠ 0 is a non-zero integral domain again, then the invertible elements of A already are homogeneous

A∗ ⊆ hom(A)

(vi) If (D,+,≤) is a strictly positively ordered monoid, A is an integral domain and h ∈ hom(A) is a homogeneous element of A, then for any f ∈ A we get

f | h =⇒ f ∈ hom(A)
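The equalities of (iv) can be spot-checked numerically for A = Z[t] (an integral domain, D = N with the usual order). A sketch with sparse polynomials {exponent: coefficient} (encoding and names ours):

```python
def mul(f, g):
    # convolution product of sparse polynomials over Z, dropping zero terms
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return {k: c for k, c in h.items() if c != 0}

def deg(f): return max(f)    # largest exponent with non-zero coefficient
def ord_(f): return min(f)   # smallest such exponent

f = {1: 2, 3: 1}             # 2t + t^3
g = {2: 5, 4: 1}             # 5t^2 + t^4
h = mul(f, g)
assert deg(h) == deg(f) + deg(g) == 7
assert ord_(h) == ord_(f) + ord_(g) == 3
```

Over a ring with zero divisors the leading coefficients could cancel, so only the inequalities of (iii) would survive; dropping the zero coefficients in mul is what makes deg and ord well-defined here.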


7.3 Defining Polynomials

7.4 The Standard Cases

7.5 Roots of Polynomials

7.6 Derivation of Polynomials

7.7 Algebraic Properties

7.8 Gröbner Bases

7.9 Algorithms


Chapter 8

Polynomials in One Variable

8.1 Interpolation

8.2 Irreducibility Tests

8.3 Symmetric Polynomials

8.4 The Resultant

8.5 The Discriminant

8.6 Polynomials of Low Degree

8.7 Polynomials of High Degree

8.8 Fundamental Theorem of Algebra


Chapter 9

Group Theory

normal subgroups, kernel+image, isomorphism theorems, quotient groups, group actions, conjugation, Sylow theorems, p-q theorem, ordered groups, representation theory, invariant theory

(9.1) Definition:
Let (G, ◦) be a group and X be an arbitrary set; then a map µ : G × X → X (written as µ : (g, x) 7→ gx) is said to be an operation of G on X, iff it satisfies one of the following two equivalent conditions

(a) G → SX : g 7→ Tg, where Tg : X → X : x 7→ gx, is a group-homomorphism

(b) ∀ g, h ∈ G and ∀ x ∈ X we get the properties ex = x and (gh)x = g(hx)

And in this case we define the following notions: for any x ∈ X we define the orbit Gx ⊆ X and the stabilizer G_x ⊆ G. And for any g ∈ G we also define the fixed point set X^g ⊆ X and the set X^G of invariants

Gx := { gx | g ∈ G }
G_x := { g ∈ G | gx = x }
G_X := { g ∈ G | ∀ x ∈ X : gx = x }
X^g := { x ∈ X | gx = x }
X^G := { x ∈ X | ∀ g ∈ G : gx = x }

Finally a subset A ⊆ X is said to be G-invariant iff for any g ∈ G we get

gA := { gx | x ∈ A } ⊆ A

(9.2) Example:

(i) If X ≠ ∅ is any non-empty set then any subgroup S ≤g SX of the permutation group SX operates on X in a most obvious way

S ×X → X : (σ, x) 7→ σ(x)

(ii) By (1.12.(ii)) any group G operates on itself, by multiplication from the left, that is we get an operation on G by

G×G→ G : (g, h) 7→ gh


(iii) Finally any group G operates on itself by conjugation, that is we obtain another action of G on G by letting

G × G → G : (g, h) 7→ h^g := ghg−1

Prob first of all it is clear that for any x ∈ G we have x^e = exe−1 = x. Also for any g, h ∈ G we can compute x^(gh) = (gh)x(gh)−1 = ghxh−1g−1 = g(x^h)g−1 = (x^h)^g, so that both properties are satisfied.

(9.3) Proposition: (viz. 300)

(i) It is easy to see that an operation G × X → X defines an equivalence relation on X by letting x ∼ y :⇐⇒ ∃ g ∈ G : y = gx. And the equivalence classes of this relation clearly are the orbits Gx. The quotient set of X under this relation is denoted by

X/G := { Gx | x ∈ X }

(ii) It also is immediately clear that the stabilizer G_x ≤g G is a subgroup of G. Another well-known fact is the following 1-to-1 correspondence

Gx ←→ G/G_x : gx 7→ gG_x

(iii) If A ⊆ X is G-invariant, then for any g ∈ G we have gA = A. And we can thereby restrict the operation µ to an operation of G on A, that is

µ : G × A → A : (g, x) 7→ gx

(iv) If A is G-invariant, then the complement X \A is G-invariant, too.

(v) Orbit Formulas: If (G, ◦) is a finite group that operates on the finite set X, then (for any x ∈ X) the following equations are true

#Gx = #G / #G_x

#X = #G · ∑Gx∈X/G 1/#G_x

(vi) Formula of Burnside: If (G, ◦) is a finite group that operates on the finite set X, then the following equation holds true

#G · #(X/G) = ∑g∈G #X^g = ∑x∈X #G_x

(9.4) Definition:
Let G be a group and F be any field. Then a group-homomorphism χ : G → F∗ from G to the multiplicative group F∗ is called a character of G.

(9.5) Proposition: (viz. 273)
Let G be any group and F be any field. If now χ1, . . . , χr : G → F∗ are pairwise distinct characters of G, then the set {χ1, . . . , χr} ⊆ F(G, F) already is F-linearly independent (as F-valued functions).
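For F = C and G = Z/3 this can be seen concretely: the three characters χ_k(g) = ω^{kg} (with ω a primitive third root of unity) have a Vandermonde matrix of values, and its non-zero determinant rules out any non-trivial relation ∑ a_k χ_k = 0. A sketch (names ours):

```python
import cmath

n = 3
w = cmath.exp(2j * cmath.pi / n)     # primitive n-th root of unity
# value matrix M[g][k] = χ_k(g) = ω^{kg} of the n characters of Z/n
M = [[w ** (k * g) for k in range(n)] for g in range(n)]

def det3(m):
    # determinant of a 3x3 matrix, expanded along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# a non-zero determinant means the columns, i.e. the characters, are
# C-linearly independent, exactly as (9.5) claims
assert abs(det3(M)) > 1e-9
```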

(9.6) Definition:
Let G be any group and R be a commutative ring. Then µ : G × R → R is said to be an algebraic operation, iff we get the properties

(1) µ : G × R → R is an operation of G on R

(2) for any g ∈ G the map Tg : R → R : a 7→ ga is a homomorphism of rings

And in this case we define the invariant ring under the operation µ to be

R^G := { a ∈ R | ∀ g ∈ G : ga = a }

Nota as Tg is bijective with (Tg)−1 = T(g−1) we find that (2) already implies that any Tg even is an automorphism of the ring R. And it is immediately clear that the invariant ring R^G ≤r R is a subring of R.

(9.7) Proposition: (viz. 275)
Let G be a group, R be a commutative ring and A be a commutative monoid. If now µ : G × R → R is an algebraic operation, then this canonically induces an algebraic operation G × R[A] → R[A] on the group ring R[A], by

G × R[A] → R[A] : (g, f) 7→ ∑α∈A (g f[α]) t^α

And for this operation we obtain the following identity of the invariant rings

R[A]^G = R^G[A]

(9.8) Proposition: (viz. 273)
Let G × R → R : (g, a) 7→ ga be an algebraic operation of the group G on the commutative ring R. Let now a ⊴i R be any ideal; then we get

(i) a^G := { a ∈ a | ∀ g ∈ G : ga = a } = a ∩ R^G ⊴i R^G

(ii) If a is a G-invariant ideal (that is ga ⊆ a for any g ∈ G) then we obtain an algebraic operation of G on the quotient ring, via

G × R/a → R/a : (g, b + a) 7→ gb + a

(iii) And in case a is a G-invariant ideal, then we obtain a well-defined, injective homomorphism of rings by virtue of

R^G/a^G → (R/a)^G : b + a^G 7→ b + a

(9.9) Proposition: (viz. 274)
Let G × X → X : (g, x) 7→ gx be a group operation and let R be a commutative ring. Further consider A := F(X, R), the R-algebra of functions from X to R. Then we obtain an algebraic operation of G on A by letting

G × A → A : (g, f) 7→ (x 7→ f(g−1x))

And if M ⊆ X is a G-invariant subset (that is gM ⊆ M for any g ∈ G), then I(M) := { f ∈ A | ∀ x ∈ M : f(x) = 0 } is a G-invariant ideal (that is I(M) ⊴i A is an ideal and gI(M) ⊆ I(M) for any g ∈ G). And in this case we finally obtain the isomorphy

A^G/I(M)^G ∼→ (A/I(M))^G : f + I(M)^G 7→ f + I(M)

Proof of (9.5):
If r = 1 then the linear independence is clear, as χ1 ≠ 0. So pick the minimal number k ≥ 2 such that there are a1, . . . , ak ∈ F∗ and (pairwise distinct) i1, . . . , ik ∈ 1 . . . r satisfying

a1 χi1 + · · · + ak χik = 0

Renumbering the χi we may assume i1 = 1 and i2 = 2 and so on until ik = k. Let us now denote bi := ai/ak; then we obtain the central equation

∑i=1..k−1 bi χi + χk = 0

As χ1 ≠ χk (remember k ≥ 2) there is some h ∈ G such that χ1(h) ≠ χk(h). For any g ∈ G let us now insert gh into the central equation; this yields

∑i=1..k−1 bi χi(g)χi(h) + χk(g)χk(h) = 0

On the other hand (for any g ∈ G) we may first insert g into the central equation and afterwards multiply with χk(h). This yields another equation

∑i=1..k−1 bi χi(g)χk(h) + χk(g)χk(h) = 0

Subtracting these two equations we obtain another identity (for any g ∈ G)

∑i=1..k−1 bi (χi(h) − χk(h)) χi(g) = 0

As any ci := bi (χi(h) − χk(h)) ∈ F is a scalar we have found a linear dependence with fewer than k elements. As k has been minimal this can only be if ci = 0 for any i ∈ 1 . . . (k − 1). But by construction we have c1 = b1 (χ1(h) − χk(h)) ≠ 0, a contradiction. Thus k could not have been chosen in the first place, and this is just the linear independence of the χi.

□


Proof of (9.8):

(i) From the definition of a^G it is clear that a^G = a ∩ R^G. Yet R : R^G is a ring extension and hence it is a general fact that a ∩ R^G ⊴i R^G.

(ii) We first prove the well-definedness of the operation: consider b, b′ ∈ R with b + a = b′ + a. That is b′ − b ∈ a and hence (as a is G-invariant) gb′ − gb = g(b′ − b) ∈ ga ⊆ a. And this again yields gb + a = gb′ + a. Now as b 7→ gb + a is nothing but the composition b 7→ gb 7→ gb + a we also find that b + a 7→ gb + a is a homomorphism. That is the operation even is algebraic.

(iii) Let us first consider R^G → (R/a)^G : b 7→ b + a. This clearly is a well-defined homomorphism, as g(b + a) = gb + a = b + a. And the kernel of this homomorphism clearly is { b ∈ R^G | b ∈ a } = a ∩ R^G = a^G. Hence the map in the claim is well-defined and injective.

□

Proof of (9.9):
As (ef)(x) = f(e−1x) = f(x) we clearly have ef = f for any f ∈ A. And given g, h ∈ G, f ∈ A and x ∈ X it is straightforward to compute

(g(hf))(x) = (hf)(g−1x) = f(h−1g−1x) = f((gh)−1x) = ((gh)f)(x)

Thus G × A → A is a group action. And as the operations of A are defined pointwise, it is clear that this operation also is algebraic. Next we prove that I(M) is a G-invariant ideal. The properties of an ideal are immediately clear. Let us consider f ∈ I(M), g ∈ G and any x ∈ M. As M is G-invariant we get g−1x ∈ g−1M ⊆ M. And as f ∈ I(M) this yields (gf)(x) = f(g−1x) = 0. As x has been arbitrary, this proves gf ∈ I(M), which also yields the G-invariance gI(M) ⊆ I(M). It remains to prove

A^G/I(M)^G ∼→ (A/I(M))^G : f + I(M)^G 7→ f + I(M)

We have already proved the well-definedness and injectivity of this map in (9.8.(iii)). For the surjectivity we are given some f + I(M) which is G-invariant. That is gf + I(M) = f + I(M) for any g ∈ G; in other words (gf)(x) = f(g−1x) = f(x) for any g ∈ G and any x ∈ M. Let us denote by 1M : X → R the characteristic function of M and let f̃ := 1M f. If x ∉ M then we clearly have f̃(x) = 1M(x)f(x) = 0. And as X \ M is G-invariant as well, we get 1M(g−1x) = 0 and hence (gf̃)(x) = f̃(g−1x) = 1M(g−1x)f(g−1x) = 0, too. Thus f̃(x) = 0 = (gf̃)(x) for any x ∈ X \ M. Conversely if x ∈ M then 1M(x) = 1 = 1M(g−1x), which yields (gf̃)(x) = f̃(g−1x) = 1M(g−1x)f(g−1x) = f(g−1x) = (gf)(x) = f(x) = 1M(x)f(x) = f̃(x), as x ∈ M. Altogether we have proved gf̃ = f̃ (for any g ∈ G), which is the G-invariance of f̃. And we have also seen that f̃(x) = f(x) for any x ∈ M, which means nothing but f̃ + I(M) = f + I(M). Hence f̃ + I(M)^G 7→ f̃ + I(M) = f + I(M) under the map in question.

□


Proof of (9.7):
We will first prove that G × R[A] → R[A] truly is a group action. To do this consider any g, h ∈ G and any f ∈ R[A]. By definition we clearly have ef = f. And further we find

g(hf) = g(∑α∈A (h f[α]) t^α) = ∑α∈A g(h f[α]) t^α = ∑α∈A ((gh) f[α]) t^α = (gh)f

From the homomorphism property of R → R : a 7→ ga we find that (g, f) 7→ gf truly is an algebraic operation (we omit the tedious computations). So it only remains to prove R[A]^G = R^G[A]. The inclusion ⊇ is trivial. Conversely consider any f ∈ R[A]^G. Then we have to show that for any α ∈ A and any g ∈ G we get g f[α] = f[α]. Yet to see this it suffices to compare coefficients in

∑α∈A (g f[α]) t^α = gf = f = ∑α∈A f[α] t^α

□


Chapter 10

Multilinear Algebra

10.1 Multilinear Maps

10.2 Duality Theory

10.3 Tensor Product of Modules

10.4 Tensor Product of Algebras

10.5 Tensor Product of Maps

10.6 Differentials

Matsumura - commutative ring theory - chapter 9


Chapter 11

Categorial Algebra

11.1 Sets and Classes

App: Cantor's definition of sets, App: Russell's antinomy, Rem: formalisation - the language Lset, Def: axiomatisation of set theory (ZFC) and consequences (the foundation axiom, the axiom of choice), Lem: AC ⇐⇒ Zorn ⇐⇒ well-ordering theorem, Def: Peano's axioms, App: construction of N, Def: axioms of the arithmetic, Def: construction of Z, Q and R, App: heuristical definition of classes (2-nd level sets), Exp: the universe of sets, Rem: formalisation - the language Lclass, Def: axioms of classes (NBG)

11.2 Categories and Functors

definition of categories, (full) subcategories, small categories, ordinary categories, monomorphisms, epimorphisms, isomorphisms, definition of functors, Rem: covariant, contravariant via opposite category, composition of functors, isofunctors, representable functors, Yoneda's lemma, natural equivalence, equivalence of categories

(11.1) Definition:
Let C be any category; then an object U ∈ obj(C) is said to be universal, iff for any other object X ∈ obj(C) there is a uniquely determined morphism ϕ from U to X. Formally iff

∀ X ∈ obj(C) : ∃ ! ϕ ∈ C(U, X)

(11.2) Lemma: (viz. 277)
If C is an arbitrary category, then any two universal objects U and V ∈ obj(C) are isomorphic: U ∼= V.

Proof of (11.2):
As U is universal there is a unique morphism Φ : U → V; likewise as V is universal there is a unique morphism Ψ : V → U. Hence ΦΨ : U → U is a morphism, and as 11U : U → U is another morphism the uniqueness (U is universal) implies ΦΨ = 11U. Likewise we find ΨΦ = 11V and hence Φ : U → V is an isomorphism, with inverse morphism Ψ : V → U.


□

11.3 Examples

categories: Set, Top, Ring, Dom, Mod(R), Loc, CS, C•, Bun(•), opposite category, functor category, functors: •/a, U−1•, hom, spec

11.4 Products

definition of (co)products, examples: ×, ⊕, ⊗ , fibered product, injectiveand projective objects, definition of (co)limits

11.5 Abelian Categories

additive categories and their properties, additive functors, examples, kernel, cokernel and their properties, examples, canonical decomposition of morphisms, definition of abelian categories, left/right-exactness, examples: •/a, U−1•, hom

11.6 Homology

chain (co)complexes, (co)homology modules, (co)homology morphisms, examples


Chapter 12

Ring Extensions

basics, transfer of ideals, integral extensions, dimension theory, [Eb, exercise 9.6] gives an example of an infinite-dimensional Noetherian ring


Chapter 13

Galois Theory


Chapter 14

Graded Rings

graduations, homogeneous ideals, Hilbert-Samuel polynomial, filtrations, completions


Chapter 15

Valuations

see Bourbaki (Commutative Algebra) VI, my lecture notes


Part II

The Proofs


Chapter 16

Proofs - Fundamentals

In this chapter we will finally present the proofs of all the statements (propositions, lemmas, theorems and corollaries) that have been given in the previous part. Note that the order of proofs is different from the order in which we gave the statements, though we have tried to stick to the order of the statements as tightly as possible. So we begin with proving the statements in section 1.1. In this section however we have promised to fill in a gap first: we still have to present a formal notion of applying brackets to a product. And also we have to establish a simple fact from elementary number theory:

Division with Remainder:
Consider any d ∈ N with d ≥ 1, which we call the divisor. Then for any x ∈ Z there are uniquely determined q, r ∈ Z such that

(1) x = qd+ r

(2) r ∈ 0 . . . (d− 1)

This result will be largely generalized in section 2.6 and proved in greater generality, but as we use this fact for Z we have to give a proof beforehand. The numbers q and r in the statement above can even be given explicitly

q = x div d := max{ k ∈ Z | kd ≤ x }
r = x mod d := x − (x div d) · d

Prob Existence: as x ≤ |x| ≤ |x| · d the set { k ∈ Z | kd ≤ x } is bounded from above by |x|. Hence the maximum q := max{ k ∈ Z | kd ≤ x } is well-defined. Also take r := x − qd ∈ Z. Then (1) x = qd + r is clear from the definition of r, and as qd ≤ x we also have r ≥ 0. Now suppose r ≥ d; then we would have 0 ≤ r − d = x − qd − d = x − (q + 1)d, such that (q + 1)d ≤ x. But as q has been maximal, this is absurd, which only leaves r < d, such that (2) is established as well.

Uniqueness: suppose x = qd + r and x = ud + v for some q, r, u and v ∈ Z with r and v ∈ 0 . . . (d − 1). Then 0 = x − x = qd + r − (ud + v) = (q − u)d + r − v. That is d divides (q − u)d = v − r. But as 0 ≤ v, r < d we have |v − r| < d and hence v − r can only be divisible by d if v − r = 0. That is r = v and therefore qd + r = ud + v implies qd = ud, which also yields q = u.

□
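As a quick sanity check, the explicit formulas for q and r translate directly into a small Python sketch (the helper name div_mod is ours, not the book's):

```python
# Division with remainder, transcribed from the explicit formulas:
# q = x div d := max{ k in Z | k*d <= x } and r = x mod d := x - q*d.

def div_mod(x: int, d: int) -> tuple[int, int]:
    assert d >= 1, "the divisor must satisfy d >= 1"
    # |x| bounds the set { k | k*d <= x } from above, so search downward
    q = abs(x)
    while q * d > x:
        q -= 1
    r = x - q * d
    assert 0 <= r < d and x == q * d + r
    return q, r
```

For positive divisors this agrees with Python's built-in divmod, which also produces a remainder in 0 . . . (d − 1).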


Formal Bracketing (♦) If we multiply (or sum up) several elements x1, . . . , xn ∈ G of a groupoid (G, ) we have got (n − 1) possibilities with which two elements we start - even if we keep their ordering. And we have to keep choosing until we finally get one element of G. In this light it is quite clear what we mean by saying "any way of applying parentheses to the product of the xi results in the same element". It just says that the result actually does not depend on our choices. However - as we do math - we wish to give a formal way of putting this. To do this first define a couple of mappings, for 2 ≤ n ∈ N and k ∈ 2 . . . (n − 2)

b(n,1) : Gn → Gn−1 : (x1, . . . , xn) 7→ (x1x2, x3, . . . , xn)

b(n,k) : Gn → Gn−1 : (x1, . . . , xn) 7→ (x1, . . . , xkxk+1, . . . , xn)

b(n,n−1) : Gn → Gn−1 : (x1, . . . , xn) 7→ (x1, . . . , xn−2, xn−1xn)

Now we can determine the sequel of choosing the parentheses simply by selecting an (n − 1)-tuple k in the following set

Dn := { k = (k1, . . . , kn−1) | ki ∈ 1 . . . (n − i) for i ∈ 1 . . . (n − 1) }

For fixed k ∈ Dn the bracketing (by k) now simply is the following mapping

Bk := b(2,kn−1) ◦ · · · ◦ b(n,k1) : Gn → G

Proof of (1.2) (associativity): (♦) We now wish to prove the fact that in a groupoid (G, ) the product of the elements x1, . . . , xn is independent of the order in which we applied parentheses to it. Formally this can now be put as: for any k ∈ Dn we get

Bk(x1, . . . , xn) = x1x2 . . . xn

We will prove this statement by induction on the number n of elements multiplied. In the cases n = 1 and n = 2 there only is one possibility how to apply parentheses and hence the statement is trivial. In the case n = 3 there are two possibilities, which are equal due to the associativity law (A) of (G, ). Thus we assume n > 3 and regard two ways of applying parentheses k, l ∈ Dn, writing them in terms of the last multiplication used

a := Bk(x1, . . . , xn) = Bp(x1, . . . , xi)Bp(xi+1, . . . , xn)

b := Bl(x1, . . . , xn) = Bq(x1, . . . , xj)Bq(xj+1, . . . , xn)

where i := k1, j := l1 and p := (k2, . . . , kn−1), q := (l2, . . . , ln−1). We may assume i ≤ j, and if i = j we are already done because of the induction hypothesis. Thus we have i < j and by the induction hypothesis the parentheses may be rearranged to

a = (x1 . . . xi)((xi+1 . . . xj)(xj+1 . . . xn))

b = ((x1 . . . xi)(xi+1 . . . xj))(xj+1 . . . xn)


Now the associativity law again implies a = b, and as k and l were arbitrary this implies a = x1x2 . . . xn for any bracketing k chosen.

□
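The bracketing maps b(n,k) and the index set Dn can be made concrete in a few lines; the sketch below (all names are ours) checks that for an associative but non-commutative operation - string concatenation - every k ∈ D5 yields the same result, as the proof asserts:

```python
from itertools import product

def apply_bracketing(xs, k, op):
    # B_k = b_(2, k_{n-1}) o ... o b_(n, k_1): at each step merge the
    # elements at (1-based) positions k_i and k_i + 1 of the current tuple
    xs = list(xs)
    for ki in k:
        xs[ki - 1:ki + 1] = [op(xs[ki - 1], xs[ki])]
    return xs[0]

def D(n):
    # all bracketing choices k = (k_1, ..., k_{n-1}) with k_i in 1 .. (n - i)
    return product(*(range(1, n - i + 1) for i in range(1, n)))

# concatenation is associative but not commutative - every bracketing of
# "abcde" must therefore give the same word
results = {apply_bracketing("abcde", k, lambda a, b: a + b) for k in D(5)}
```

For a non-associative operation such as subtraction, distinct bracketings genuinely differ, which is why the associativity law (A) is needed.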

Proof of (1.5):

• Let (G, ) be any group; then ee = e and hence e = e−1 by definition of the inverse element. Likewise if y = x−1 then xy = e = yx and hence x = y−1. Then we prove (xy)−1 = y−1x−1 by

(xy)(y−1x−1) = x(y(y−1x−1)) = x((yy−1)x−1) = x(ex−1) = xx−1 = e

(y−1x−1)(xy) = y−1(x−1(xy)) = y−1((x−1x)y) = y−1(ey) = y−1y = e

• Next we will prove (xk)−1 = (x−1)k by induction on k ≥ 0. If k = 0 then the claim is satisfied, as e−1 = e. Now we conduct the induction step, using what we have just proved:

(xk+1)−1 = (xkx)−1 = x−1(xk)−1 = x−1(x−1)k = (x−1)k+1

• Now assume that x and y commute, that is xy = yx. Then it is easy to see that x−1 and y−1 commute as well, with x−1y−1 = y−1x−1 = (xy)−1; just compute

y−1x−1 = (xy)−1 = (yx)−1 = x−1y−1

And by induction on k ≥ 0 it also is clear that xky = yxk. Now we will prove (xy)k = xkyk by induction on k. In the case k = 0 everything is clear: (xy)0 = e = ee = x0y0. Now compute

(xy)k+1 = (xy)k(xy) = (xkyk)(yx) = xk(yk(yx)) = xk(yk+1x) = xk(xyk+1) = xk+1yk+1

And for negative exponents we regard −k where k ≥ 0; then we easily compute

(xy)−k = ((xy)−1)k = (x−1y−1)k = (x−1)k(y−1)k = x−ky−k

• Note: in the last step of the above we have used (x−1)k = x−k. But we still have to prove this equality: as for any x ∈ G the elements x and x−1 commute (xx−1 = e = x−1x), we also obtain x−k = (xk)−1 = (x−1)k for any k ≥ 0; just compute

x−kxk = (x−1)kxk = (x−1x)k = ek = e

xkx−k = xk(x−1)k = (xx−1)k = ek = e

• This also allows us to prove (xk)l = xkl for any k, l ∈ Z. We will distinguish four cases - to do this assume k, l ≥ 0, then

(xk)l = xkxk . . . xk (l times) = xkl

(x−k)l = ((x−1)k)l = (x−1)kl = x−(kl) = x(−k)l

(xk)−l = ((xk)−1)l = ((x−1)k)l = x−(kl) = xk(−l)

(x−k)−l = (((x−1)k)−1)l = (((xk)−1)−1)l = (xk)l = xkl = x(−k)(−l)

• It remains to prove xkxl = xk+l for any k, l ∈ Z. As above we will distinguish four cases, that is we regard k, l ≥ 0. In this case xkxl = xk+l is clear by definition. Further we can compute

x−kx−l = (x−1)k(x−1)l = (x−1)k+l = x−(k+l) = x(−k)+(−l)

For the remaining two cases we first show x−1xl = xl−1. In case l ≥ 0 this is immediately clear. And in case l ≤ 0 we regard −l with l ≥ 0

x−1x−l = x−1(x−1)l = (x−1)l+1 = x−(l+1) = x(−l)−1

We are now able to prove x−kxl = x(−k)+l by induction on k. The case k = 0 is clear; for the induction step we compute

x−(k+1)xl = (x−1)k(x−1xl) = (x−1)kxl−1 = x−kxl−1 = x(−k)+(l−1) = x−(k+1)+l

The fourth case finally is xkx−l = xk+(−l), but as xk and x−l commute (use inductions on k and l) this readily follows from the above.

2

Proof of (1.7): The first statement that needs to be verified is that a subgroup P ≤g G truly invokes an equivalence relation x ∼ y ⇐⇒ y−1x ∈ P . This relation clearly is reflexive: x ∼ x due to x−1x = e ∈ P . And it also is transitive, because if x ∼ y and y ∼ z then y−1x ∈ P and z−1y ∈ P . Thereby z−1x = z−1(yy−1)x = (z−1y)(y−1x) ∈ P too, which again means x ∼ z. Finally it is symmetric as well: if x ∼ y then y−1x ∈ P and hence x−1y = (y−1x)−1 ∈ P


too, which is y ∼ x. Next we wish to prove that truly [x] = xP :

[x] = { y ∈ G | y ∼ x } = { y ∈ G | x−1y ∈ P } = { y ∈ G | ∃ p ∈ P : y = xp } = { xp | p ∈ P } = xP

So it only remains to prove the theorem of Lagrange. To do this we fix a system R of representatives of G/P . That is we fix a subset R ⊆ G such that R ←→ G/P : q 7→ qP is bijective (which is possible due to the axiom of choice). Then we obtain a bijection by letting

G ←→ (G/P ) × P : x 7→ (xP, q−1x) where q ∈ R with qP = xP

This map is well defined: as xP ∈ G/P there is a uniquely determined q ∈ R such that qP = xP . And for this we get q ∼ x, which means q−1x ∈ P . And it is injective: if xP = yP = qP then q−1x = q−1y yields x = y (by multiplication with q). Finally it also is surjective: given (qP, p) we let x := qp ∈ qP and thereby obtain q ∼ x and hence (xP, q−1x) = (qP, p).

□
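The coset machinery of this proof is easy to watch on a small example; the sketch below (our example, not the book's) takes G = Z/12 under addition and the subgroup P = {0, 3, 6, 9}, and checks that the cosets partition G into blocks of size #P, which is exactly the counting behind Lagrange:

```python
# G = Z/12 under addition, P = {0, 3, 6, 9} a subgroup (a toy example).
# In additive notation the relation of the proof reads x ~ y iff y - x in P,
# and the equivalence classes are the cosets x + P.
n = 12
G = set(range(n))
P = {0, 3, 6, 9}

cosets = {frozenset((x + p) % n for p in P) for x in G}

# the cosets partition G into blocks of size #P, so #P divides #G
assert set().union(*cosets) == G
assert all(len(c) == len(P) for c in cosets)
assert len(G) == len(cosets) * len(P)
```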

Proof of (1.8): We first have to prove that 〈X 〉o is a well-defined submonoid of G. As X ⊆ G and G itself is a monoid, the intersection runs over a non-empty set and hence is well-defined. Also, as e ∈ Xe, we clearly have e ∈ 〈X 〉o. And if x and y ∈ 〈X 〉o then for any P ≤o G with X ⊆ P we have x, y ∈ P . But as P is a submonoid this implies xy ∈ P again, and as this is true for any P we also get xy ∈ 〈X 〉o again.

Likewise if G is a group, then (by the same arguments as above) 〈X 〉g is a submonoid of G. It remains to prove that it even is a subgroup: if x ∈ 〈X 〉g then for any P ≤g G with X ⊆ P we have x ∈ P . But as P is a subgroup this implies x−1 ∈ P again, and as this is true for any P we also get x−1 ∈ 〈X 〉g again.

Next we will prove the explicit representation of 〈X 〉o; to do this let us denote the set of finite products of elements of Xe by

Q := { x1x2 . . . xn | n ∈ N, x1, . . . , xn ∈ Xe }

As e ∈ Xe it is clear that e ∈ Q. And if x = x1 . . . xm and y = y1 . . . yn ∈ Q then by construction we also have xy ∈ Q. That is: Q is a submonoid of G, and as X ⊆ Q is clear we have 〈X 〉o ⊆ Q. Conversely, as 〈X 〉o is a monoid with X ⊆ 〈X 〉o, we have e and x ∈ 〈X 〉o for any x ∈ X. That is Xe ⊆ 〈X 〉o. Therefore, if x1, . . . , xn ∈ Xe then x1 . . . xn ∈ 〈X 〉o again and this also is Q ⊆ 〈X 〉o. Together we have Q = 〈X 〉o.

We finally have to prove the explicit representation of 〈X 〉g; to do this let us abbreviate the set of finite products of elements of X± by

Q := { x1x2 . . . xn | n ∈ N, x1, . . . , xn ∈ X± }

If x ∈ X we also have x−1 ∈ X±, and as (x−1)−1 = x we find that x−1 ∈ X± for any x ∈ X±. That is X± is closed under the inversion of elements. Also e ∈ X± is trivial. And if x = x1 . . . xm and y = y1 . . . yn ∈ Q then by construction we also have xy ∈ Q. But also x−1 = xm−1 . . . x1−1 ∈ Q, as with xi ∈ X± we also have xi−1 ∈ X±. Hence Q is a subgroup of G, and as X ⊆ Q is clear we have 〈X 〉g ⊆ Q. Conversely, as 〈X 〉g is a group with X ⊆ 〈X 〉g, we have e, x and x−1 ∈ 〈X 〉g for any x ∈ X. That is X± ⊆ 〈X 〉g. Therefore, if x1, . . . , xn ∈ X± then x1 . . . xn ∈ 〈X 〉g again and this also is Q ⊆ 〈X 〉g. Together we have Q = 〈X 〉g.

□

Proof of (1.9): It is clear that Q := { xk | k ∈ Z } is a subgroup of G, as for any i, j ∈ Z we have xixj = xi+j ∈ Q and (xi)−1 = x−i ∈ Q. Also, as x ∈ Q, we get 〈x 〉g ⊆ Q. But as x ∈ 〈x 〉g and 〈x 〉g is a group we clearly have xk ∈ 〈x 〉g for any k ∈ Z and hence Q ⊆ 〈x 〉g as well. Altogether 〈x 〉g = Q.

Now suppose n := #G ∈ N is finite; then Q ⊆ G is finite, too, say k := #Q. If now i, j ∈ N with i < j then (by multiplication with x−i from the left) we clearly have

xi = xj ⇐⇒ xj−i = e

Thus if we had xp 6= e for every p ∈ N then xi 6= xj for any i < j and hence 〈x 〉g had infinitely many elements. As this is untrue, there has to be some p ∈ N with 1 ≤ p such that xp = e. Let m ∈ N be the minimal such number

m := min{ 1 ≤ p ∈ N | xp = e }

As xm = e is the first occurrence of e, we have xi 6= e for any 1 ≤ i < m. And this implies xi 6= xj for any i, j ∈ N with i 6= j and i, j < m. Hence we have found m distinct elements in 〈x 〉g, namely

{ xi | i ∈ 0 . . . (m− 1) } ⊆ 〈x 〉g

Conversely consider any xq ∈ 〈x 〉g and write q = pm + i by division of q by m with remainder i ∈ 0 . . . (m− 1). Then xq = xpm+i = (xm)pxi = epxi = xi. That is 〈x 〉g only consists of elements of the form xi for some 0 ≤ i < m, and hence the above sets even are equal. In particular we have m = k. And as 〈x 〉g is a subgroup of G the theorem of Lagrange (1.7) truly implies k | n.

□
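As an illustration of (1.9), the sketch below (names and example ours) computes ord(x) = #〈x 〉g for every element of the unit group (Z/15)∗ and checks that it divides the group order, as Lagrange demands:

```python
from math import gcd

# the unit group (Z/15)* under multiplication, of order 8 (a toy example)
n = 15
units = [x for x in range(1, n) if gcd(x, n) == 1]

def order(x):
    # minimal m >= 1 with x^m = 1 mod n; exists since the group is finite
    m, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        m += 1
    return m

# ord(x) = #<x>_g divides #G, exactly as the proof above shows
assert all(len(units) % order(x) == 0 for x in units)
```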

Proof of (1.10): If σ and τ : X → X then the composition στ : x 7→ σ(τ(x)) is another map of the form στ : X → X. And if both σ and τ are bijective, so is στ, with inverse (στ)−1 = τ−1σ−1. Hence SX is closed under composition. And the associativity of the composition is well-known. Also 11X : x 7→ x is bijective and hence 11X ∈ SX . Hence SX is a monoid under composition. And as σ−1 is the inverse element of σ and bijective again, we have σ−1 ∈ SX , such that SX is a group.

□

Proof of (1.12):


(i) By induction on the number of elements n := #X: In case n = 1 we have X = { x } for some single element x. Hence the one and only map on X is given by σ(x) = x. In particular #SX = 1 = 1! = (#X)!. Now for the induction step we consider a set with n + 1 elements X = { x1, . . . , xn, xn+1 }. Then we can decompose SX into the following disjoint union

SX = F (1) ∪ . . . ∪ F (n+ 1) where F (i) := { σ ∈ SX | σ(xn+1) = xi }

Clearly, if σ(xn+1) = xi, then σ induces a bijection between the sets X(n+ 1) := X \ { xn+1 } and X(i) := X \ { xi }. As X(i) always contains n elements we can choose a bijection αi : X(i) ←→ X(n+ 1). And this now gives rise to a bijection between

F (i) ←→ SX(n+1) : σ 7→ αi ◦ σ|X(n+1)

In particular for any i ∈ 1 . . . (n + 1) the set F (i) contains as many elements as SX(n+1). And by the induction hypothesis, as #X(n+ 1) = n, this is n!. So altogether

#SX = #F (1) + · · · + #F (n+ 1) = (n+ 1)n! = (n+ 1)!

(ii) First of all we see that for any g, h and x ∈ G the associativity of G gives rise to the following equality: Lgh(x) = (gh)x = g(hx) = Lg(hx) = Lg(Lh(x)) = LgLh(x). And as x ∈ G has been arbitrary this is Lgh = LgLh. Also it is clear that Le(x) = ex = x = 11G(x), such that Le = 11G. In particular Lg−1 is the inverse map of Lg since

Lg−1Lg = Lg−1g = Le = 11G

LgLg−1 = Lgg−1 = Le = 11G

In particular Lg : G → G not only is a map, but even a bijective one. Hence Lg ∈ SG and this makes L well-defined. It remains to show the injectivity of L, but this is easy: if Lg = Lh then g = ge = Lg(e) = Lh(e) = he = h.

□

Proof of (1.13):

(i) As S := { σk | k ∈ Z } is a subgroup of the finite group Sn (this has been shown in (1.12.(i))), we know from (1.9) that there is some k ∈ N with σk = 11 , say k = ord(σ). In particular σk(i) = i, such that the cycle cyc(σ, i) is finite, with `(σ, i) ≤ k = ord(σ).

(ii) Suppose we had b := σ(a) ∈ fix(σ); then we had σ(a) = b = σ(b). But as σ is injective this implies a = b ∈ fix(σ), which is absurd, since we started with a ∈ dom(σ). Hence we can restrict σ to

σ : dom(σ) → dom(σ) : a 7→ σ(a)


But as σ is injective and dom(σ) is a finite set (it is contained in 1 . . . n), this implies that the restriction of σ even is surjective; in particular σ(dom(σ)) = dom(σ).

If now i ∈ dom(σ) then we will use induction on k to prove that any σk(i) lies within dom(σ): The case k = 0 is true by assumption, σ0(i) = i ∈ dom(σ). Now σk+1(i) = σ(σk(i)), and by the induction hypothesis we know σk(i) ∈ dom(σ). Hence σk+1(i) ∈ σ(dom(σ)) = dom(σ) again.

(iii) If % = σ then dom(%) = dom(σ) and %(a) = σ(a) for any a ∈ D are trivial, as we even have %(a) = σ(a) for any a ∈ 1 . . . n. Conversely let D = dom(%) = dom(σ). If a ∈ D then %(a) = σ(a) by assumption. But if a 6∈ D then we have a ∈ fix(%), such that %(a) = a. Likewise a ∈ fix(σ), such that σ(a) = a = %(a). Hence we have %(a) = σ(a) for any a and this is % = σ as claimed.

(iv) If a ∈ dom(%) then by assumption we have a 6∈ dom(σ), such that a ∈ fix(σ). In particular we get

%σ(a) = %(a)

And as we know %(a) ∈ dom(%) again, because of (ii), we likewise have %(a) ∈ fix(σ), such that we also find

σ%(a) = %(a)

That is %σ(a) = %(a) = σ%(a) for any a ∈ dom(%). In complete analogy (by interchanging the roles of % and σ) we find (for any a ∈ dom(σ))

σ%(a) = σ(a) = %σ(a)

And if finally a ∈ (1 . . . n) \ (dom(%) ∪ dom(σ)) = fix(%) ∩ fix(σ) then it is clear that %σ(a) = %(a) = a and also σ%(a) = σ(a) = a. In any case we have seen (for any a ∈ 1 . . . n) that %σ(a) = σ%(a), which is the commutativity %σ = σ%.

□

Proof of (1.15):

• As long as the order of the ik is maintained, we still have ik 7→ ik+1 (and i` 7→ i1) no matter with which element ik we begin. Hence

(i1 i2 . . . i`) = (i2 i3 . . . i` i1) = . . . = (i` i1 i2 . . . i`−1)

• Let us denote ζ := (i1 i`)(i1 i`−1) . . . (i1 i2); then given any a = iq let us regard ζ from the right: iq is unaffected by any (i1 ip) with p < q. Hence iq first hits (i1 iq), which maps iq 7→ i1. Now i1 immediately strikes (i1 iq+1), mapping i1 7→ iq+1. And iq+1 passes through unhampered, as it only meets (i1 ir) with r > q + 1 from now on. Altogether we have ζ : iq 7→ i1 7→ iq+1, that is ζ(iq) = iq+1. A special case is a = i`, as this only strikes (i1 i`) at the very end, such that ζ : i` 7→ i1. Therefore ζ = (i1 i2 . . . i`).


• By induction on d := j − i ∈ N. The case d = 1 is trivial, as (i j) = (i i+ 1) in this case. For d ≥ 2 we have to regard (i j) = (i i+ d). To do this we first prove that (i i+ d) = (i i+ 1)(i+ 1 i+ d)(i i+ 1) by explicitly computing; applying (i i+ 1), then (i+ 1 i+ d), then (i i+ 1) again (reading left to right) we find

i 7→ i+ 1 7→ i+ d 7→ i+ d
i+ 1 7→ i 7→ i 7→ i+ 1
i+ d 7→ i+ d 7→ i+ 1 7→ i

As the domain of ζ = (i i+ 1)(i+ 1 i+ d)(i i+ 1) is dom(ζ) = { i, i+ 1, i+ d }, it is clear that ζ(a) = a for any a 6∈ { i, i+ 1, i+ d }. And as ζ(i) = i+ d, ζ(i+ 1) = i+ 1 and ζ(i+ d) = i, we have truly shown that ζ = (i i+ d). But by the induction hypothesis (i+ 1 i+ d) is of distance d− 1 and can hence be written as a composition of adjacent transpositions

(i+ 1 i+ d) = (i+ 1 i+ 2) . . . (i+ d− 1 i+ d) . . . (i+ 1 i+ 2)

Inserting this into (i i+ d) = (i i+ 1)(i+ 1 i+ d)(i i+ 1) we have also proved the case d ≥ 2 and hence the claim for any transposition, as d may be arbitrarily large.

• Let us denote the cycle ξ := (σ(i1) σ(i2) . . . σ(i`)). Then it is clear that dom(ξ) = { σ(i1), σ(i2), . . . , σ(i`) }, and for any k ∈ 1 . . . `− 1 we have ξ(σ(ik)) = σ(ik+1), also ξ(σ(i`)) = σ(i1). But on the other hand it is easy to compute σζσ−1(σ(ik)) = σζ(ik) = σ(ik+1) and likewise σζσ−1(σ(i`)) = σζ(i`) = σ(i1). Hence we have seen σζσ−1(a) = ξ(a) for any a = σ(ik) (where k ∈ 1 . . . `). If now a ∈ 1 . . . n is none of the σ(ik) (that is a 6∈ dom(ξ)) then σ−1(a) 6∈ { i1, i2, . . . , i` } = dom(ζ). Therefore we have σζσ−1(a) = σ(σ−1(a)) = a = ξ(a) again. That is σζσ−1 and ξ share the same domain and agree on it, such that by (1.13.(iii)) they are equal.

□
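The identity (i1 i2 . . . i`) = (i1 i`)(i1 i`−1) . . . (i1 i2) from the second bullet can be checked mechanically; below is a sketch (all helper names are ours) with permutations on 1 . . . 7 stored as dicts:

```python
# permutations on 1..n stored as dicts; (s o t)(a) = s(t(a))
def compose(s, t):
    return {a: s[t[a]] for a in t}

def transposition(n, a, b):
    p = {i: i for i in range(1, n + 1)}
    p[a], p[b] = b, a
    return p

def cycle(n, elems):
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(elems, elems[1:] + elems[:1]):
        p[a] = b
    return p

# build (i1 il)(i1 i(l-1)) ... (i1 i2) for the cycle (2 5 3 6) in S_7
n, c = 7, [2, 5, 3, 6]
prod = {i: i for i in range(1, n + 1)}
for j in range(1, len(c)):
    # left-multiply, so that (i1 i2) acts first and (i1 il) acts last
    prod = compose(transposition(n, c[0], c[j]), prod)

assert prod == cycle(n, c)
```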

Proof of (1.16):

• (a) =⇒ (b): by assumption we have σ = (i1 i2 . . . i`) for some pairwise distinct ik ∈ 1 . . . n. Therefore it is clear that

dom(σ) = { i1, i2, . . . , i` }

Let us abbreviate i := i1; then by induction on k it is clear that ik = σk−1(i), because σk(i) = σ(σk−1(i)) = σ(ik) = ik+1. If we are now given any a, b ∈ dom(σ), then there are r and s ∈ 1 . . . ` such that a = ir = σr−1(i) and b = is = σs−1(i). If s ≥ r then we can readily take k := s− r ∈ N to see

σk(a) = σs−rσr−1(i) = σs−1(i) = b

And if s < r then we take k := `+ s− r ∈ N (which is positive, since r ≤ `). Note that σ`(i) = σσ`−1(i) = σ(i`) = i1 = i, such that we can likewise compute

σk(a) = σ`+s−rσr−1(i) = σs−1σ`(i) = σs−1(i) = b

Thus in any case we have seen that b = σk(a), which had to be shown.

• (b) =⇒ (c): choose any i ∈ dom(σ); then by assumption, for any b ∈ dom(σ) we have b = σk(i) for some sufficient k ∈ N. That is

dom(σ) ⊆ { σk(i) | k ∈ N }

But the converse inclusion is a general truth for any permutation, as we have shown in (1.13.(ii)). And hence the above inclusion even is an equality of sets. Now (c) =⇒ (d) is trivial - note that dom(σ) 6= ∅, as we assumed σ 6= 11 .

• (d) =⇒ (e): all we have to do is prove that the higher powers σk(i) are already contained in { σr(i) | r ∈ 0 . . . `− 1 }. So consider any k ≥ `; then we use division with remainder to write k as k = m`+ r for some m ∈ N and r ∈ 0 . . . `− 1. As ` = ord(σ) we have σ` = 11 , such that we are enabled to compute

σk(i) = σm`+r(i) = (σ`)mσr(i) = 11mσr(i) = σr(i)

Hence any σk(i) is contained in { σr(i) | r ∈ 0 . . . `− 1 }, such that we get dom(σ) = { σr(i) | r ∈ 0 . . . `− 1 } due to the assumption (d).

• (e) =⇒ (a): let us take ik := σk−1(i) for any k ∈ 1 . . . `. We first prove that these elements are truly pairwise distinct: Suppose ir = is for some r 6= s; without loss of generality we can take r < s. Then σr−1(i) = σs−1(i) implies σs−r(i) = i (by multiplication with σ1−r from the left). That is, there would have to be some h = s− r ∈ N with σh(i) = i. Thereby h = s− r < s ≤ `, such that h ∈ 1 . . . `− 1. As ` = ord(σ) is the minimal number with σ` = 11 , this implies σh 6= 11 , that is there has to be some a ∈ 1 . . . n with σh(a) 6= a. In particular a ∈ dom(σ), as else we would also have σh(a) = a. By assumption there hence is some k ∈ 0 . . . `− 1 such that a = σk(i). Now compute

σh(a) = σhσk(i) = σkσh(i) = σk(i) = a

in contradiction to σh(a) 6= a. Hence the assumption ir = is must have been false, that is the ik are pairwise distinct. Hence we are entitled to define ζ := (i1 i2 . . . i`). Then for any k ∈ 1 . . . `− 1 it is clear that ζ(ik) = ik+1 = σk(i) = σ(σk−1(i)) = σ(ik). However we also find ζ(i`) = i1 = i = σ`(i) = σ(σ`−1(i)) = σ(i`). That is we have shown that ζ(a) = σ(a) for any a ∈ dom(ζ). But as

dom(ζ) = { i1, i2, . . . , i` } = { σk(i) | k ∈ 0 . . . `− 1 } = dom(σ)

is true by assumption, this implies ζ = σ. That is we have shown that σ truly is the cycle generated by the ik.


• The fact that ` = ord(σ) can be easily seen directly. But the implication (a) =⇒ (e) already yields ord(σ) = `, as the length ` of the cycle appears in (e) as the order of σ, since σk−1(i) = ik.

□

Proof of (1.17):

(i) Let us abbreviate X := 1 . . . n and S := { σk | k ∈ Z }. Then this proof could be abbreviated by noting that S × X → X : (τ, i) 7→ τ(i) is a group action with the orbits Si = cyc(σ, i). Anyhow we want to give a direct proof that does not require group actions:

Clearly ∼ is an equivalence relation: i ∼ i is true, due to i = σ0(i), and if i ∼ j with j = σp(i) and j ∼ k with k = σq(j), then clearly k = σq(σp(i)) = σp+q(i), such that i ∼ k again. Also let z := ord(σ) ∈ N; then σz = 11 , such that (σk)−1 = σ−k = σzσ−k = σz−k. Hence if i ∼ j with j = σk(i) then i = σz−k(j), such that j ∼ i again. And the equivalence class is obviously given to be

[i] = { j ∈ X | j ∼ i } = { σk(i) | k ∈ N } = cyc(σ, i)

If now j ∈ cyc(σ, i), then j is of the form j = σk(i) for some k ∈ N. In particular σ(j) = σk+1(i) ∈ cyc(σ, i) again. And hence we get a well-defined map

ζi : cyc(σ, i) → cyc(σ, i) : a 7→ σ(a)

Once more, as σ is injective and cyc(σ, i) is a finite set, we even get σ(cyc(σ, i)) = cyc(σ, i), such that the above ζi becomes bijective. We now expand ζi to 1 . . . n trivially by letting ζi(a) := a for any a 6∈ cyc(σ, i). Then it is clear that

dom(ζi) = cyc(σ, i)

And as ζi acts on cyc(σ, i) = { i, σ(i), . . . , σ`(σ,i)−1(i) } just like σ does, it also is clear that ζi = (i σ(i) . . . σ`(σ,i)−1(i)). Now, as ∼ restricts to an equivalence relation on dom(σ) [for a ∈ fix(σ) the equivalence class is just [a] = { a }], we may pick a system of representatives i(1), . . . , i(r). As the cycles are disjoint, cyc(σ, i(p)) ∩ cyc(σ, i(q)) = ∅ for p 6= q ∈ 1 . . . r, we find from (1.13.(iv)) that the respective cycles commute

p 6= q ∈ 1 . . . r =⇒ ζi(p)ζi(q) = ζi(q)ζi(p)

Also if a ∈ dom(σ) then there is precisely one p ∈ 1 . . . r such that a ∈ cyc(σ, i(p)) = dom(ζi(p)). In particular we have a ∈ fix(ζi(q)) for any q 6= p. And hence we get

(ζi(1) . . . ζi(r))(a) = ζi(p)(∏q 6=p ζi(q))(a) = ζi(p)(a) = σ(a)


Thereby we only used the fact that we can bring ζi(p) to the front by successively interchanging two cycles (which is obvious by induction on r). And if a ∈ fix(σ) then by construction we also have a ∈ fix(ζi(p)) for any p ∈ 1 . . . r, such that (ζi(1) . . . ζi(r))(a) = a = σ(a). In any case we have found σ(a) = (ζi(1) . . . ζi(r))(a), such that the equality σ = ζi(1) . . . ζi(r) is finally established, as well.
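The cycle decomposition constructed in item (i) is exactly what the following sketch computes: walk each unvisited point a ∈ dom(σ) through a, σ(a), σ2(a), . . . until it returns (function name and representation are ours):

```python
def cycle_decomposition(sigma):
    # sigma: a dict on 1..n; returns the cycles of length >= 2 as tuples,
    # each cycle written starting from its smallest unvisited point
    seen, cycles = set(), []
    for a in sorted(sigma):
        if a in seen or sigma[a] == a:  # skip fix(sigma)
            continue
        cyc, b = [], a
        while b not in seen:
            seen.add(b)
            cyc.append(b)
            b = sigma[b]
        cycles.append(tuple(cyc))
    return cycles

# sigma = (1 3 2)(4 5) as a map on 1..5
sigma = {1: 3, 3: 2, 2: 1, 4: 5, 5: 4}
```

The returned cycles have pairwise disjoint domains, matching the uniqueness statement proved in item (ii) below.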

(ii) Let us abbreviate ζ := ζ1 . . . ζk and ξ := ξ1 . . . ξt. We will prove this statement by induction on m := min{ k, t }. For m = 1 we already have ζ1 = ξ1, such that there is nothing to prove. For m ≥ 2 consider any a ∈ dom(ζ1). By assumption this implies a 6∈ dom(ζi) for any i 6= 1. Hence we get

ζ(a) := ζ1 . . . ζk(a) = ζ1(a)

In particular a ∈ dom(ζ) = dom(ξ). Likewise there is a uniquely determined r ∈ 1 . . . t such that a ∈ dom(ξr), as for any s 6= r we get a 6∈ dom(ξs). Hence - by (1.13.(ii)) - we have ξr(a) ∈ dom(ξr) again, such that ξr(a) 6∈ dom(ξs) for any s 6= r, as well. This implies

ξ(a) := ξ1 . . . ξt(a) = ξr(a)

Altogether ζ1(a) = ζ(a) = ξ(a) = ξr(a). We will now prove dom(ζ1) = dom(ξr): given any b ∈ dom(ζ1) there is some p ∈ N such that b = ζ1^p(a), by virtue of (1.16.(b)), as ζ1 is a cycle. Therefore b = ζ1^p(a) = ξr^p(a) ∈ dom(ξr) by (1.13.(ii)) again. Altogether this is

dom(ζ1) ⊆ dom(ξr)

Reversing the roles of ζ1 and ξr we find the converse inclusion, such that the domains are equal. In particular, given any a ∈ dom(ζ1) we can repeat the above calculation, to find

∀ a ∈ dom(ζ1) = dom(ξr) : ζ1(a) = ξr(a)

Clearly this implies ζ1 = ξr. Recall that the domains of the ξs were assumed to be pairwise disjoint, such that by (1.13.(iv)) the ξs are mutually commutative. Then we can compute

ζ2 . . . ζk = ζ1^−1 ζ = ξr^−1 ξ = ∏s 6=r ξs

Hence by the induction hypothesis we find k − 1 = t − 1 and some permutation α such that ζi = ξα(i) for any i ∈ 2 . . . k. Clearly this implies k = t, and we can extend α by letting α(1) := r.

(iii) By (iv) σ can be written as a composition of cycles ζj := ζi(j) with pairwise disjoint domains. And by (v) these cycles are uniquely determined. Hence this item (vi) is an immediate result of (iv) and (v).

(iv) By (iv) any permutation σ ∈ Sn can be written as a composition of cycles σ = ζ1 . . . ζr. And according to (1.15) any cycle ζi can be written as a composition of transpositions

ζi = ∏j∈J(i) τi,j

Again, according to (1.15), any transposition τi,j can be written as a composition of adjacent transpositions

τi,j = ∏k∈K(i,j) αi,j,k

Altogether we can decompose σ into a - admittedly very large - composition of adjacent transpositions αi,j,k by putting all this together.

(v) The implication (b) =⇒ (a) is quite easy: pick up the cyclic decomposition % = ζ1 . . . ζk as in (iii) and let ξi := πζiπ−1 for any i ∈ 1 . . . k. Then by (1.15) ξi is a cycle of the same length, ord(ξi) = ord(ζi), again. And as π is bijective, the cycles ξi are pairwise disjoint again. Now compute

σ = π%π−1 = πζ1ζ2 . . . ζkπ−1 = (πζ1π−1)(πζ2π−1) . . . (πζkπ−1) = ξ1ξ2 . . . ξk

That is we have found a cyclic decomposition of σ again. And by construction this decomposition satisfies the properties of (a). But as the cyclic decomposition of σ is unique (up to ordering, as we have seen in (ii)), (a) is true for any cyclic decomposition of σ.

The converse implication (a) =⇒ (b) only requires little more effort: Let us abbreviate `(i) := ord(ζi), the length of ζi. That is there are some numbers ai,j ∈ 1 . . . n such that ζi can be written as

ζi = (ai,1 ai,2 . . . ai,`(i))

And by assumption we have ord(ξα(i)) = ord(ζi) = `(i), such that there also are numbers bi,j ∈ 1 . . . n such that ξα(i) can be written as

ξα(i) = (bi,1 bi,2 . . . bi,`(i))

As the domains of the ζi and the ξr are pairwise disjoint, we obtain a disjoint union of sets (that compose the domain of %)

D := ⋃i∈1...k { ai,j | j ∈ 1 . . . `(i) }

And this enables us to define a permutation π ∈ Sn by letting π(ai,j) := bi,j for any i ∈ 1 . . . k and j ∈ 1 . . . `(i), and π(a) := a for any a 6∈ D.


Then by (1.15) it is clear that

πζiπ−1 = π(ai,1 ai,2 . . . ai,`(i))π−1 = (π(ai,1) π(ai,2) . . . π(ai,`(i))) = (bi,1 bi,2 . . . bi,`(i)) = ξα(i)

Hence we find (reversing the computation in the case (b) =⇒ (a) above) that σ and % are truly conjugates under π

π%π−1 = π(ζ1 . . . ζk)π−1 = (πζ1π−1) . . . (πζkπ−1) = ξα(1) . . . ξα(k) = σ

□

Proof of (1.2) (commutativity): We will now prove that in a commutative groupoid (G, ) the order in which the elements x1, . . . , xn ∈ G are multiplied has no impact on the result of the composition. That is for any permutation σ ∈ Sn we have

xσ(1)xσ(2) . . . xσ(n) = x1x2 . . . xn

As we have seen in (1.17.(iv)), any permutation σ ∈ Sn can be expressed as a composition of adjacent transpositions

σ = α1 α2 . . . αs

We will now prove the claim by induction on the number of transpositions s. In case s = 1 we have to deal with one adjacent transposition, say α = (j j + 1). Then we have

xα(1)xα(2) . . . xα(n) = x1 . . . xj−1 xj+1 xj xj+2 . . . xn

But by the commutativity rule (C) we can interchange xj+1 and xj in this product to find xα(1)xα(2) . . . xα(n) = x1 . . . xjxj+1 . . . xn. In case s ≥ 2 we let α := α1 and % := α2 . . . αs. Then it is clear that σ = α%. And as in the case s = 1 resp. by the induction hypothesis we get

xσ(1) . . . xσ(n) = xα%(1) . . . xα%(n) = x%(1) . . . x%(n) = x1 . . . xn

□

Proof of (1.21):

(i) Let us denote the set ∆ := { (i, j) ∈ (1 . . . n)2 | i < j } and the map δ : (1 . . . n)2 → Z : (i, j) 7→ j − i. For p, q ∈ N we let p ∧ q be the minimum and p ∨ q be the maximum of p and q. At last we define


Thereby Σ is well defined, since 1 ≤ σ(i) ∧ σ(j) < σ(i) ∨ σ(j) ≤ n. In fact Σ even is bijective, as we can explicitly give its inverse to be

Σ−1 : ∆→ ∆ : (i, j) 7→ (σ−1(i) ∧ σ−1(j), σ−1(i) ∨ σ−1(j))

As ∆ is finite it suffices to check Σ−1Σ = 11∆: let (p, q) := Σ(i, j); then p and q ∈ { σ(i), σ(j) } and p < q. Hence σ−1(p) and σ−1(q) ∈ { i, j }, so ordering them we get Σ−1(p, q) = (i, j) again, which is the bijectivity of Σ. Also we easily compute

δΣ(i, j) = σ(j)− σ(i) if σ(j) > σ(i), and δΣ(i, j) = σ(i)− σ(j) if σ(j) < σ(i)

That is δΣ(i, j) = ±(σ(j)− σ(i)), where the sign is −1 precisely when (i, j) is an inversion of σ; the number of such pairs is I(σ).

After all these preparations we can proceed to the main computation: we use the fact that (due to the commutativity of Z) the product can be permuted arbitrarily without changing its result. Here we have

∏(i,j)∈∆ (j − i) = ∏(i,j)∈∆ δ(i, j) = ∏(p,q)∈Σ(∆) δ(p, q) = ∏(i,j)∈∆ δΣ(i, j) = (−1)I(σ) ∏(i,j)∈∆ (σ(j)− σ(i))

And from here we are almost done; note that the product ∏(i,j)∈∆ (j − i) is non-zero, as we always have i < j. Also 1/(−1)I(σ) = (−1)I(σ), as this number is either +1 or −1. Hence dividing by the product and by (−1)I(σ) we finally get

(−1)I(σ) = ∏(i,j)∈∆ (σ(j)− σ(i)) / ∏(i,j)∈∆ (j − i) = ∏(i,j)∈∆ (σ(j)− σ(i))/(j − i)
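Both descriptions of the signum in this item - the inversion count I(σ) and the quotient of products - can be compared directly; the sketch below (names are ours, permutations stored as 0-based tuples of values) checks that they agree on a full symmetric group:

```python
from itertools import combinations, permutations

def sgn(sigma):
    # (-1)^I(sigma), where I(sigma) counts the inversions i < j with
    # sigma(i) > sigma(j); sigma is a 0-based tuple of values
    inversions = sum(1 for i, j in combinations(range(len(sigma)), 2)
                     if sigma[i] > sigma[j])
    return (-1) ** inversions

def sgn_by_product(sigma):
    # the product formula: prod over i < j of (sigma(j) - sigma(i)) / (j - i)
    num, den = 1, 1
    for i, j in combinations(range(len(sigma)), 2):
        num *= sigma[j] - sigma[i]
        den *= j - i
    return num // den  # exact: the quotient is +1 or -1

# both descriptions agree on all of S_4
assert all(sgn(p) == sgn_by_product(p) for p in permutations(range(4)))
```

The integer division in sgn_by_product is exact because, by the bijection Σ above, the two products agree up to sign.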

(iv) Let us continue with the notation introduced in (i) above. Then we have already shown the following identity for any permutation σ

(−1)I(σ) ∏(i,j)∈∆ (j − i) = ∏(i,j)∈∆ (σ(j)− σ(i))

Using this identity for the composition %σ and for the permutations σ and % separately, we can again compute

(−1)I(%σ) ∏(i,j)∈∆ (j − i) = ∏(i,j)∈∆ (%σ(j)− %σ(i)) = (−1)I(σ) ∏(i,j)∈∆ (%(j)− %(i)) = (−1)I(σ)(−1)I(%) ∏(i,j)∈∆ (j − i)

Hence dividing by the product ∏(i,j)∈∆ (j − i) we have established the homomorphism property of the signum, expressed in terms of (i):

sgn(%σ) = (−1)I(%σ) = (−1)I(σ)(−1)I(%) = (−1)I(%)(−1)I(σ) = sgn(%) sgn(σ)

(iii) Consider the permutation τ = (a b) where (without loss of generality) a < b. Then it is easy to see that τ has the following inversions

{ (i, j) ∈ ∆ | τ(i) > τ(j) } = { (a, a+ s) | 1 ≤ s ≤ b− a } ∪ { (a+ r, b) | 1 ≤ r < b− a }

In particular we get I(τ) = (b − a) + (b − a − 1) = 2(b − a) − 1 andthereby we get - using variant (i) of computing the signum

sgn(τ) = (−1)I(τ) = (−1)2(b−a)−1 =((−1)2

)b−a(−1)−1 = −1

That is any transposition is an odd permutation! Thus, if σ = τ1 . . . τris a composition of r transpositions, then the homorphism property(iv) of the signum implies

sgn(σ) =

r∏i=1

sgn(τi) =

r∏i=1

(−1) = (−1)r

(ii) By (1.15) we know that any cycle ζ_i can be written as a composition of ℓ(i) − 1 transpositions. Hence - by (iv) combined with (iii) -

sgn(σ) = sgn(ζ_1 … ζ_r) = ∏_{i=1}^{r} sgn(ζ_i) = ∏_{i=1}^{r} (−1)^{ℓ(i)−1}

And as the domains of the ζ_i are pairwise disjoint, (1.13.(iv)) implies that they are mutually commutative. Hence by (1.5) we find that any power k ∈ N of σ is just

σ^k = ∏_{i=1}^{r} ζ_i^k

Also, as dom(ζ_i^k) ⊆ dom(ζ_i) (this is (1.13.(ii))) and the dom(ζ_i) are pairwise disjoint, we have (for any a ∈ 1 … n) σ^k(a) = a if and only if ζ_i^k(a) = a for any i ∈ 1 … r. In particular we see the equivalence

σ^k = 11 ⟺ ∀ i ∈ 1 … r : ζ_i^k = 11

As ζ_i is a cycle its order is given to be ord(ζ_i) = ℓ(i). Hence ζ_i^k = 11 if and only if ℓ(i) divides k (by division with remainder again: write k = mℓ(i) + r, then ζ_i^k = (ζ_i^{ℓ(i)})^m ζ_i^r = ζ_i^r = 11 iff r = 0). Therefore

σ^k = 11 ⟺ ∀ i ∈ 1 … r : ℓ(i) | k

Now ord(σ) is the minimal number 1 ≤ k ∈ N such that σ^k = 11.



And likewise lcm(ℓ(1), …, ℓ(r)) is the minimal number that is divided by all the ℓ(i). In particular both numbers are equal.
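The statement ord(σ) = lcm(ℓ(1), …, ℓ(r)) can be verified directly for all permutations of a small set; a Python sketch (helper names are mine):

```python
from itertools import permutations
from math import lcm

def cycle_lengths(s):
    # lengths of the disjoint cycles of s (fixed points count as length 1)
    seen, lengths = set(), []
    for start in range(len(s)):
        if start not in seen:
            length, a = 0, start
            while a not in seen:
                seen.add(a)
                a, length = s[a], length + 1
            lengths.append(length)
    return lengths

def order(s):
    # the minimal k >= 1 with s^k = identity
    identity = tuple(range(len(s)))
    power, k = s, 1
    while power != identity:
        power = tuple(s[power[i]] for i in range(len(s)))
        k += 1
    return k

assert all(order(s) == lcm(*cycle_lengths(s)) for s in permutations(range(5)))
```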

(v) By definition A_n is just the kernel of the group homomorphism sgn : S_n → Z*. Therefore the first isomorphism theorem (of groups) yields

S_n/A_n ≅ Z* : σA_n ↦ sgn(σ)

In particular these two sets have the same cardinality. Now use the theorem of Lagrange and (1.12.(i)) to compute

2 = #Z* = #(S_n/A_n) = #S_n/#A_n = n!/#A_n

□
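So #A_n = n!/2, which can be confirmed by counting even permutations directly for small n (a quick Python check, names are mine):

```python
from itertools import permutations, combinations
from math import factorial

def sgn(s):
    return (-1) ** sum(1 for i, j in combinations(range(len(s)), 2) if s[i] > s[j])

for n in range(2, 6):
    even = sum(1 for s in permutations(range(n)) if sgn(s) == 1)
    assert even == factorial(n) // 2
```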

Proof of (9.3):

(i) Clearly for any x ∈ X we have x ∼ x, since x = ex. Also if x ∼ y, say y = gx for some g ∈ G, then x = ex = (g^{-1}g)x = g^{-1}(gx) = g^{-1}y, such that we also have y ∼ x. And if also y ∼ z, say z = hy for some h ∈ G, then we also have z = hy = h(gx) = (hg)x, such that x ∼ z. Altogether ∼ is an equivalence relation on X. And for x ∈ X the equivalence class of x is the orbit Gx, since

[x] = {y ∈ X | x ∼ y} = {y ∈ X | ∃ g ∈ G : y = gx} = {gx | g ∈ G} = Gx

(ii) We will first prove that the stabilizer G_x = {g ∈ G | gx = x} is a subgroup of G. First of all e ∈ G_x, since ex = x. Also if g, h ∈ G_x then gx = x and hx = x, such that (gh)x = g(hx) = gx = x, and this means gh ∈ G_x again. Finally g ∈ G_x implies x = ex = (g^{-1}g)x = g^{-1}(gx) = g^{-1}x, such that g^{-1} ∈ G_x again. Altogether G_x is a subgroup of G. It remains to prove the well-definedness and bijectivity of

Gx → G/G_x : gx ↦ gG_x

Let us first check the well-definedness: if gx = hx then (h^{-1}g)x = h^{-1}(gx) = h^{-1}(hx) = (h^{-1}h)x = ex = x, such that h^{-1}g ∈ G_x. And this again is gG_x = hG_x by definition of G/G_x. Now the surjectivity is clear: for g ∈ G the element gx ∈ Gx is mapped to gG_x. And for the injectivity suppose we had some g, h ∈ G with gG_x = hG_x. Undoing the above computation we then have h^{-1}g ∈ G_x, such that gx = e(gx) = (hh^{-1})gx = h((h^{-1}g)x) = hx again.

(iii) By definition we have gA ⊆ A for any g ∈ G. But this impliesA = eA = (gg−1)A = g(g−1A) ⊆ gA ⊆ A such that gA = A.

(iv) Given g ∈ G and x ∈ X \ A we have to show gx ∈ X \ A. Thus suppose gx ∈ A; then x = ex = (g^{-1}g)x = g^{-1}(gx) ∈ g^{-1}A ⊆ A, in contradiction to x ∈ X \ A.



(v) By (ii) we have #Gx = #(G/G_x) and (as both G and X are finite) by the theorem of Lagrange (1.7) this implies

#Gx = [G : G_x] = #G / #G_x

Now the second orbit formula is a simple consequence: as ∼ is an equivalence relation the orbits Gx form a partition of X and hence

#X = ∑_{Gx∈X/G} #Gx = #G · ∑_{Gx∈X/G} 1/#G_x
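The orbit formula #Gx · #G_x = #G is easy to observe concretely; the following Python sketch (my own setup, not from the text) checks it for S_3 acting on {0, 1, 2} by evaluation:

```python
from itertools import permutations

# S_3 acting on X = {0, 1, 2} by evaluation: g . x = g(x)
G = list(permutations(range(3)))
for x in range(3):
    orbit = {g[x] for g in G}
    stabilizer = [g for g in G if g[x] == x]
    assert len(orbit) * len(stabilizer) == len(G)  # #Gx * #G_x = #G
```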

(vi) The basic idea of this proof is the following identity of disjoint unions involving the stabilizers G_x = {g ∈ G | gx = x} and the fixed point sets X_g = {x ∈ X | gx = x}. Passing to the unions we find

⋃_{g∈G} {g} × X_g = {(g, x) ∈ G × X | gx = x} = ⋃_{x∈X} G_x × {x}

And as these unions are disjoint (by construction) we have hence already proved the second part of Burnside's formula

∑_{g∈G} #X_g = ∑_{x∈X} #G_x

For the first part we use the partition of X into orbits Gx again, that is X = ⋃ Gx, where the union runs over all orbits Gx ∈ X/G. This can be used to split the sum over all x ∈ X into two separate sums, the first over Gx ∈ X/G, the second over y ∈ Gx (note that Gy = Gx and hence #G_y = #G_x by the orbit formula, for any y ∈ Gx)

∑_{x∈X} #G_x = ∑_{Gx∈X/G} ∑_{y∈Gx} #G_y = ∑_{Gx∈X/G} #Gx · #G_x

But by the orbit formula in (v) we have #Gx · #G_x = #G identically, such that this sum is reduced once again, leaving only

∑_{x∈X} #G_x = ∑_{Gx∈X/G} #G = #G · #(X/G)

□
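Burnside's formula ∑_g #X_g = #G · #(X/G) can be watched in action on a classic example. The following Python sketch (the whole setup is mine) counts binary necklaces with 4 beads under rotation:

```python
from itertools import product

# the cyclic group Z/4 acting on 2-colourings of 4 beads by rotation
n = 4
X = list(product([0, 1], repeat=n))
G = range(n)

def act(g, x):
    # rotate the tuple x by g positions
    return tuple(x[(i - g) % n] for i in range(n))

fixed = sum(sum(1 for x in X if act(g, x) == x) for g in G)
orbits = {frozenset(act(g, x) for g in G) for x in X}
assert fixed == len(G) * len(orbits)  # sum of #Xg  =  #G * #(X/G)
```

Here ∑_g #X_g = 16 + 2 + 4 + 2 = 24 and there are 24/4 = 6 distinct necklaces.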

Proof of (7.5):

(i) Recall the definition of the minimum and maximum of any two elements α, β ∈ A (this is truly well-defined, since ≤ is a linear order)

α ∧ β := α if α ≤ β, β if β ≤ α        α ∨ β := β if α ≤ β, α if β ≤ α

In the case that M = {µ} contains one element only, we clearly get the minimal and maximal elements µ_* = µ = µ^*. If now M is given to be M = {µ_1, …, µ_n} where n ≥ 2, then we regard

µ_* := (…(µ_1 ∧ µ_2)…) ∧ µ_n        µ^* := (…(µ_1 ∨ µ_2)…) ∨ µ_n

Then µ_* and µ^* are minimal resp. maximal elements of M, which can be seen by induction on n: if n = 2 then µ_* is minimal by construction. Thus for n ≥ 3 we let H := {µ_1, …, µ_{n−1}} ⊆ M. By induction hypothesis we have H_* = {ν_*} for ν_* = ((µ_1 ∧ µ_2)…) ∧ µ_{n−1}. Now let µ_* := ν_* ∧ µ_n; then µ_* ≤ ν_* ≤ µ_i for i < n and µ_* ≤ µ_n by construction. Hence we have µ_* ≤ µ_i for any i ∈ 1 … n, which means µ_* ∈ M_*. And the uniqueness is obvious by the anti-symmetry of ≤. This also proves the claim for the maximal element µ^*, by passing to the inverse order α ≥ β :⟺ β ≤ α.

(ii) The existence of µ_* is precisely the property of a well-ordering. The uniqueness is immediate from the anti-symmetry of ≤ again: suppose µ_1 and µ_2 both are minimal elements of M; in particular µ_1 ≤ µ_2 and µ_2 ≤ µ_1. And this implies µ_1 = µ_2, as ≤ is a (linear) order on A.

(iii) In the direction (a) ⟹ (b) we consider α < β. By the positivity of ≤ this implies α + γ ≤ β + γ. If the equality α + γ = β + γ would hold true then - as A is integral - we would find α = β, in contradiction to the assumption α < β. In the converse direction (b) ⟹ (a) we are given α ≤ β. The positivity α + γ ≤ β + γ (for any γ ∈ A) of ≤ is clear in this case. Now assume α + γ = β + γ; then we need to show α = β to see that A is integral. But as ≤ is total we have α ≤ β or β ≤ α, where we may assume α ≤ β without loss of generality. If we now had α ≠ β then α < β and hence α + γ < β + γ, in contradiction to α + γ = β + γ, which settles this case, too. Now (c) ⟹ (b) is trivial, such that there only remains the direction (b) ⟹ (c). But this is clear - assume α + γ < β + γ but α ≥ β; then by the positivity (a) we would get the contradiction α + γ ≥ β + γ.

(iv) Assume µ ≠ µ_*; then by the minimality of µ_* this would yield µ_* < µ and hence (by property (b) in point (iii) above)

µ + ν = µ_* + ν_* < µ + ν_*

Thus we see that ν ≠ ν_* and hence (by the minimality of ν_*) we get ν_* < ν again. Using (b) once more this yields a contradiction

µ + ν < µ + ν_* < µ + ν

Thus we have seen µ = µ_*, and in complete analogy we find that ν ≠ ν_* leads to another contradiction, hence ν = ν_*.

□

Proof of (2.1):

(v) As ≤ is a total order, we may assume a ≤ b without loss of generality. If a, b ∈ A_*, then b ∈ A, a ≤ b and a ∈ A_* imply a = b. Likewise if a, b ∈ A^*, then a ∈ A, a ≤ b and b ∈ A^* imply a = b.


(vi) By induction on n: if n = 1 then a_* = a_1 is trivial, and if n = 2 then a_* is minimal by construction. Thus for n ≥ 3 we let H := {a_1, …, a_{n−1}} ⊆ A. By induction hypothesis we have H_* = {h_*} for h_* = ((a_1 ∧ a_2)…) ∧ a_{n−1}. Now let a_* := h_* ∧ a_n; then a_* ≤ h_* ≤ a_i for i < n and a_* ≤ a_n by construction. Hence we have a_* ≤ a_i for any i ∈ 1 … n, which means a_* ∈ A_*. And the uniqueness has already been shown above. This also proves the like statement for A^*, by passing to the inverse ordering b ≥ a :⟺ a ≤ b.

(ix) Regard Z as a partially ordered set (Z, ⊆) under the inclusion relation (this is allowed, due to (1)). If now C ⊆ Z is a chain, then by (2) we get U := ⋃C ∈ Z. And clearly we have C ⊆ U for any C ∈ C. Thus U is an upper bound of C, and hence we may apply the lemma of Zorn to (Z, ⊆) to find a maximal element Z ∈ Z_*.

(x) Let us define the partial order B ≤ A :⟺ A ⊆ B on Z. If now C ⊆ Z is a chain, then by (2) we get U := ⋂C ∈ Z. And clearly U ⊆ C (which translates into C ≤ U) for any C ∈ C. Thus U is an upper bound of C, and hence we may apply the lemma of Zorn to find a ≤-maximal element Z. But being ≤-maximal trivially translates into being ⊆-minimal again.

□

Proof of (7.6):

• In the following we will regard arbitrary elements α = (α_i), β = (β_i) and γ = (γ_i) ∈ A := N^{⊕I}. In a first step we will show that - assuming (I, ≤) is a linearly ordered set - the lexicographic order is a positive partial order on A. The reflexivity α ≤lex α is trivial, and for the transitivity we are given α ≤lex β and β ≤lex γ and need to show α ≤lex γ. The case where two (or more) of α, β and γ are equal is trivial, and hence the assumption reads as

α_k < β_k and ∀ i < k : α_i = β_i
β_l < γ_l and ∀ i < l : β_i = γ_i

for some k, l ∈ I. We now let m := min{k, l}; then it is easy to see that ∀ i < m : α_i = β_i = γ_i and α_m < γ_m, which establishes α ≤lex γ. It remains to prove the anti-symmetry, i.e. we are given α ≤lex β and β ≤lex α and need to show α = β. Suppose we had α ≠ β; then again

α_k < β_k and ∀ i < k : α_i = β_i
β_l < α_l and ∀ i < l : β_i = α_i

In the case k ≤ l this implies α_k < β_k = α_k, and in the case l ≤ k we get β_l < α_l = β_l. In both cases this is a contradiction. Hence ≤lex is a partial order, and the positivity is easy: assume α ≤lex β, then we need to show α + γ ≤lex β + γ. If even α = β, then there is nothing to prove. Hence we may assume that there is some k ∈ I such that

α_k < β_k and ∀ i < k : α_i = β_i


But then α+ γ ≤lex β + γ is clear, as from this we immediately get

αk + γk < βk + γk and ∀ i < k : αi + γi = βi + γi

• In a second step we will show that - assuming (I, ≤) is a well-ordered set - the lexicographic order truly is a positive linear order on A. By the first step it only remains to verify that ≤lex is total, i.e. we assume that β ≤lex α is untrue and need to show α ≤lex β. By assumption we now have for any k ∈ I

β_k ≥ α_k or ∃ i < k : α_i ≠ β_i

It is clear that α ≠ β, and as (I, ≤) is well-ordered we may choose

k := min{ l ∈ I | α_l ≠ β_l }

Then for all i < k we get α_i = β_i (as k was chosen minimally) and β_k ≥ α_k (by assumption). Yet β_k = α_k is false by construction, which only leaves β_k > α_k and hence establishes α ≤lex β.

• In the third step we assume that I is finite, i.e. I = 1 … n without loss of generality. Note that I clearly is well-ordered under its standard ordering, and hence it only remains to prove the following assertion: let ∅ ≠ M ⊆ A be a non-empty subset; then there is some µ ∈ M such that for any α ∈ M we get µ ≤lex α. To do this we define

µ_1 := min{ α_1 ∈ N | α ∈ M }
µ_2 := min{ α_2 ∈ N | α ∈ M, α_1 = µ_1 }
…
µ_n := min{ α_n ∈ N | α ∈ M, α_1 = µ_1, …, α_{n−1} = µ_{n−1} }

Hereby the set {α_1 | α ∈ M} is non-empty, since M ≠ ∅ was assumed to be non-empty. And {α_2 | α ∈ M, α_1 = µ_1} is non-empty, as there was some α ∈ M with α_1 = µ_1, and so forth. Hence µ is well-defined, and it is clear that µ ∈ M, as µ = α for some α ∈ M. But the property µ ≤lex α is then clear from the construction.

• Now we will prove that - assuming (I, ≤) is a linearly ordered set - the ω-graded lexicographic order is a positive partial order as well. It is clear that ≤ω inherits the property of being a partial order from ≤lex, so it only remains to verify the positivity: let α ≤ω β; then we distinguish two cases. In the case |α|ω < |β|ω we clearly get

|α + γ|ω = |α|ω + |γ|ω < |β|ω + |γ|ω = |β + γ|ω

If on the other hand |α|ω = |β|ω then by assumption α ≤lex β, and as we have seen already this implies α + γ ≤lex β + γ. Thus in both cases we have found α + γ ≤ω β + γ.

• We now assume that (I, ≤) even is a well-ordered set and prove that ≤ω is a total order in this case. Hence we need to regard the case that β ≤ω α is untrue. Of course this implies |α|ω ≤ |β|ω, and in the case |α|ω < |β|ω we are done already. Thus we may assume |α|ω = |β|ω, but in this case the assumption reads as: ¬ β ≤lex α. As we have seen above the lexicographic order is total, and hence α <lex β, which conversely implies α <ω β.

• In the final step we assume that I is finite, i.e. I = 1 … n, and we need to show that ≤ω is a well-ordering. Thus let ∅ ≠ M ⊆ A be a non-empty subset again; then we have to verify that there is some µ ∈ M such that for any α ∈ M we get µ ≤ω α. To do this we define

m := min{ |α|ω | α ∈ M }        µ := min{ α | α ∈ M, |α|ω = m }

where the second minimum is taken under the lexicographic order. As M is non-empty, m is well-defined, and as ≤lex has been shown to be a well-ordering, µ is well-defined, too. But µ ∈ M is trivial and µ ≤ω α is clear from the construction.

□


Chapter 17

Proofs - Rings and Modules

Proof of (1.37):

• The statement 0 = −0 is clear from 0 + 0 = 0. And a0 = 0 followsfrom a0 = a(0 + 0) = a0 + a0. Likewise we get 0a = 0. And from thisan easy computation shows (−a)b = −(ab), this computation readsas ab + (−a)b = (a + (−a))b = 0b = 0 (and a(−b) = −(ab) followsanalogously). Combining these two we find (−a)(−b) = ab, sinceab = −(−ab) = −((−a)b) = (−a)(−b).

• Next we will prove the general rule of distributivity: we start by showing that (a_1 + … + a_m)b = (a_1 b) + … + (a_m b) by induction on m. The case m = 1 is clear, and in the induction step we simply compute

(a_1 + … + a_m + a_{m+1})b = ((a_1 + … + a_m) + a_{m+1})b
= (∑_{i=1}^{m} a_i)b + a_{m+1}b = (∑_{i=1}^{m} a_i b) + a_{m+1}b = ∑_{i=1}^{m+1} a_i b

Likewise it is clear that a(b_1 + … + b_n) = (ab_1) + … + (ab_n). And combining these two equations we find the general rule

(∑_{i=1}^{m} a_i)(∑_{j=1}^{n} b_j) = ∑_{j=1}^{n} (∑_{i=1}^{m} a_i) b_j = ∑_{j=1}^{n} (∑_{i=1}^{m} a_i b_j) = ∑_{i=1}^{m} ∑_{j=1}^{n} a_i b_j

• Using induction on n we will now prove the generalisation of the general law of distributivity. The case n = 1 is trivial (here J = J(1), so there is nothing to prove). Thus consider the induction step by adding J(0) to our list J(1), …, J(n) of index sets. Then clearly

∏_{i=0}^{n} ∑_{j_i∈J(i)} a(i, j_i) = (∑_{j_0∈J(0)} a(0, j_0)) ∏_{i=1}^{n} ∑_{j_i∈J(i)} a(i, j_i)

Hence we may use the induction hypothesis in the second term on the right hand side and the general law of distributivity (note that we use J = J(1) × … × J(n) again) to obtain

∏_{i=0}^{n} ∑_{j_i∈J(i)} a(i, j_i) = (∑_{j_0∈J(0)} a(0, j_0))(∑_{j∈J} ∏_{i=1}^{n} a(i, j_i))
= ∑_{j_0∈J(0)} ∑_{j∈J} a(0, j_0) ∏_{i=1}^{n} a(i, j_i)

To finish the induction step it suffices to note that the product of a(0, j_0) with ∏_{i=1}^{n} a(i, j_i) can be rewritten as ∏_{i=0}^{n} a(i, j_i) (due to the associativity of the multiplication). And the two sums over j_0 ∈ J(0) resp. over j ∈ J can be composed to a single sum over (j_0, j) ∈ J(0) × J. That is, we have obtained the claim

∏_{i=0}^{n} ∑_{j_i∈J(i)} a(i, j_i) = ∑_{(j_0,j)∈J(0)×J} ∏_{i=0}^{n} a(i, j_i)

• Thus we have arrived at the binomial rule, which will be proved by induction on n. In the case n = 0 we have (a + b)^0 = 1 = 1 · 1 = a^0 b^{0−0}. Thus we commence with the induction step

(a + b)^{n+1} = (a + b)^n (a + b)
= (∑_{k=0}^{n} \binom{n}{k} a^k b^{n−k})(a + b)
= ∑_{k=0}^{n} \binom{n}{k} a^{k+1} b^{n−k} + ∑_{k=0}^{n} \binom{n}{k} a^k b^{(n+1)−k}
= ∑_{k=1}^{n+1} \binom{n}{k−1} a^k b^{(n+1)−k} + ∑_{k=0}^{n} \binom{n}{k} a^k b^{(n+1)−k}
= \binom{n}{n} a^{n+1} b^0 + ∑_{k=1}^{n} (\binom{n}{k−1} + \binom{n}{k}) a^k b^{(n+1)−k} + \binom{n}{0} a^0 b^{n+1}
= \binom{n+1}{0} a^0 b^{n+1} + ∑_{k=1}^{n} \binom{n+1}{k} a^k b^{(n+1)−k} + \binom{n+1}{n+1} a^{n+1} b^0
= ∑_{k=0}^{n+1} \binom{n+1}{k} a^k b^{(n+1)−k}

• It remains to verify the polynomial rule, which will be done by induction on k. The case k = 1 is clear again, and the case k = 2 is the ordinary binomial rule. Thus consider the induction step

(a_1 + … + a_k + a_{k+1})^n = (a_{k+1} + (a_1 + … + a_k))^n
= ∑_{α_{k+1}=0}^{n} \binom{n}{α_{k+1}} a_{k+1}^{α_{k+1}} (a_1 + … + a_k)^{n−α_{k+1}}
= ∑_{α_{k+1}=0}^{n} \binom{n}{α_{k+1}} a_{k+1}^{α_{k+1}} ∑_{|α|=n−α_{k+1}} \binom{n−α_{k+1}}{α} a^α
= ∑_{α_{k+1}=0}^{n} ∑_{|α|=n−α_{k+1}} \binom{n}{α_{k+1}} \binom{n−α_{k+1}}{α} a^α a_{k+1}^{α_{k+1}}
= ∑_{|(α,α_{k+1})|=n} \binom{n}{(α, α_{k+1})} (a, a_{k+1})^{(α,α_{k+1})}

□


Proof of (1.43):

• The equality of sets nzd R = R \ zd R is immediately clear from the definition: the negation of the formula ∃ b ∈ R : (b ≠ 0) ∧ (ab = 0) is just ∀ b ∈ R : ab = 0 ⟹ b = 0.

• If a ∈ nil R then we choose k ∈ N minimal such that a^k = 0. Suppose k = 0; then 0 = a^0 = 1, which would mean that R = 0 is the zero-ring. Thus we have k ≥ 1 and may hence define b := a^{k−1} ≠ 0. Then ab = a^k = 0 and hence a ∈ zd R, which proves nil R ⊆ zd R. And if ab = 0, then a ∈ R* would imply b = a^{-1}0 = 0. Thus if a ∈ zd R then a ∉ R*, which also proves zd R ⊆ R \ R*.

• It is clear that 1 ∈ nzd R, as 1b = b. Now suppose a and b ∈ nzd R and consider any c ∈ R. If (ab)c = 0 then (because of the associativity) a(bc) = 0, and as a ∈ nzd R this implies bc = 0. As b ∈ nzd R this now implies c = 0, which means ab ∈ nzd R.

• It is clear that 1 ∈ R* - just choose b = 1. Now the associativity and the existence of a neutral element e = 1 are immediately inherited from (the multiplication of) R. Thus consider a ∈ R*; by definition there is some b ∈ R such that ab = 1 = ba. Hence we also have b ∈ R*, and thereby b is the inverse element of a in R*.

• "⟸" First suppose 0 = 1 ∈ R; then we would have R = 0 and hence R \ {0} = ∅ and R* = R ≠ ∅, in contradiction to the assumption R* = R \ {0}. Thus we get 0 ≠ 1. If now 0 ≠ a ∈ R then a ∈ R* by assumption, and hence there is some b ∈ R such that ab = 1 = ba. But this also is property (F) of skew-fields. "⟹" If 0 ≠ a ∈ R then - as R is a skew-field - there is some b ∈ R such that ab = 1 = ba. But this already is a ∈ R* and hence R \ {0} ⊆ R*. Conversely consider a ∈ R*, that is ab = 1 = ba for some b ∈ R. Suppose a = 0; then 1 = ab = 0 · b = 0 and hence 0 = 1, a contradiction. Thus a ≠ 0, and this also proves R* ⊆ R \ {0}.

• Consider a and b ∈ nil R such that a^k = 0 and b^l = 0. Then we will prove a + b ∈ nil R by demonstrating (a + b)^{k+l} = 0, via

(a + b)^{k+l} = ∑_{i=0}^{k+l} \binom{k+l}{i} a^i b^{(k+l)−i}
= ∑_{i=0}^{k} \binom{k+l}{i} a^i b^{l+(k−i)} + ∑_{j=1}^{l} \binom{k+l}{k+j} a^{k+j} b^{l−j}
= b^l ∑_{i=0}^{k} \binom{k+l}{i} a^i b^{k−i} + a^k ∑_{j=1}^{l} \binom{k+l}{k+j} a^j b^{l−j}
= 0 + 0 = 0

And if c ∈ R is an arbitrary element, then also (ca)^k = c^k a^k = c^k 0 = 0, such that ca ∈ nil R. As a has been chosen arbitrarily, this implies R (nil R) ⊆ nil R and hence nil R ⊴ R is an ideal.

• Consider x and y ∈ ann(R, b). Then it is clear that (x + y)b = xb + yb = 0 + 0 = 0 and hence x + y ∈ ann(R, b). And if a ∈ R is arbitrary, then (ax)b = a(xb) = a0 = 0, such that ax ∈ ann(R, b). In particular −x = (−1)x ∈ ann(R, b), and hence ann(R, b) is a submodule of R.

• Let u ∈ R* and a ∈ R with a^n = 0; then we can verify the formula for the inverse of u + a in a straightforward computation (see below). In particular we find u + a ∈ R*. But as a has been an arbitrary nilpotent, this truly establishes u + nil R ⊆ R*.

(a + u) ∑_{k=0}^{n−1} (−1)^k a^k u^{−k−1}
= ∑_{k=0}^{n−1} (−1)^k a^{k+1} u^{−(k+1)} + ∑_{k=0}^{n−1} (−1)^k a^k u^{−k}
= ∑_{k=1}^{n} (−1)^{k−1} a^k u^{−k} + ∑_{k=0}^{n−1} (−1)^k a^k u^{−k}
= (−1)^{n−1} a^n u^{−n} + (−1)^0 a^0 u^{−0} = 0 + 1 = 1

□
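The geometric-series inverse of u + a can be tried out in a concrete ring with nilpotents; the following Python sketch (the choice of ring and values is mine) works in Z/16, where u = 1 is a unit and a = 2 is nilpotent with a^4 = 0:

```python
# u + a with u a unit and a nilpotent, in the ring Z/16:
# u = 1 and a = 2 (2^4 = 0 mod 16), so the sum has n = 4 terms
m, u, a, n = 16, 1, 2, 4
u_inv = pow(u, -1, m)
inv = sum((-1) ** k * a**k * u_inv ** (k + 1) for k in range(n)) % m
assert (u + a) * inv % m == 1
```

Here inv = 1 − 2 + 4 − 8 ≡ 11 (mod 16), and indeed 3 · 11 = 33 ≡ 1 (mod 16).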

Proof of (1.41):
We now prove the equivalencies for integral rings: for (a) ⟹ (b') we are given a, b and c ∈ R with a ≠ 0 and ba = ca. Then we get (c − b)a = 0, which is c − b ∈ zd R. By assumption (a) this implies c − b = 0 and hence b = c. Now the implication (b') ⟹ (c') is trivial, just let c = 1. And for the final step (c') ⟹ (a) we consider 0 ≠ b ∈ R such that ab = 0. Then (a + 1)b = ab + b = b, such that by assumption (c') we get a + 1 = 1 and hence a = 0. Thereby we have obtained zd R ⊆ {0}, and the converse inclusion is clear.

Next we consider (a) ⟹ (b); again we are given a, b and c ∈ R with a ≠ 0 and ab = ac. This yields a(c − b) = 0, and as a ≠ 0 we have a ∉ zd R, such that a ∈ nzd R. Hence c − b = 0, which is b = c. The implication (b) ⟹ (c) is clear again, just let c = 1. And for (c) ⟹ (a) we consider 0 ≠ a ∈ R such that ab = 0. Then a(b + 1) = ab + a = a. By assumption (c) this implies b + 1 = 1 and hence b = 0. Therefore R \ {0} ⊆ nzd R, which is R \ {0} = nzd R and hence zd R = {0}.

□

Proof of (1.40):
Let us regard the polynomial ring R = Z[x, y] in two variables x and y. As R is commutative it is clear that (x + y)^n = (x + y)^k (x + y)^{n−k}. By the binomial rule (1.37) we may evaluate these terms to

∑_{r=0}^{n} \binom{n}{r} x^r y^{n−r} = (∑_{i=0}^{k} \binom{k}{i} x^i y^{k−i})(∑_{j=0}^{n−k} \binom{n−k}{j} x^j y^{n−k−j})
= ∑_{r=0}^{n} ∑_{i+j=r} \binom{k}{i} \binom{n−k}{j} x^{i+j} y^{(k−i)+(n−k−j)}
= ∑_{r=0}^{n} ∑_{i+j=r} \binom{k}{i} \binom{n−k}{j} x^r y^{n−r}


where we also used the general rule of distributivity. Now we may compare coefficients (which is allowed by construction of R) to find our claim (as j = r − i)

\binom{n}{r} = ∑_{i+j=r} \binom{k}{i} \binom{n−k}{j} = ∑_{i=0}^{r} \binom{k}{i} \binom{n−k}{r−i}

□
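This Vandermonde convolution is easy to check numerically (a Python sketch, the ranges are mine; `math.comb` returns 0 for out-of-range arguments, matching the empty summands):

```python
from math import comb

# Vandermonde convolution: C(n, r) = sum_i C(k, i) * C(n-k, r-i)
for n in range(12):
    for k in range(n + 1):
        for r in range(n + 1):
            assert comb(n, r) == sum(comb(k, i) * comb(n - k, r - i) for i in range(r + 1))
```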

Proof of (4.7):

(i) It is clear that matm,n(R) is an R-module, as a matrix is nothing but a peculiar way of arranging an mn-tuple A ∈ R^{mn}. If this doesn't convince you, try the following reasoning: M := R^m is an R-module under the pointwise operations. Hence matm,n(R) = M^n is an R-module under the pointwise operations, too. But the pointwise operations are precisely what we have taken for the addition and scalar multiplication of matrices. And the fact that E is an R-basis follows immediately from (3.39.(vi)) once we notice that any matrix A = (a_{i,j}) can be written uniquely as

A = ∑_{i=1}^{m} ∑_{j=1}^{n} a_{i,j} E_{i,j}

(ii) We will first prove the associativity of the matrix multiplication. To do this fix any indices i ∈ 1 … m and l ∈ 1 … q and compute

ent_{i,l}((AB)C) = ∑_{k=1}^{p} ent_{i,k}(AB) ent_{k,l}(C)
= ∑_{k=1}^{p} ∑_{j=1}^{n} ent_{i,j}(A) ent_{j,k}(B) ent_{k,l}(C)
= ∑_{j=1}^{n} ent_{i,j}(A) ∑_{k=1}^{p} ent_{j,k}(B) ent_{k,l}(C)
= ∑_{j=1}^{n} ent_{i,j}(A) ent_{j,l}(BC)
= ent_{i,l}(A(BC))

The distributivity can be proved by a direct computation as well. In this case we prefer to employ the scalar product, however:

ent_{i,k}(A(B + C)) = ⟨row_i(A), col_k(B + C)⟩
= ⟨row_i(A), col_k(B) + col_k(C)⟩
= ⟨row_i(A), col_k(B)⟩ + ⟨row_i(A), col_k(C)⟩
= ent_{i,k}(AB) + ent_{i,k}(AC)
= ent_{i,k}(AB + AC)

ent_{i,k}((A + B)C) = ⟨row_i(A + B), col_k(C)⟩
= ⟨row_i(A) + row_i(B), col_k(C)⟩
= ⟨row_i(A), col_k(C)⟩ + ⟨row_i(B), col_k(C)⟩
= ent_{i,k}(AC) + ent_{i,k}(BC)
= ent_{i,k}(AC + BC)

(iii) As matn(R) is an R-module by (i) and the multiplication is associative and distributive by (ii), we already know that matn(R) is an R-semialgebra. It remains to prove that 11 truly is a unit element in matn(R). To see this consider any A ∈ matn(R) and compute

ent_{i,k}(11 A) = ∑_{j=1}^{n} ent_{i,j}(11) ent_{j,k}(A) = ∑_{j=1}^{n} δ_{i,j} ent_{j,k}(A) = ent_{i,k}(A)
ent_{i,k}(A 11) = ∑_{j=1}^{n} ent_{i,j}(A) ent_{j,k}(11) = ∑_{j=1}^{n} ent_{i,j}(A) δ_{j,k} = ent_{i,k}(A)

(iv) We will first prove cen(R) 11 ⊆ cen(matn(R)); that is, we consider any a ∈ cen(R) and B = (b_{i,j}) ∈ matn(R). Then we have to check that (a 11)B = B(a 11). But as a commutes with all other elements of R this is quite easy to see:

(a 11)B = a(11 B) = aB = (a b_{i,j}) = (b_{i,j} a) = B(a 11)

Conversely consider any A = (a_{i,j}) ∈ cen(matn(R)); that is, A is a matrix that commutes with all other matrices. In particular we have A E_{r,s} = E_{r,s} A for any r, s ∈ 1 … n. Now evaluate the (i, k)-th entry

ent_{i,k}(A E_{r,s}) = ∑_{j=1}^{n} a_{i,j} δ_{j,r} δ_{k,s} = a_{i,r} δ_{k,s}
ent_{i,k}(E_{r,s} A) = ∑_{j=1}^{n} δ_{i,r} δ_{j,s} a_{j,k} = δ_{i,r} a_{s,k}

That is, for any i, k, r and s ∈ 1 … n we have a_{i,r} δ_{k,s} = δ_{i,r} a_{s,k}. Let us choose s := k; then this equality specializes to

a_{i,r} = δ_{i,r} a_{k,k}

In particular - by taking r := i - we find a_{i,i} = a_{k,k} for any indices i, k. And if r ≠ i, then we have a_{i,r} = 0. That is, A is a diagonal matrix in which all diagonal elements are equal. In short: A is of the form A = a 11 for some a ∈ R. Now take B := b 11; then clearly AB = ab 11 and BA = ba 11. But as AB = BA and b ∈ R is arbitrary, this implies a ∈ cen(R). Hence we also have cen(matn(R)) ⊆ cen(R) 11.

(v) It is clear that the transposition A ↦ A* is bijective - the inverse map is the transposition matn,m(R) → matm,n(R) : B ↦ B* again. And the R-linearity of the transposition is straightforward again:

ent_{j,i}((aA + B)*) = ent_{i,j}(aA + B) = a ent_{i,j}(A) + ent_{i,j}(B)
= a ent_{j,i}(A*) + ent_{j,i}(B*) = ent_{j,i}(aA* + B*)

(vi) Recall that R is assumed to be commutative here. Let us compare all the entries of the matrices (AB)* and B*A*, for i ∈ 1 … l and k ∈ 1 … n. Then by definition of the matrix multiplication we find that all entries are equal; just compute

ent_{k,i}(B*A*) = ⟨row_k(B*), col_i(A*)⟩ = ⟨col_k(B), row_i(A)⟩
= ⟨row_i(A), col_k(B)⟩ = ent_{i,k}(AB) = ent_{k,i}((AB)*)

(vii) Let us first assume that A is invertible; then an easy computation yields 11_n = 11_n* = (AA^{-1})* = (A^{-1})* A*, where the latter equality is due to (vi). Likewise 11_n = (A^{-1}A)* = A* (A^{-1})*. Hence we have seen that in this case A* is invertible, too, and that the inverse is given to be (A^{-1})*. That is, we have proved the equality

(A*)^{-1} = (A^{-1})*

If conversely A* is invertible, then we have just seen that (A*)* = A is invertible. So this already establishes the equivalence of the invertibilities of A and A*.

(viii) Let us denote x = (x_1, …, x_n) ∈ R^n and y = (y_1, …, y_m) ∈ R^m; then it is straightforward to compute

⟨Ax, y⟩ = ∑_{i=1}^{m} [Ax]_i y_i = ∑_{i=1}^{m} ∑_{j=1}^{n} ent_{i,j}(A) x_j y_i
= ∑_{j=1}^{n} x_j ∑_{i=1}^{m} ent_{i,j}(A) y_i = ∑_{j=1}^{n} x_j ∑_{i=1}^{m} ent_{j,i}(A*) y_i
= ∑_{j=1}^{n} x_j [A*y]_j = ⟨x, A*y⟩


□

Proof of (4.9):
The definition of gln(R) is precisely that of the multiplicative group of matn(R), hence there is nothing to prove.

It is clear that the unit element 11_n of matn(R) is contained in trin(R). Also, if A and B ∈ trin(R) are any two upper triangular matrices, then for any a ∈ R and any i, j ∈ 1 … n we have

ent_{i,j}(aA + B) = a ent_{i,j}(A) + ent_{i,j}(B)

In particular, if i < j we see that ent_{i,j}(aA + B) = 0, such that aA + B is an upper triangular matrix again. By taking a = 1 we see that A + B ∈ trin(R), and by taking a = −1 and B = 0 we also have −A ∈ trin(R). That is, we have proved that (trin(R), +) is a subgroup of (matn(R), +) - in fact we have shown that trin(R) is an R-submodule of matn(R).

It remains to prove that trin(R) is closed under the multiplication of matrices as well. Hence consider any two upper triangular matrices A = (a_{i,j}) and B = (b_{i,j}) ∈ trin(R) again. Then for any i, k ∈ 1 … n we have

ent_{i,k}(AB) = ∑_{j=1}^{n} a_{i,j} b_{j,k}

But since A ∈ trin(R) we have a_{i,j} = 0 for any j > i. And as B ∈ trin(R) as well, we also have b_{j,k} = 0 for any j < k. Hence the only summands which might be non-zero are those from j = k to j = i. That is, the sum breaks down to

ent_{i,k}(AB) = ∑_{j=k}^{i} a_{i,j} b_{j,k}

Hence if i < k there are no non-zero summands left at all and the sum is 0. But this precisely means that AB ∈ trin(R) is a triangular matrix again, which had to be shown.

□
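The closure argument can be observed on a concrete example; the following Python sketch (matrix values and helper are mine) uses the entry convention of the proof, ent_{i,j} = 0 whenever i < j:

```python
def matmul(A, B):
    # plain matrix multiplication over Z
    return [[sum(A[i][j] * B[j][k] for j in range(len(B))) for k in range(len(B[0]))]
            for i in range(len(A))]

# triangular in the convention of the proof: ent_{i,j} = 0 whenever i < j
A = [[1, 0, 0], [4, 2, 0], [5, 6, 3]]
B = [[7, 0, 0], [1, 8, 0], [2, 9, 4]]
C = matmul(A, B)
assert all(C[i][j] == 0 for i in range(3) for j in range(3) if i < j)
```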

Proof of (4.12):

(i) Let us denote A = (a_{i,j}) ∈ matm,n(R) and B = (b_{i,j}) ∈ matn,m(R). By definition we have E_{r,s} = (δ_{i,r} δ_{j,s}); hence - for any i, k ∈ 1 … n - we may simply compute the matrix multiplication

ent_{i,k}(A E_{r,s}) = ∑_{j=1}^{n} a_{i,j} δ_{j,r} δ_{k,s} = a_{i,r} δ_{k,s}

That is, the (i, k)-th entry of A E_{r,s} is zero if k ≠ s, and if k = s, then it simply is a_{i,r}. For the k-th column this implies

col_k(A E_{r,s}) = δ_{k,s} (a_{1,r}, …, a_{m,r}) = δ_{k,s} col_r(A)


Likewise we compute the (i, k)-th entry of E_{r,s} B and use this to evaluate the i-th row of this matrix. This time we get

ent_{i,k}(E_{r,s} B) = ∑_{j=1}^{n} δ_{i,r} δ_{j,s} b_{j,k} = δ_{i,r} b_{s,k}
row_i(E_{r,s} B) = δ_{i,r} (b_{s,1}, …, b_{s,m}) = δ_{i,r} row_s(B)

(ii) This identity is a straightforward computation - just be aware that δ_{i,j} commutes with any other scalar of R and that the sum ∑_j δ_{i,j} x_j breaks down to x_i. Then we pick up any p, s ∈ 1 … n and compute

ent_{p,s}(E_{i,j} A E_{k,l}) = ∑_{r=1}^{n} ent_{p,r}(E_{i,j} A) ent_{r,s}(E_{k,l})
= ∑_{r=1}^{n} ∑_{q=1}^{n} ent_{p,q}(E_{i,j}) a_{q,r} ent_{r,s}(E_{k,l})
= ∑_{r=1}^{n} ∑_{q=1}^{n} δ_{i,p} δ_{j,q} a_{q,r} δ_{k,r} δ_{l,s}
= δ_{i,p} a_{j,k} δ_{l,s} = a_{j,k} ent_{p,s}(E_{i,l})

(iii) This is an ultimate consequence of (i): clearly T_{r,s}(a)B = (11_n + aE_{r,s})B = B + aE_{r,s}B. Hence the i-th row of T_{r,s}(a)B is given to be

row_i(T_{r,s}(a)B) = row_i(B) + a row_i(E_{r,s} B)
= row_i(B) + δ_{i,r} a row_s(B)

Likewise we have A T_{r,s}(a) = A(11_n + aE_{r,s}) = A + (Aa)E_{r,s}. And hence the k-th column of A T_{r,s}(a) is given to be

col_k(A T_{r,s}(a)) = col_k(A) + col_k((Aa)E_{r,s})
= col_k(A) + δ_{k,s} col_r(Aa)
= col_k(A) + δ_{k,s} col_r(A) a

(iv) Let us denote x_j = (x_{1,j}, …, x_{m,j}) ∈ R^m; that is, x_j is the j-th column of the matrix X = (x_{i,j}) ∈ matm,n(R). By definition the diagonal matrix D := dia(a_1, …, a_n) has δ_{i,j} a_j as its (i, j)-th entry. We begin by proving the second equality by simple matrix multiplication

ent_{i,k}(XD) = ∑_{j=1}^{n} x_{i,j} δ_{j,k} a_k = x_{i,k} a_k

Therefore the k-th column of XD is col_k(XD) = (x_{1,k} a_k, …, x_{m,k} a_k), which is x_k a_k as we had claimed. In the first equality the x_j are used as rows; that is, we actually deal with the transpose X* of X. Hence the correct ansatz is

ent_{i,k}(DX*) = ∑_{j=1}^{n} δ_{i,j} a_j x_{k,j} = a_i x_{k,i}


Therefore the i-th row of DX* is row_i(DX*) = (a_i x_{1,i}, …, a_i x_{m,i}), which is a_i x_i. Hence we have also shown the first equality.

(v) These equations are straightforward computations using the definition Nn = (δi+1,j). Let us denote xj = (x1,j , . . . , xm,j) ∈ Rm, then X = (xi,j) is the (m × n)-matrix composed of the columns xj. Hence the transposed matrix X∗ has the rows xj. Thereby for any i, k ∈ 1 . . . n

enti,k(XNn) = ∑j xi,j δj+1,k = xi,k−1 if k > 1 and 0 if k = 1
enti,k(NnX∗) = ∑j δi+1,j xk,j = xk,i+1 if i < n and 0 if i = n

Once the entries are established, it is easy to evaluate the rows and columns of these matrix products

colk(XNn) = (x1,k−1, . . . , xm,k−1) = xk−1 if k > 1 and col1(XNn) = 0
rowi(NnX∗) = (x1,i+1, . . . , xm,i+1) = xi+1 if i < n and rown(NnX∗) = 0
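So X Nn shifts the columns of X one position to the right, filling the first column with zeros. A brief check over Z (ad-hoc helper, 0-based indices, illustration only):

```python
# Check that right-multiplication by the shift matrix N_n = (delta_{i+1,j})
# moves the columns of X one step to the right (first column becomes 0).
def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

n = 4
N = [[1 if c == r + 1 else 0 for c in range(n)] for r in range(n)]
X = [[1, 2, 3, 4], [5, 6, 7, 8]]

XN = matmul(X, N)
for r in range(2):
    assert XN[r][0] == 0                      # first column is zero
    for c in range(1, n):
        assert XN[r][c] == X[r][c - 1]        # remaining columns shifted
print("column shift by N_n verified")
```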

(vi) Recall the definition of the permutation matrix Pσ = (δσ(i),j). Let us denote xj = (x1,j , . . . , xm,j) ∈ Rm, then X = (xi,j) is the (m × n)-matrix composed of the columns xj and the transposed matrix X∗ has the rows xj. Thereby for any i, k ∈ 1 . . . n

enti,k(PσX∗) = ∑j δσ(i),j xk,j = xk,σ(i)

Hence the i-th row of PσX∗ is (x1,σ(i), . . . , xm,σ(i)), which is nothing but xσ(i). Now we supposed that σ is bijective, that is the equation σ(j) = k has the unique solution j = σ−1(k). Then we get

enti,k(XPσ) = ∑j xi,j δσ(j),k = xi,σ−1(k)

Thereby the k-th column of XPσ is (x1,σ−1(k), . . . , xm,σ−1(k)), which is nothing but xσ−1(k). Altogether we have proved the equalities given.
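The two formulas say that Pσ permutes rows from the left and columns (by σ−1) from the right. A numerical check over Z with 0-based indices (sigma, inv and matmul are ad-hoc names for this illustration):

```python
# Check the action of P_sigma = (delta_{sigma(i),j}): row i of P*M is
# row sigma(i) of M, and column k of M*P is column sigma^{-1}(k) of M.
def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

sigma = [2, 0, 3, 1]                      # a permutation of 0..3
inv = [sigma.index(k) for k in range(4)]  # its inverse permutation
P = [[1 if c == sigma[r] else 0 for c in range(4)] for r in range(4)]
M = [[10 * r + c for c in range(4)] for r in range(4)]

PM = matmul(P, M)
assert all(PM[i] == M[sigma[i]] for i in range(4))

MP = matmul(M, P)
assert all(MP[r][k] == M[r][inv[k]] for r in range(4) for k in range(4))
print("permutation matrix action verified")
```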

2

Proof of (4.13):

(i) It is clear that the map given is an R-module homomorphism, as the operations are pointwise in both cases. The bijectivity is trivial as well. As also (1, . . . , 1) 7→ dia(1, . . . , 1) = 11 n, it only remains to prove that the map also transfers the respective multiplications. Hence consider

316 17 Proofs - Rings and Modules


any a1, . . . , an and b1, . . . , bn ∈ R. Using (4.12.(ii)) we find

dia(a1, . . . , an) dia(b1, . . . , bn) = (∑i ai Ei,i)(∑j bj Ej,j)
 = ∑i ∑j ai bj Ei,i Ej,j
 = ∑i ∑j ai bj δi,j Ei,j
 = ∑i ai bi Ei,i = dia(a1 b1, . . . , an bn)

That is we have shown that the map given truly is an isomorphism of R-algebras. Also dia(a1, . . . , an) dia(b1, . . . , bn) = dia(a1 b1, . . . , an bn) = 11 n = dia(1, . . . , 1) is satisfied if and only if ai bi = 1 for any i ∈ 1 . . . n. Likewise bi ai = 1 for any i ∈ 1 . . . n. But this is equivalent to ai ∈ R∗ being a unit in R with inverse bi = ai−1. Hence dia(a1, . . . , an) is invertible if and only if ai ∈ R∗ for any i ∈ 1 . . . n.

(ii) This is just a special case of (i) - the i-th entry is fixed to be a andthe other n− 1 entries are 1, which always is a unit 1 ∈ R∗ of R.

(iii) Using (4.12.(ii)) again, the multiplication property of the matrices Ti,j(a) is an easy computation

Ti,j(a) Ti,j(b) = (11 n + a Ei,j)(11 n + b Ei,j)
 = 11 n + (a + b) Ei,j + ab Ei,j²
 = 11 n + (a + b) Ei,j
 = Ti,j(a + b)

Hereby Ei,j² = 0, as we have assumed i ≠ j to be distinct. Now, as Ti,j(0) = 11 n, we see that Ti,j(a) Ti,j(−a) = 11 n and of course also Ti,j(−a) Ti,j(a) = 11 n. That is Ti,j(a) is invertible and the inverse is given to be Ti,j(−a).
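A quick numerical illustration of Ti,j(a) Ti,j(b) = Ti,j(a + b) and of the inverse Ti,j(−a), over Z with 0-based indices (the helpers matmul and T are ad-hoc names):

```python
# Check T_ij(a) * T_ij(b) = T_ij(a+b) and T_ij(a)^{-1} = T_ij(-a)
# for T_ij(a) = identity + a*E_ij with i != j (over Z).
def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

def T(i, j, a, n):
    # identity matrix plus the scalar a in position (i, j), assuming i != j
    M = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    M[i][j] = a
    return M

n, i, j = 3, 0, 2
I = T(i, j, 0, n)
assert matmul(T(i, j, 5, n), T(i, j, 7, n)) == T(i, j, 12, n)
assert matmul(T(i, j, 5, n), T(i, j, -5, n)) == I
assert matmul(T(i, j, -5, n), T(i, j, 5, n)) == I
print("T_ij(a) T_ij(b) = T_ij(a+b) and invertibility verified")
```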

(iv) Let E = {e1, . . . , en} be the Euclidean basis of Rn, then Pτ is the matrix with the rows eτ(1), . . . , eτ(n). By the multiplication formula (4.12.(vi)) for permutation matrices, multiplying by Pσ from the left permutes these rows, such that Pσ Pτ is the matrix with the rows eστ(1), . . . , eστ(n), that is

Pσ Pτ = Pστ

Again it is clear that truly Pid = 11 n. Hence if σ is bijective, then Pσ Pσ−1 = Pid = 11 n and analogously Pσ−1 Pσ = Pid = 11 n, such that Pσ is invertible and the inverse is given to be Pσ−1. Conversely suppose Pσ Pτ = 11 n and Pτ Pσ = 11 n. By comparing coefficients we find that στ = id and τσ = id, that is τ is an inverse function of σ and in particular σ is bijective.

2

Proof of (1.47):

• (? = semi-ring) as any Pi is a sub-semiring we have 0 ∈ Pi and hence0 ∈

⋂i Pi again. Now consider a, b ∈

⋂i Pi that is a, b ∈ Pi for any

i ∈ I. As any Pi is a sub-semiring this implies that a + b, ab and−a ∈ Pi (for any i ∈ I). And thereby a+ b, ab and −a ∈

⋂i Pi again.

• (? = ring) in complete analogy we have 1 ∈ Pi (for any i ∈ I, as anyPi has been assumed to be a subring. Hence we also get 1 ∈

⋂i Pi.

• (? = (skew)field) consider 0 6= a ∈⋂i Pi. That is a ∈ Pi for any i ∈ I

and as any Pi is a subfield we get a−1 ∈ Pi such that a−1 ∈⋂i Pi.

• (? = (left)ideal) consider any b ∈ R and a ∈⋂i Pi - that is a ∈ Pi

for any i ∈ I. As Pi is a left-ideal we find bai ∈ Pi again and henceba ∈

⋂i Pi. In the case of ideals Pi we analogously find ab ∈

⋂i Pi.

• Note that thereby the generation of ? is well-defined - the set of all ?scontaining X is non-empty, as R itself is a ? (e.g. R is an ideal of R).And by the lemma we have just proved, the intersection is a ? again.
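For the ideal case this lemma can be made concrete in R = Z, where every ideal has the form nZ and the intersection is aZ ∩ bZ = lcm(a, b)Z. A finite-window check (the helper ideal is an ad-hoc name; the window bound is an assumption of this illustration, not part of the text):

```python
# In Z, the intersection of the ideals aZ and bZ is lcm(a,b)Z; we check
# this, plus the closure properties, on the finite window |x| <= 200.
from math import gcd

def ideal(n, bound=200):
    return {x for x in range(-bound, bound + 1) if x % n == 0}

a, b = 4, 6
lcm = a * b // gcd(a, b)                 # lcm(4, 6) = 12
I = ideal(a) & ideal(b)
assert I == ideal(lcm)

# closure properties of the intersection: 0, sums, negatives
assert 0 in I
assert 12 in I and -12 in I and 12 + 24 in I
print("4Z ∩ 6Z = 12Z verified on a finite window")
```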

2

Proof of (3.9):

• First suppose M is an R-module and Pi ≤m M are R-submodules and let P := ⋂i Pi ⊆ M. In particular we get 0 ∈ Pi for any i ∈ I and hence 0 ∈ P. And if we are given any x, y ∈ P this means x, y ∈ Pi for any i ∈ I. And as any Pi ≤m M is an R-submodule this implies x + y ∈ Pi, which is x + y ∈ P again, as i has been arbitrary. Likewise consider any a ∈ R, then ax ∈ Pi again and hence ax ∈ P. Altogether P ≤m M is an R-submodule of M.

• Now suppose A is an R-semi-algebra and Pi ≤b A are R-sub-semi-algebras and let P := ⋂i Pi ⊆ A again. In particular Pi ≤m A and, as we have already seen, this implies P ≤m A. Thus consider any f, g ∈ P, that is f, g ∈ Pi for any i ∈ I. As any Pi is an R-sub-semi-algebra we get fg ∈ Pi again and hence fg ∈ P, as i has been arbitrary. Thus P is an R-sub-semi-algebra again.

• Next suppose A is an R-algebra with unit element 1, Pi ≤a A are R-subalgebras and let P := ⋂i Pi once more. Then we have just seen that P ≤b A again. But as Pi ≤a A we have 1 ∈ Pi again. And as this is true for any i ∈ I this implies 1 ∈ P, such that P ≤a A.

• Finally consider the R-semi-algebra A and the R-algebra-ideals ai of A and let a := ⋂i ai ⊆ A. As our first claim we have already shown a ≤m A, thus consider any f ∈ a and any g ∈ A. Then we have f ∈ ai for any i ∈ I, and as ai is an R-algebra-ideal this implies fg and gf ∈ ai again. And as i is arbitrary this finally is fg and gf ∈ a, such that a is an R-algebra-ideal of A again.


2

Proof of (1.50):

• Let us denote the set a := {∑i ai xi | ai ∈ R, xi ∈ X} (finite sums), of which we claim a = 〈X 〉m. First of all it is clear that X ⊆ a, as x = 1x ∈ a for any x ∈ X. Further it is clear that a ≤m R is a left-ideal of R: choose any x ∈ X, then 0 = 0x ∈ a, and if b ∈ R, x = a1 x1 + · · · + am xm and y = b1 y1 + · · · + bn yn ∈ a, then bx = (b a1)x1 + · · · + (b am)xm ∈ a and x + y = a1 x1 + · · · + am xm + b1 y1 + · · · + bn yn ∈ a. Thus by definition of 〈X 〉m we get 〈X 〉m ⊆ a. Now suppose b ≤m R is any left-ideal of R containing X ⊆ b. Then for any ai ∈ R and any xi ∈ X ⊆ b we get ai xi ∈ b and hence ∑i ai xi ∈ b, too. That is a ⊆ b, and as b has been arbitrary this means a ⊆ 〈X 〉m, by the definition of 〈X 〉m.

• Denote the set P := {∑i ∏j xi,j | xi,j ∈ ±X} of which we claim P = 〈X 〉s again. Trivially we have X ⊆ P and further it is clear that P ≤s R is a sub-semi-ring of R: choose any x ∈ X, then 0 = x + (−x) ∈ P, and if a = ∑i ∏j xi,j and b = ∑k ∏l yk,l, then −a = ∑i (−xi,1) ∏j≥2 xi,j ∈ P, a + b = ∑i ∏j xi,j + ∑k ∏l yk,l ∈ P and by general distributivity also ab = ∑i ∑k (∏j xi,j)(∏l yk,l) ∈ P. Thus by definition of 〈X 〉s we get 〈X 〉s ⊆ P. Now suppose Q ≤s R is any sub-semi-ring of R containing X. Then for any x ∈ X ⊆ Q we also get −x ∈ Q and hence ±X ⊆ Q. Now consider any xi,j ∈ ±X ⊆ Q, then ∏j xi,j ∈ Q and hence ∑i ∏j xi,j ∈ Q. That is P ⊆ Q, and as Q has been arbitrary, this means P ⊆ 〈X 〉s, by definition of 〈X 〉s.

• By construction P := 〈X ∪ {1} 〉s is a sub-semi-ring containing 1, in other words a subring. And as X ⊆ P this implies 〈X 〉r ⊆ P. And as any subring Q ≤r R contains 1, we get X ⊆ Q =⇒ X ∪ {1} ⊆ Q =⇒ P ⊆ Q. This also proves P ⊆ 〈X 〉r, as Q has been arbitrary.

• Now suppose R is a field, and let P := {a b−1 | a, b ∈ 〈X 〉r, b ≠ 0}. First of all 0 = 0 · 1−1 and 1 = 1 · 1−1 ∈ P. And if a b−1 and c d−1 ∈ P, then we get (a b−1) + (c d−1) = (ad + bc)(bd)−1 ∈ P and (a b−1)(c d−1) = (ac)(bd)−1 ∈ P. And if a b−1 ≠ 0 then we in particular have a ≠ 0 and hence (a b−1)−1 = b a−1 ∈ P. Clearly X ⊆ P, as even X ⊆ 〈X 〉r ⊆ P. Altogether P is a field containing X and hence 〈X 〉f ⊆ P. And if conversely Q ⊆ R is a field containing X, then 〈X 〉r ⊆ Q. Thus if a, b ∈ 〈X 〉r ⊆ Q with b ≠ 0, then b−1 ∈ Q and hence a b−1 ∈ Q, as Q is a field. But this proves P ⊆ Q and thus P ⊆ 〈X 〉f.
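In R = Z the description 〈X 〉m = {∑i ai xi} of the generated left-ideal turns into a familiar fact: the ideal generated by X = {a, b} is gcd(a, b)Z (Bezout). A brute-force check over a finite window of coefficients (illustration only; the window bound is an assumption):

```python
# In Z, the left-ideal generated by {a, b}, i.e. all sums r*a + s*b,
# equals gcd(a,b)*Z; we check both inclusions on a finite window.
from math import gcd

a, b = 12, 42
g = gcd(a, b)                                    # gcd(12, 42) = 6
sums = {r * a + s * b for r in range(-20, 21) for s in range(-20, 21)}

assert all(x % g == 0 for x in sums)             # every sum lies in gZ
assert g in sums and -g in sums                  # Bezout: g itself is a sum
print("<{12, 42}> = 6Z verified on a finite window")
```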

2

Proof of (3.11):

• Let us denote the set P := {∑i ai xi | ai ∈ R, xi ∈ X} of which we claim P = 〈X 〉m. First of all it is clear that X ⊆ P, as x = 1x ∈ P for any x ∈ X. Further it is clear that P ≤m M is a submodule of M: choose any x ∈ X, then 0 = 0x ∈ P, and if a ∈ R, x = a1 x1 + · · · + am xm and y = b1 y1 + · · · + bn yn ∈ P, then ax = (a a1)x1 + · · · + (a am)xm ∈ P and x + y = a1 x1 + · · · + am xm + b1 y1 + · · · + bn yn ∈ P. Thus by definition of 〈X 〉m we get 〈X 〉m ⊆ P. Now suppose Q ≤m M is any submodule of M containing X ⊆ Q. Then for any ai ∈ R and any xi ∈ X ⊆ Q we get ai xi ∈ Q and hence ∑i ai xi ∈ Q, too. That is P ⊆ Q, and as Q has been arbitrary this means P ⊆ 〈X 〉m, by the definition of 〈X 〉m.

• Denote the set P := {∑i ai ∏j xi,j | ai ∈ R, xi,j ∈ X} of which we claim P = 〈X 〉b. Again it is clear that X ⊆ P, as x = 1x ∈ P for any x ∈ X. Further it is clear that P ≤b A is a sub-semi-algebra of A: choose any x ∈ X, then 0 = 0x ∈ P, and if a ∈ R, f = ∑i ai ∏j xi,j and g = ∑k bk ∏l yk,l, then af = ∑i (a ai) ∏j xi,j ∈ P, f + g = ∑i ai ∏j xi,j + ∑k bk ∏l yk,l ∈ P and by general distributivity fg = ∑i ∑k (ai bk)(∏j xi,j)(∏l yk,l) ∈ P. Thus by definition of 〈X 〉b we get 〈X 〉b ⊆ P. Now suppose Q ≤b A is any sub-semi-algebra of A containing X. Then for any ai ∈ R and any xi,j ∈ X ⊆ Q we get ∏j xi,j ∈ Q, hence ai ∏j xi,j ∈ Q and finally ∑i ai ∏j xi,j ∈ Q. That is P ⊆ Q, and as Q has been arbitrary, this means P ⊆ 〈X 〉b, by definition of 〈X 〉b.

• By construction P := 〈X ∪ {1} 〉b is a sub-semi-algebra containing 1, in other words a subalgebra. And as X ⊆ P this implies 〈X 〉a ⊆ P. And as any subalgebra Q ≤a A contains 1, we get the implications X ⊆ Q =⇒ X ∪ {1} ⊆ Q =⇒ P ⊆ Q. This also proves P ⊆ 〈X 〉a, as Q has been arbitrary.

2

Proof of (1.51):
We first have to prove that a ∩ b, a + b and a b truly are ideals of R. In the case of a ∩ b this has already been done in (1.47). For a + b it is clear that 0 = 0 + 0 ∈ a + b. And if a + b and a′ + b′ ∈ a + b then also (a + b) + (a′ + b′) = (a + a′) + (b + b′) ∈ a + b and −(a + b) = (−a) + (−b) ∈ a + b. Thus consider any f ∈ R; as a and b are ideals we have fa, af ∈ a and fb, bf ∈ b again. Hence we get f(a + b) = (fa) + (fb) ∈ a + b and (a + b)f = (af) + (bf) ∈ a + b. Likewise 0 = 0 · 0 ∈ a b. And if f = ∑i ai bi and g = ∑j cj dj ∈ a b then it is clear from the definition that f + g ∈ a b. Further −f = ∑i (−ai) bi ∈ a b, and for any arbitrary h ∈ R we also get hf = ∑i (h ai) bi ∈ a b and fh = ∑i ai (bi h) ∈ a b by the same reasoning as for a + b.

Thus it remains to prove the chain of inclusions a b ⊆ a ∩ b ⊆ a ⊆ a + b. Hereby a ⊆ a + b is clear by taking b = 0 in a + b ∈ a + b. And a ∩ b ⊆ a is true trivially. Thus it only remains to verify a b ⊆ a ∩ b. That is we consider f = ∑i ai bi ∈ a b. Since ai ∈ a and a is an ideal we have ai bi ∈ a and hence even f ∈ a. Likewise we can verify f ∈ b, such that f ∈ a ∩ b as claimed.
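The chain a b ⊆ a ∩ b ⊆ a ⊆ a + b can be made concrete in R = Z with a = 4Z and b = 6Z, where the product is 24Z, the intersection 12Z and the sum 2Z. A finite-window illustration (ad-hoc helper, not part of the text):

```python
# The chain a*b ⊆ a ∩ b ⊆ a ⊆ a + b in Z: 24Z ⊆ 12Z ⊆ 4Z ⊆ 2Z,
# checked on the window |x| <= 600.
def ideal(n, bound=600):
    return {x for x in range(-bound, bound + 1) if x % n == 0}

A, B = ideal(4), ideal(6)
prod = ideal(24)      # 4Z * 6Z = 24Z (products generate multiples of 24)
inter = ideal(12)     # 4Z ∩ 6Z = 12Z
summ = ideal(2)       # 4Z + 6Z = gcd(4,6)Z = 2Z

assert A & B == inter
assert {a + b for a in ideal(4, 300) for b in ideal(6, 300)} <= summ
assert prod <= inter <= A <= summ      # the chain of inclusions
print("24Z ⊆ 12Z ⊆ 4Z ⊆ 2Z verified on a finite window")
```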

2

Proof of (1.52):
The associativity of ∩ is clear (from the associativity of the logical "and"). The associativity of ideals (a + b) + c = a + (b + c) is immediate from the associativity of elements (a + b) + c = a + (b + c). The same is true for the commutativity of ∩ and +. Next we prove (a b) c ⊆ a (b c). That is we are given an element of (a b) c, which is of the form

∑j=1..n (∑i=1..m ai,j bi,j) cj = ∑i=1..m ∑j=1..n ai,j (bi,j cj)

for some ai,j ∈ a, bi,j ∈ b and cj ∈ c. However from the latter representation it is clear that this element is also contained in a (b c), which establishes the inclusion. The converse inclusion can be shown in complete analogy, which proves the associativity (a b) c = a (b c). And if R is commutative, the commutativity of ideals a b = b a is immediate from the commutativity of the elements ab = ba. Thus it remains to check the distributivity

a (b + c) = (a b) + (a c)

For the inclusion "⊆" we are given elements ai ∈ a, bi ∈ b and ci ∈ c (where i ∈ 1 . . . m) such that ∑i ai (bi + ci) ∈ a (b + c). But it is already obvious that ∑i ai (bi + ci) = ∑i ai bi + ∑i ai ci ∈ (a b) + (a c). And for the converse inclusion "⊇" we are given ai, a′j ∈ a, bi ∈ b and cj ∈ c such that ∑i ai bi + ∑j a′j cj ∈ (a b) + (a c). This however can be rewritten into the form ∑i ai bi + ∑j a′j cj = ∑i ai (bi + 0) + ∑j a′j (0 + cj) ∈ a (b + c). In complete analogy it can be shown that (a + b) c = (a c) + (b c).

2

Proof of (1.54):
By assumption we have a = 〈X 〉i and b = 〈Y 〉i, in particular X ⊆ a and Y ⊆ b. Hence we get X ∪ Y ⊆ a ∪ b ⊆ a + b and therefore 〈X ∪ Y 〉i ⊆ a + b (as a + b is an ideal of R). For the converse inclusion we regard f = ∑i ai xi bi ∈ a and g = ∑j cj yj dj ∈ b (note that this representation with xi ∈ X and yj ∈ Y is allowed, due to (1.50)). And hence we get f + g = ∑i ai xi bi + ∑j cj yj dj ∈ 〈X ∪ Y 〉i. Together we have found a + b = 〈X ∪ Y 〉i.

For a b = 〈X Y 〉i we have assumed R to be commutative. The one inclusion is clear: as X ⊆ a and Y ⊆ b we trivially have X Y ⊆ a b. And as a b is an ideal of R this implies 〈X Y 〉i ⊆ a b. For the converse inclusion we regard f ∈ a and g ∈ b as above. Then fg = ∑i ∑j (ai bi cj dj)(xi yj) ∈ 〈X Y 〉i (due to the commutativity of R), which also establishes 〈X Y 〉i = a b.

2

Proof of (1.55):

(i) First of all it is clear that b \ a ⊆ b. And hence we find 〈b \ a 〉i ⊆ 〈b 〉i = b (as b is an ideal already). Thus we have proved

〈b \ a 〉i ⊆ b

Now fix any b ∈ b \ a (this is possible by assumption a ⊂ b) and consider any a ∈ a. Then a − b ∈ b but a − b ∉ a (else let α := a − b ∈ a, then b = a − α ∈ a, a contradiction). Thus a − b ∈ b \ a and hence a = b + (a − b) ∈ 〈b \ a 〉i. As a has been arbitrary this yields a ⊆ 〈b \ a 〉i and thereby we have also proved the converse inclusion

b = a ∪ (b \ a) ⊆ a ∪ 〈b \ a 〉i = 〈b \ a 〉i

(ii) By (1.50) the ideal generated by the union of the ai consists of finite sums of elements of the form ai xi bi, where ai, bi ∈ R and xi ∈ ai. But as ai is an ideal we get ai xi bi ∈ ai already. And conversely any finite sum of elements xi ∈ ai can be realized by taking ai = bi = 1.

(iii) Here we just rewrote the equality in (ii) in several ways: given a sum ∑i∈I ai where ai ∈ ai and Ω := {i ∈ I | ai ≠ 0} is finite, then ∑i∈I ai = ∑i∈Ω ai by definition of arbitrary sums. And likewise this is just ∑k ak where n := #Ω, Ω = {i(1), . . . , i(n)} and ak = ai(k).

(iv) The general distributivity rule can be seen by a straightforward computation using the general rules of distributivity and associativity. First note that by definition and (iii) we have

∑i∈I (ai b) = { ∑i∈Ω ∑k=1..n(i) ai,k bi,k | Ω ⊆ I finite, ai,k ∈ ai, bi,k ∈ b }

Now let n := max{n(i) | i ∈ Ω} and for any k > n(i) let ai,k := 0 ∈ ai. Then we may rewrite the latter set in the form

∑i∈I (ai b) = { ∑k=1..n ∑i∈Ω ai,k bi,k | Ω ⊆ I finite, ai,k ∈ ai, bi,k ∈ b }

Let us now regard the other set involved. Again we will compute what this set means explicitly - and thereby we will find the same identity as the one we have just proved

(∑i∈I ai) b = { ∑k=1..n (∑i∈Ω(k) ai,k) bk | Ω(k) ⊆ I finite, ai,k ∈ ai, bk ∈ b }

Then (in analogy to the above) we take Ω := Ω(1) ∪ · · · ∪ Ω(n), ai,k := 0 for i ∉ Ω(k) and bi,k := bk. Then this turns into

(∑i∈I ai) b = { ∑k=1..n ∑i∈Ω ai,k bi,k | Ω ⊆ I finite, ai,k ∈ ai, bi,k ∈ b }

2

Proof of (1.56):

(i) (a) =⇒ (b): 1 ∈ R = a + b = {a + b | a ∈ a, b ∈ b} is clear. Hence there are a′ ∈ a and b ∈ b such that 1 = a′ + b. Now let a := −a′ ∈ a, then 1 = (−a) + b and hence b = 1 + a ∈ (1 + a) ∩ b. (b) =⇒ (c): let b ∈ (1 + a) ∩ b, that is there is some a′ ∈ a such that b = 1 + a′. Again we let a := −a′ ∈ a, then a + b = 1. (c) =⇒ (a): by the same reasoning as before we find 1 = a + b ∈ a + b. And as a + b is an ideal of R, this implies a + b = R.

(ii) As a and b are coprime there are some a ∈ a and b ∈ b such that a + b = 1 - due to (i). Using these we compute (where C(i+j, h) denotes the binomial coefficient)

1 = 1^(i+j) = (a + b)^(i+j) = ∑h=0..i+j C(i+j, h) a^h b^(i+j−h)
 = ∑h=0..i C(i+j, h) a^h b^(i+j−h) + ∑h=i+1..i+j C(i+j, h) a^h b^(i+j−h)
 = b^j ∑h=0..i C(i+j, h) a^h b^(i−h) + a^i ∑h=1..j C(i+j, i+h) a^h b^(j−h)
 ∈ b^j + a^i

(iii) We will use induction on k. The case k = 1 is trivial and hence we base the induction at k = 2. In this case let us first choose a1, a2 ∈ a, b1 ∈ b1 and b2 ∈ b2 such that a1 + b1 = 1 and a2 + b2 = 1. Then

b1 b2 = (1 − a1)(1 − a2) = 1 − a1 − a2 + a1 a2

This implies 1 = (a1 + a2 − a1 a2) + b1 b2 ∈ a + b1 b2, which settles the case k = 2. So we now assume k ≥ 2 and are given a + bi = R for any i ∈ 1 . . . k + 1. By the induction hypothesis we in particular get a + b1 . . . bk = R. And together with a + bk+1 = R and the case k = 2 this yields the induction step a + (b1 . . . bk) bk+1 = R.

(iv) We will prove the two inclusions by induction on k separately. The case k = 1 is trivial, so we will start with k = 2 only. In this case the inclusion a1 a2 ⊆ a1 ∩ a2 has already been proved. And if k ≥ 2 then

(a1 . . . ak) ak+1 ⊆ (a1 . . . ak) ∩ ak+1 ⊆ (a1 ∩ · · · ∩ ak) ∩ ak+1

due to the case k = 2 and the induction hypothesis respectively. For the converse inclusion we start with k = 2 again. And as a1 and a2 are coprime we may find a1 ∈ a1 and a2 ∈ a2 such that a1 + a2 = 1. If now x ∈ a1 ∩ a2 is an arbitrary element then

x = x1 = x(a1 + a2) = xa1 + xa2 = a1x + a2x ∈ a1 a2

This proves a1 ∩ a2 ⊆ a1 a2 and hence we may commence with k ≥ 2. In this case we are given some x ∈ a1 ∩ · · · ∩ ak ∩ ak+1. In particular this is x ∈ a1 ∩ · · · ∩ ak and hence x ∈ a1 . . . ak by the induction hypothesis. Thus we get x ∈ (a1 . . . ak) ∩ ak+1. But because of (iii) (using a := ak+1 and bi := ai) we know that a1 . . . ak and ak+1 are coprime. Therefore we may apply the case k = 2 again to finally find x ∈ (a1 . . . ak) ak+1.
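Both statements can be observed in R = Z: aZ and bZ are coprime exactly when gcd(a, b) = 1, and in that case the intersection equals the product, while for non-coprime generators the product is strictly smaller. A finite-window illustration (ad-hoc helper, bounds are assumptions of the sketch):

```python
# Coprime ideals in Z: gcd(4, 9) = 1, so 4Z ∩ 9Z = 4Z * 9Z = 36Z;
# for the non-coprime pair 4, 6 the product 24Z is strictly inside 12Z.
from math import gcd

def ideal(n, bound=1000):
    return {x for x in range(-bound, bound + 1) if x % n == 0}

# coprime case
assert gcd(4, 9) == 1
assert ideal(4) & ideal(9) == ideal(36)          # intersection = product

# witnesses a1 + a2 = 1 with a1 in 4Z and a2 in 9Z
a1, a2 = -8, 9
assert a1 + a2 == 1 and a1 % 4 == 0 and a2 % 9 == 0

# non-coprime case: proper inclusion
assert ideal(24) < ideal(12)
print("for coprime ideals the intersection equals the product")
```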

2

Proof of (1.57):
We first prove that ∼ is an equivalence relation. Reflexivity: a ∼ a is clear, as a − a = 0 ∈ a. Symmetry: if a ∼ b then a − b ∈ a and hence b − a = −(a − b) ∈ a, which proves b ∼ a. Transitivity: if a ∼ b and b ∼ c then a − c = (a − b) + (b − c) ∈ a, such that a ∼ c. If now a ∈ R then

[a] = {b ∈ R | a ∼ b} = {b ∈ R | b − a ∈ a} = {a + h | h ∈ a} = a + a

Now assume that a even is an ideal of R; then we have to prove that R/a is a ring under the operations + and · as we have introduced them. Thus we prove the well-definedness: suppose a + a = a′ + a and b + a = b′ + a, then a − a′ and b − b′ ∈ a and hence (a + b) − (a′ + b′) = (a − a′) + (b − b′) ∈ a. This means a + b ∼ a′ + b′ and hence (a + b) + a = (a′ + b′) + a. Analogously

ab − a′b′ = ab − a′b + a′b − a′b′ = (a − a′)b + a′(b − b′)

As a is an ideal, both (a − a′)b and a′(b − b′) are contained in a and hence also the sum ab − a′b′ ∈ a. This means ab ∼ a′b′ and hence ab + a = a′b′ + a. Hence these operations are well-defined. And all the properties (associativity, commutativity, distributivity and so forth) are immediate from the respective properties of R. This also shows that 1 + a is the unit element of R/a (supposed 1 is the unit element of R).

2

Proof of (3.13):

• We first prove that ∼ truly is an equivalence relation. The reflexivity x ∼ x is clear, as x − x = 0 ∈ P. For the symmetry suppose x ∼ y, that is y − x ∈ P and hence x − y = −(y − x) = (−1)(y − x) ∈ P. For the transitivity we finally consider x ∼ y and y ∼ z. That is y − x ∈ P and z − y ∈ P, and this yields z − x = (z − y) + (y − x) ∈ P. As in the proof of (1.57) it is clear that [x] = x + P, as

[x] = {y ∈ M | y − x ∈ P} = x + P

• Next we will prove the well-definedness of the addition and scalar multiplication. Thus consider x + P = x′ + P and y + P = y′ + P, that is x′ − x ∈ P and y′ − y ∈ P. As P is an R-submodule we get (x′ + y′) − (x + y) = (x′ − x) + (y′ − y) ∈ P, and this again is (x + y) + P = (x′ + y′) + P. Likewise for any a ∈ R we get (ax′) − (ax) = a(x′ − x) ∈ P, which is ax + P = ax′ + P.

• Now we immediately see that M/P is an R-module again. The associativity and commutativity of + are inherited trivially from M. And the same is true for the compatibility properties of the scalar multiplication. The neutral element of M/P is given to be 0 + P, as for any x + P ∈ M/P we get (x + P) + (0 + P) = (x + 0) + P = x + P. And the inverse of x + P ∈ M/P is just (−x) + P, as (x + P) + ((−x) + P) = (x + (−x)) + P = 0 + P.

• Finally suppose A is an R-(semi)algebra, then we want to show that A/a is an R-(semi)algebra again. As the R-algebra-ideal a is an R-submodule, A/a already is an R-module, by what we have already shown. Next we have to verify the well-definedness of the multiplication. Thus consider f + a = f′ + a and g + a = g′ + a, then (f′g′) − (fg) = f′(g′ − g) + (f′ − f)g ∈ a, by assumption on a. Now it is clear that A/a inherits the associativity of the multiplication from A. Likewise the compatibility properties of ·, + and the scalar multiplication are inherited trivially. As an example let us regard the identity of

(a(f + a))(g + a) = (af + a)(g + a) = ((af)g) + a = (a(fg)) + a
(f + a)(a(g + a)) = (f + a)(ag + a) = (f(ag)) + a = (a(fg)) + a
a((f + a)(g + a)) = a(fg + a) = (a(fg)) + a

2

Proof of (1.59):
First of all we have (a) =⇒ (c), because b ∈ b + nZ = a + nZ gives some c ∈ nZ such that b = a + c. And c ∈ nZ means nothing but c = kn for some k ∈ Z. For (c) =⇒ (b) we use division with remainder (1.27) to write a = pn + r and b = qn + s, such that r = a mod n and s = b mod n. Then we can compute

qn + s = b = a + kn = pn + r + kn = (p + k)n + r

which yields s − r = (p + k − q)n, such that |s − r| = |p + k − q| · n. As r and s both are contained in 0 . . . n − 1, this implies 0 ≤ |s − r| ≤ max{s, r} < n. But this can only be if |p + k − q| = 0, such that q = p + k and r = s. And the latter just is the claim (b). It remains to prove (b) =⇒ (a): continuing with the notation we have r = s and hence b − a = qn + s − pn − r = (q − p)n. Letting k := q − p we have b − a = kn ∈ nZ. That is a and b are equivalent and hence have the same equivalence classes a + nZ = b + nZ.
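The equivalence just proved, together with the well-definedness from (1.57), can be exercised numerically: two integers represent the same class mod n exactly when they differ by a multiple of n, and arithmetic on classes does not depend on the chosen representatives. A small check in Python (n = 7 is an arbitrary example):

```python
# a + nZ = b + nZ  iff  b = a + kn  iff  a mod n = b mod n, and the
# ring operations on Z/nZ are representative-independent.
n = 7
a, b = 12, 40          # 40 = 12 + 4*7, so both represent the same class
assert (b - a) % n == 0
assert a % n == b % n == 5

# replace the representatives by a + 3n and b - 2n: same results mod n
a2, b2 = a + 3 * n, b - 2 * n
assert (a + b) % n == (a2 + b2) % n
assert (a * b) % n == (a2 * b2) % n
print("arithmetic in Z/7Z is representative-independent")
```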

2

Proof of (1.61):

• We first prove that b/a truly is an ideal of R/a. Thus regard a + a and b + a ∈ b/a, that is a, b ∈ b. Then we clearly get a + b and −a ∈ b, such that (a + a) + (b + a) = (a + b) + a ∈ b/a and −(a + a) = (−a) + a ∈ b/a. Further if f + a ∈ R/a (that is f ∈ R) then fb, bf ∈ b, as b is an ideal. And hence (f + a)(b + a) = (fb) + a ∈ b/a, likewise (b + a)(f + a) ∈ b/a.

• Next we prove that b := {b ∈ R | b + a ∈ u} is an ideal of R, supposed u is an ideal of R/a. Thus consider a, b ∈ b, then (a + b) + a = (a + a) + (b + a) ∈ u and (−a) + a = −(a + a) ∈ u, which proves a + b and −a ∈ b. And if f ∈ R then (fb) + a = (f + a)(b + a) ∈ u, which proves fb ∈ b (likewise bf ∈ b). Hence b is an ideal, and a ⊆ b is clear, as for any a ∈ a we get a + a = 0 + a ∈ u.

• Thus we have proved that the maps b 7→ b/a and u 7→ b are well-defined. We now claim that they even are mutually inverse. For the first composition we start with an ideal b of R with a ⊆ b. Then

b 7→ b/a 7→ {b ∈ R | b + a ∈ b/a} = {b | b ∈ b} = b

For the converse composition we start with an ideal u of R/a, which is mapped to b := {b ∈ R | b + a ∈ u}. Now it is easy to see that

u 7→ b 7→ b/a = {b + a | b + a ∈ u} = u

• Next we prove that this correspondence respects intersections, sums and products of ideals as claimed. That is, b and c are ideals of R with a ⊆ b and a ⊆ c. Then

b/a ∩ c/a = {b + a | b ∈ b ∩ c} = (b ∩ c)/a

b/a + c/a = {(b + a) + (c + a) | b ∈ b, c ∈ c} = {(b + c) + a | b ∈ b, c ∈ c} = (b + c)/a

(b/a)(c/a) = {∑i=1..n (bi + a)(ci + a) | bi ∈ b, ci ∈ c} = {(∑i=1..n bi ci) + a | bi ∈ b, ci ∈ c} = (b c)/a

• The final statement of the lemma is concerned with the generation of ideals. For the inclusion "⊇" we consider any u ∈ 〈X/a 〉i. That is there are ai + a, bi + a ∈ R/a and xi + a ∈ X/a such that

u = ∑i=1..k (ai + a)(xi + a)(bi + a) = (∑i=1..k ai xi bi) + a ∈ (〈X 〉i + a)/a

And for the converse inclusion "⊆" we are given some a ∈ a and y ∈ 〈X 〉i. This again means that there are some ai, bi ∈ R and xi ∈ X such that y = ∑i ai xi bi. We now undo the above computation

(a + y) + a = y + a = ∑i=1..k (ai + a)(xi + a)(bi + a) ∈ 〈X/a 〉i

• We now prove that this correspondence even respects radical ideals. First note that an ideal b of R is radical iff √b = b. Now remark that (b + a)^k = b^k + a ∈ b/a is (by definition) equivalent to b^k ∈ b. Thus we found

√(b/a) = (√b)/a

And in particular we see that b/a is a radical ideal if and only if b is a radical ideal. And this has been claimed.

• If p is a prime ideal of R (with a ⊆ p), then regard any a, b ∈ R. Then (a + a)(b + a) = ab + a ∈ p/a is equivalent to ab ∈ p, which again is equivalent to a ∈ p or b ∈ p (as p is prime). And again this is equivalent to a + a ∈ p/a or b + a ∈ p/a. Hence p/a is prime, too. For the converse implication just pursue the same arguments in inverse order.

• As the correspondence b ←→ b/a maintains the inclusion ⊆ of subsets, it is clear that maximal elements mutually correspond to each other. That is m is maximal if and only if m/a is maximal.
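The correspondence theorem can be observed in R = Z with a = 12Z: the ideals of Z/12Z correspond exactly to the ideals dZ ⊇ 12Z, that is, to the divisors d of 12. A brute-force enumeration in Python (illustration only; it uses that Z/12Z is a principal ideal ring, so every ideal is generated by one element):

```python
# Ideals of Z/12Z correspond to the divisors of 12 (ideals dZ with
# 12Z ⊆ dZ). Enumerate all one-generated subsets and compare.
n = 12
ring = set(range(n))

def is_ideal(S):
    return (0 in S
            and all((x + y) % n in S for x in S for y in S)
            and all((r * x) % n in S for r in ring for x in S))

# every ideal of Z/12Z is generated by a single element d
ideals = {frozenset((d * k) % n for k in range(n)) for d in range(n)}
ideals = {I for I in ideals if is_ideal(I)}

divisors = [d for d in range(1, n + 1) if n % d == 0]
expected = {frozenset(range(0, n, d)) for d in divisors}
assert ideals == expected
print("ideals of Z/12Z correspond to the divisors of 12")
```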

2

Proof of (3.14):

(i) We will first prove the equivalence x + L ∈ P/L ⇐⇒ x ∈ P. The one implication is trivial: if x ∈ P, then x + L ∈ P/L by definition. Conversely suppose there is some p ∈ P such that x + L = p + L ∈ P/L. By definition of M/L this means x − p ∈ L, that is x − p = ℓ for some ℓ ∈ L ⊆ P. And therefore x = p + ℓ ∈ P, as p, ℓ ∈ P and P ≤m M.

Now we are ready to prove P/L ≤m M/L: clearly 0 = 0 + L ∈ P/L, as we have 0 ∈ P. Now consider x + L and y + L ∈ P/L; by what we have just seen this means x and y ∈ P. As P ≤m M this implies x + y ∈ P and hence (x + L) + (y + L) = (x + y) + L ∈ P/L again. Finally consider any a ∈ R; as x ∈ P and P ≤m M this implies ax ∈ P and hence also a(x + L) = (ax) + L ∈ P/L. Altogether P/L is a submodule of M/L.

(ii) In (i) we have seen that P 7→ P/L is a well-defined map, so next we will point out that U 7→ (U) := {x ∈ M | x + L ∈ U} is well-defined, too. That is we have to verify (U) ≤m M and L ⊆ (U). The containment L ⊆ (U) is clear: if ℓ ∈ L then ℓ + L = 0 + L = 0 ∈ U and hence ℓ ∈ (U). In particular 0 ∈ (U). Next suppose we are given x, y ∈ (U), that is x + L, y + L ∈ U. Then (x + y) + L = (x + L) + (y + L) ∈ U, as U ≤m M/L, and hence x + y ∈ (U) again. Likewise for any a ∈ R we get (ax) + L = a(x + L) ∈ U and hence ax ∈ (U), as x + L ∈ U and U ≤m M/L. Thus both maps given are well-defined. It remains to verify that they are mutually inverse: the one direction is easy, consider any U ≤m M/L, then U 7→ (U) 7→ (U)/L = {x + L | x + L ∈ U} = U. Conversely consider any L ⊆ P ≤m M, then P 7→ P/L 7→ (P/L), yet using (i) we obtain (P/L) = P from the following computation

(P/L) = {x ∈ M | x + L ∈ P/L} = {x ∈ M | x ∈ P} = P

(iii) To prove this claim we just have to iterate the equivalence in (i): x + L ∈ (⋂i Pi)/L is equivalent to x ∈ ⋂i Pi, which again is equivalent to ∀ i ∈ I : x ∈ Pi. Now we use (i) once more to reformulate this into the equivalent statement ∀ i ∈ I : x + L ∈ Pi/L, which obviously translates into x + L ∈ ⋂i (Pi/L) again.

(iv) We want to verify ∑i (Pi/L) = (∑i Pi)/L; to do this recall the explicit description of an arbitrary sum of submodules for the Pi/L ≤m M/L

∑i∈I (Pi/L) = { ∑i∈Ω (xi + L) | Ω ⊆ I finite, xi + L ∈ Pi/L }

Next recall the equivalence in (i); this allows us to simply substitute xi + L ∈ Pi/L by xi ∈ Pi. Further - by definition of + in M/L - we can rewrite the sum ∑i (xi + L) = (∑i xi) + L. Together this yields

∑i∈I (Pi/L) = { (∑i∈Ω xi) + L | Ω ⊆ I finite, xi ∈ Pi }

That is the term on the right hand side is (apart from the +L in (∑i xi) + L) nothing but the sum ∑i Pi again (by the explicit description of arbitrary sums of the submodules Pi). That is we have already arrived at our claim ∑i (Pi/L) = (∑i Pi)/L.

(v) Let us denote P′i := Pi + L, then it is clear that L ⊆ P′i for any i ∈ I. But on the other hand ∑i P′i = ∑i (Pi + L) = (∑i Pi) + L, as the finitely many occurrences of elements ℓi ∈ L can be composed to a single occurrence of ℓ := ∑i ℓi ∈ L. But by assumption L is already contained in ∑i Pi and hence (∑i Pi) + L = ∑i Pi. That is we have obtained ∑i P′i = ∑i Pi. Now statement (iv), that has just been proved, yields (∑i Pi)/L = (∑i P′i)/L = ∑i (P′i/L), as claimed.

(vi) To prove this identity we directly refer to the definition of the R-submodule generated by X/L ⊆ M/L. And this is given to be

〈X/L 〉m = ⋂ { U ≤m M/L | X/L ⊆ U }

By (ii) the submodules U of M/L correspond bijectively to the submodules P of M containing L. I.e. we may insert P/L for U, yielding

〈X/L 〉m = ⋂ { P/L | L ⊆ P ≤m M, X/L ⊆ P/L }

Clearly X/L ⊆ P/L translates into ∀ x ∈ X : x + L ∈ P/L, and due to the equivalence in (i) this is ∀ x ∈ X : x ∈ P, i.e. X ⊆ P, therefore

〈X/L 〉m = ⋂ { P/L | L ⊆ P ≤m M, X ⊆ P }

Of course L ⊆ P and X ⊆ P can be written more compactly as X ∪ L ⊆ P. And due to (iii) the intersection commutes with taking quotients, such that we may reformulate

〈X/L 〉m = ( ⋂ { P | P ≤m M, X ∪ L ⊆ P } ) / L

That is we have arrived at 〈X/L 〉m = 〈X ∪ L 〉m/L. But by the explicit representation of the generated submodules (3.11) it is clear that 〈X ∪ L 〉m = 〈X 〉m + 〈L 〉m. And as L already is a submodule of M we have 〈L 〉m = L. Altogether 〈X/L 〉m = (〈X 〉m + L)/L.

2

Proof of (3.16):

(i) We will first prove that ann(X) is a left-ideal of R. As this is justthe intersection of the annihilators ann(x) (where x ∈ X) it suffices to

328 17 Proofs - Rings and Modules

Page 329: Abstract Algebra - Nasehpour · Abstract Algebra Rings, Modules, Polynomials, ... linear algebra on in nite dimensional vector-spaces, ... Thus we have chosen an unusual approach:

verify ann(x) ≤m R for any x ∈M , due to (1.47). Now 0 ∈ ann(x) isclear, as 0x = 0 for any x ∈M . Now consider any two a, b ∈ ann(x),that is ax = 0 and bx = 0. Then (a + b)x = (ax) + (bx) = 0 + 0 = 0and hence a + b ∈ ann(x) again. Finally consider any r ∈ R, then(ra)x = r(ax) = r 0 = 0 and hence ra ∈ ann(x), as well. Altogetherwe have ann(x) ≤m R and hence ann(X) ≤m R is a left-ideal, as itis an intersection of such.

(i) Next we want to prove that ann(M) i R even is an ideal. As wealready know ann(M) ≤m R it only remains to prove that ar ∈ann(M) for any a ∈ ann(M) and any r ∈ R. As a ∈ ann(M) we haveay = 0 for any y ∈ M . Now consider any x ∈ M , then rx ∈ M , tooas M is an R-module. Thus (ar)x = a(rx) = 0 and as x has beenarbitrary this means ar ∈ ann(M) again.

(ii) Let us denote Φ : R/ann(x) → Rx : b̄ := b + ann(x) ↦ bx, we will first prove the well-definedness and injectivity of this map: ax = Φ(ā) = Φ(b̄) = bx is equivalent to (a − b)x = 0, that is a − b ∈ ann(x) or in other words ā = a + ann(x) = b + ann(x) = b̄. The surjectivity of Φ is clear, as any b ∈ R is allowed. And Φ also is a homomorphism of R-modules, as by definition of the algebraic operations of R/ann(x) we have Φ(ā + b̄) = Φ(a + b) = (a + b)x = (ax) + (bx) = Φ(ā) + Φ(b̄) and Φ(a b̄) = Φ(ab) = (ab)x = a(bx) = aΦ(b̄). Note that this claim could have also been proved by regarding the epimorphism ϕ : R → Rx : b ↦ bx, noting that its kernel is kn(ϕ) = ann(x) and invoking the first isomorphism theorem of R-modules to get Φ.
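An added concrete instance of (ii): take R = Z, M = Z/6Z and x = 2̄; then

```latex
\operatorname{ann}(\bar{2})
  = \{\, a \in \mathbb{Z} \mid 2a \equiv 0 \ (\mathrm{mod}\ 6) \,\}
  = 3\mathbb{Z},
\qquad
\mathbb{Z}/3\mathbb{Z} \;\longrightarrow\; \mathbb{Z}\,\bar{2} = \{\bar{0},\bar{2},\bar{4}\},
\quad b + 3\mathbb{Z} \mapsto b\,\bar{2}
```

is an isomorphism of Z-modules, as both sides have exactly three elements.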

(iii) We first have to check the well-definedness of the scalar multiplication. To do this consider any b + a = c + a, that is a := c − b ∈ a ⊆ annR(M). If now x ∈ M is an arbitrary element, then cx = (a + b)x = ax + bx = 0 + bx = bx, as a annihilates any x ∈ M. Hence (b + a, x) ↦ bx does not depend on the specific b ∈ R chosen. As we have only changed the scalar-multiplication on M, it remains to prove that it still satisfies the properties (M) in the definition of modules. Thus consider any a, b ∈ R and any x, y ∈ M. Then we can compute

(a + a)(x + y) = a(x + y) = ax + ay = (a + a)x + (a + a)y

((a + a) + (b + a))x = ((a + b) + a)x = (a + b)x = ax + bx = (a + a)x + (b + a)x

((a + a)(b + a))x = ((ab) + a)x = (ab)x = a(bx) = (a + a)(bx) = (a + a)((b + a)x)

(1 + a)x = 1x = x

(iv) As R ≠ 0 we have 1 ≠ 0 and as also 1 0 = 0 we already have 0 ∈ tor(M). Now consider any two x, y ∈ tor(M), that is there are some 0 ≠ a, b ∈ R such that ax = 0 and by = 0. As R is an integral domain we also get ab ≠ 0. Now compute (ab)(x + y) = (ab)x + (ab)y = (ba)x + (ab)y = b(ax) + a(by) = b 0 + a 0 = 0 + 0 = 0. That is (ab)(x + y) = 0 and hence x + y ∈ tor(M), as ab ≠ 0. Finally consider any



r ∈ R, then we need to show rx ∈ tor(M). But this is clear from a(rx) = (ar)x = (ra)x = r(ax) = r 0 = 0.

(iv) Let us abbreviate T := tor(M), now consider any x + T ∈ tor(M/T). That is there is some 0 ≠ a ∈ R such that (ax) + T = a(x + T) = 0 + T. This is just ax ∈ T, that is there is some 0 ≠ b ∈ R, such that (ba)x = b(ax) = 0. As R is an integral domain, we get ba ≠ 0 and this means x ∈ T, such that x + T = 0 + T. Thus we have just proved tor(M/T) ⊆ { 0 + T } = 0 and the converse containment is clear.
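An added example for (iv): over R = Z take M = Z ⊕ Z/2Z; a pair (n, ȳ) is annihilated by some a ≠ 0 precisely if n = 0, hence

```latex
\operatorname{tor}(M) = 0 \oplus \mathbb{Z}/2\mathbb{Z}
\qquad\text{and}\qquad
M/\operatorname{tor}(M) \;\cong\; \mathbb{Z},
```

which is torsion-free, in accordance with tor(M/tor(M)) = 0.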

(v) (a) =⇒ (c): Consider any a ∈ R and any x ∈ M such that ax = 0. If a = 0, then we are done, else we have x ∈ tor(M) = 0 (by definition of tor(M), as ax = 0 and a ≠ 0) and this is x = 0 again.

(v) (c) =⇒ (b): Consider any a ∈ zdR(M), that is there is some x ∈ M such that x ≠ 0 and ax = 0. Thus by assumption (c) we find a = 0, and as a has been arbitrary, this means zdR(M) ⊆ { 0 }, as claimed.

(v) (b) =⇒ (a): As R ≠ 0 we have 1 ≠ 0 and as 1 0 = 0 this implies 0 ∈ tor(M). Conversely consider any x ∈ tor(M), that is there is some a ∈ R such that a ≠ 0 and ax = 0. Suppose we had x ≠ 0, then a ∈ zdR(M) ⊆ { 0 } (by definition of zdR(M), as x ≠ 0 and ax = 0) which is a = 0 again. But as a ≠ 0 this is a contradiction. Thus we necessarily have x = 0 and as x has been arbitrary this means tor(M) ⊆ { 0 }, as well.

(vi) Let us abbreviate P := 〈X 〉m, then by definition it is clear that X ⊆ P and hence ann(P) ⊆ ann(X). For the converse inclusion consider any a ∈ ann(X), then we have to show a ∈ ann(P). That is ap = 0 for any p ∈ P. By (3.11) any p ∈ P is given to be p = a1x1 + · · · + anxn for some ai ∈ R and xi ∈ X. Therefore ap = (aa1)x1 + · · · + (aan)xn = a1(ax1) + · · · + an(axn) = a1 0 + · · · + an 0 = 0 as R is commutative. Altogether that is ann(X) = ann(P).

Now suppose M = 〈X 〉m where X = { xi | i ∈ I } and consider the map ϕ : R → M I defined by ϕ(a) := (axi) (where i ∈ I). If now a ∈ ann(M) = ann(X) then axi = 0 for any i ∈ I and hence ϕ(a) = 0 is clear. Therefore we have ann(M) ⊆ kn(ϕ). Conversely ϕ(a) = 0 means axi = 0 for any i ∈ I, that is a ∈ ann(X) = ann(M) and hence even kn(ϕ) = ann(M). Thereby R/ann(M) → M I is well-defined and injective according to the first isomorphism theorem (3.51.(ii)).

(vii) First suppose that x = 0, then ax = a0 = 0 is clear for any a ∈ S and hence ann(x) = S. Secondly consider x ≠ 0 and a ∈ ann(x), that is ax = 0. Suppose we had a ≠ 0, too, then (as S is a skew-field) there was some a−1 ∈ S such that a−1a = 1. Then we would be able to compute x = 1x = (a−1a)x = a−1(ax) = a−1 0 = 0, a contradiction to x ≠ 0, such that a = 0 and therefore ann(x) = 0.

(viii) Let p = ann(x) be a maximal annihilator ideal of M with p ≠ R. Further consider any a, b ∈ R with ab ∈ p but b ∉ p, then it remains to verify a ∈ p. As b ∉ p we have bx ≠ 0. Now consider a := ann(bx), as bx ≠ 0 we have 1 ∉ a and hence a ≠ R. Also if p ∈ p, then p(bx) = b(px) = b0 = 0 and hence p ∈ a, which means p ⊆ a. But by maximality of p this is p = a. Now as ab ∈ p we have a(bx) = (ab)x = 0 such that a ∈ a = p, as has been claimed.



(ix) Choose any x ∈ M, yet x ≠ 0. Then Rx = { bx | b ∈ R } is a non-zero submodule of M. But as 0 ≠ Rx ≤m M and M is simple we get Rx = M. Hence we get

m := annR(M) = { a ∈ R | ∀ y ∈ Rx : ay = 0 } = { a ∈ R | ∀ b ∈ R : abx = 0 } = { a ∈ R | ax = 0 } = ann(x)

Here { a ∈ R | ∀ b ∈ R : abx = 0 } ⊆ ann(x) is clear - just take b = 1. And the converse inclusion follows easily: if ax = 0, then abx = b(ax) = b0 = 0 as well. So by (ii) we obtain an isomorphism of R-modules

R/m ∼→ M : a + m ↦ ax

Now, as M is simple, so is R/m. That is 0 = m/m is a maximal ideal of R/m. That is R/m is a field and this again means that m is a maximal ideal of R.
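An added example for (ix): the Z-module M = Z/5Z is simple, and

```latex
\mathfrak{m} = \operatorname{ann}_{\mathbb{Z}}(M) = \operatorname{ann}(\bar{1}) = 5\mathbb{Z},
\qquad
\mathbb{Z}/5\mathbb{Z} \;\cong\; M
```

and indeed Z/5Z is a field, so 5Z is a maximal ideal of Z, as the proof predicts.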

(x) Let us denote the set of all proper, pure submodules P of M by P, that is P := { P ≤m M | P ≠ M, P is pure }. As M is torsion-free 0 ≤m M is a pure submodule of M (ax = 0 implies a = 0 or x = 0). Also 0 ≠ M such that P is non-empty. Now consider a chain (Pi) ⊆ P in P, then we let

Q := ⋃i∈I Pi

As (Pi) is a chain we know that Q is a submodule of M again. Now suppose we had Q = M. As M is finitely generated we find some x1, . . . , xn ∈ M such that M = lh(x1, . . . , xn). And as (for any k ∈ 1 . . . n) xk ∈ M = Q there is some i(k) ∈ I such that xk ∈ Pi(k). Passing to the maximum of the Pi(k) we find xk ∈ Pi (for all k) for some i ∈ I. But this would imply Pi = M in contradiction to our assumption. Hence Q ≠ M.

In fact Q even is pure again: suppose we have ax ∈ Q, that is ax ∈ Pi for some i ∈ I. And as this Pi has been pure we find x ∈ Pi which yields x ∈ Q again. Hence every chain (Pi) in P has an upper bound Q ∈ P and therefore P has a maximal element, due to the lemma of Zorn.

□

Proof of (3.17):

• Let us first prove the identity part in the modular rule, starting with (Q ∩ U) + P ⊆ Q ∩ (P + U). Thus we consider any x ∈ Q ∩ U and p ∈ P and have to verify x + p ∈ Q ∩ (P + U). As x ∈ Q and p ∈ P ⊆ Q we have x + p ∈ Q. And as also x ∈ U we have x + p ∈ U + P hence the claim. Conversely for (Q ∩ U) + P ⊇ Q ∩ (P + U) we are given any p ∈ P and u ∈ U such that q := p + u ∈ Q and we have to verify p + u ∈ (Q ∩ U) + P. But as u = q − p and p, q ∈ Q we find u ∈ Q and hence u ∈ Q ∩ U. And this already is p + u = u + p ∈ (Q ∩ U) + P.



• So it remains to verify the implication in the modular rule. As P ⊆ Q by assumption it remains to prove Q ⊆ P. Thus consider any q ∈ Q, then in particular q ∈ Q + U = P + U. That is there are p ∈ P and u ∈ U such that q = p + u. And as p ∈ P ⊆ Q this yields u = q − p ∈ Q, such that u ∈ Q ∩ U = P ∩ U. In particular u ∈ P and hence q = p + u ∈ P, as claimed.

□
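The assumption P ⊆ Q cannot be dropped from the identity part of the modular rule; an added counterexample over a field K (with M = K²) is:

```latex
P = K(1,0),\quad U = K(0,1),\quad Q = K(1,1):
\qquad
(Q \cap U) + P \;=\; 0 + P \;=\; P
\;\neq\; Q \;=\; Q \cap K^2 \;=\; Q \cap (P + U)
```

Here the identity fails precisely because P is not contained in Q.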

Proof of (3.30):

(i) Recall that aP has been defined to be the submodule generated by elements of the form αp with α ∈ a and p ∈ P. In the light of (3.11) this turns into

aP = lh{ αp ∣ α ∈ a, p ∈ P } = { b1α1p1 + · · · + bnαnpn ∣ 1 ≤ n ∈ N, bi ∈ R, αi ∈ a, pi ∈ P }

But as a is an ideal and αi ∈ a we find that ai := biαi ∈ a again. And hence aP is contained in the set given in claim (i). Conversely any a1p1 + · · · + anpn can be realized by taking b1 = · · · = bn = 1 and αi = ai. Thus the set given in claim (i) is also contained in aP:

aP = { a1p1 + · · · + anpn ∣ 1 ≤ n ∈ N, ai ∈ a, pi ∈ P }
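An added example of this description: for R = M = Z, a = 2Z and P = 3Z one obtains

```latex
\mathfrak{a}P
  = \{\, a_1 p_1 + \dots + a_n p_n \mid a_i \in 2\mathbb{Z},\; p_i \in 3\mathbb{Z} \,\}
  = 6\mathbb{Z}
```

since every such sum is divisible by 6, and conversely 6 = 2 · 3 itself lies in aP.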

(ii) As x ∈ aP there are some α1, . . . , αm ∈ a and q1, . . . , qm ∈ P such that x = α1q1 + · · · + αmqm as we have seen in (i). But as P is generated by { pi | i ∈ I } any qk can be written in the form

qk = ∑i∈I bk,i pi

for some bk,i ∈ R of which only finitely many are non-zero. Inserting these equations into our presentation of x we find

x = ∑k=1..m αk qk = ∑k αk ( ∑i∈I bk,i pi ) = ∑i∈I ( ∑k αk bk,i ) pi

As a is an ideal and any αk ∈ a we also find ai := ∑k αk bk,i ∈ a. And as there are only finitely many i ∈ I such that bk,i ≠ 0, all but finitely many ai are zero. Let us denote these non-zero ai ∈ a by a1, . . . , an and the corresponding pi by p1, . . . , pn, then we have arrived at the claim x = a1p1 + · · · + anpn.

(iii) As ap ∈ P for any a ∈ a ⊆ R and any p ∈ P, the submodule generated by these elements is contained in P as well - and this submodule is aP.

(iv) Considering the submodules P and Q with P ⊆ Q we find that for any a ∈ a and any p ∈ P ⊆ Q we get ap ∈ aQ and hence aP ⊆ aQ.



(v) We first prove that a(M/P) is contained in (aM + P)/P, to do this, consider any element of a(M/P), by definition this is of the form (where ak ∈ a and xk ∈ M)

a1(x1 + P) + · · · + an(xn + P) = (a1x1 + · · · + anxn) + P ∈ (aM + P)/P

That is a(M/P) ⊆ (aM + P)/P, conversely consider any element of aM + P, that is we are given any ak ∈ a, xk ∈ M and p ∈ P, then

(a1x1 + · · · + anxn) + p + P = (a1x1 + · · · + anxn) + P = a1(x1 + P) + · · · + an(xn + P) ∈ a(M/P)

That is (aM + P)/P ⊆ a(M/P) as well and hence we have proved the equality of these sets, which had been claimed.

(vi) By definition (3.13) of the quotient module, M/aM is a well-defined R-module. Hence all we have to do is checking the well-definedness of the scalar-multiplication. We will present two ways of demonstrating this: (1) clearly we get a(x + aM) = ax + aM = 0 + aM for any a ∈ a and any x ∈ M. That is a ⊆ annR(M/aM), such that we could simply cite (3.16.(iii)). The more instructive way (2) is a direct computation however: consider b + a = c + a and x + aM = y + aM, that is a := c − b ∈ a and p := y − x ∈ aM. Then cy − bx = cy − by + by − bx = (c − b)y + b(y − x) = ay + bp. But as a ∈ a we find that ay ∈ aM and as p ∈ aM already we see that cy − bx ∈ aM. But this is bx + aM = cy + aM and hence (b + a, x + aM) ↦ bx + aM does not depend on the specific choice of b ∈ R or x ∈ M.
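An added standard instance of (vi): for R = Z, a = pZ (p prime) and M = Z² the quotient

```latex
M/\mathfrak{a}M \;=\; \mathbb{Z}^2 \big/ p\,\mathbb{Z}^2 \;\cong\; \mathbb{F}_p^{\,2},
\qquad \mathbb{F}_p = \mathbb{Z}/p\mathbb{Z}
```

is a vector space over the field Z/pZ, with precisely the scalar multiplication whose well-definedness was just checked.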

(vii) We first have to check the well-definedness of ϕ/a: to do this we consider any x + aM = y + aM in M/aM, that is p := y − x ∈ aM. By definition of aM this means that p can be written as p = a1p1 + · · · + anpn for some ai ∈ a and pi ∈ M. Then we can compute

ϕ(y) − ϕ(x) = ϕ(p) = a1ϕ(p1) + · · · + anϕ(pn) ∈ aN

That is ϕ(y) + aN = ϕ(x) + aN, such that the map ϕ/a truly is well-defined. The properties of being a homomorphism are straightforward: consider any a ∈ R and any x, y ∈ M, then

(ϕ/a)((a + a)(x + aM)) = (ϕ/a)(ax + aM) = ϕ(ax) + aN = aϕ(x) + aN = (a + a)(ϕ(x) + aN) = (a + a)(ϕ/a)(x + aM)

(ϕ/a)((x + aM) + (y + aM)) = (ϕ/a)((x + y) + aM) = ϕ(x + y) + aN = ϕ(x) + ϕ(y) + aN = (ϕ(x) + aN) + (ϕ(y) + aN)



(vii) If ϕ : M → N is surjective, then clearly ϕ/a : M/aM → N/aN is surjective as well: given any y + aN ∈ N/aN we choose any x ∈ M with ϕ(x) = y, then (ϕ/a)(x + aM) = ϕ(x) + aN = y + aN.

If ϕ even is bijective, it remains to check the injectivity of ϕ/a. To do this we regard the following homomorphism of R-modules

ψ : M → N/aN : x 7→ ϕ(x) + aN

That is ψ = πϕ, where π denotes the canonical projection of N onto N/aN. Now by definition of ψ we see knψ = { x ∈ M | ϕ(x) ∈ aN } and therefore aM ⊆ knψ. If conversely ϕ(x) ∈ aN, then ϕ(x) = a1y1 + · · · + anyn for some ai ∈ a and yi ∈ N. Yet as ϕ is bijective we may define xi := ϕ−1(yi), that is yi = ϕ(xi) and hence

ϕ(x) = a1ϕ(x1) + · · · + anϕ(xn) = ϕ(a1x1 + · · · + anxn) ∈ ϕ(aM)

By the bijectivity of ϕ this means x ∈ aM again, such that we have proved knψ = aM. Now note that - by definition of ψ - we have (ϕ/a)(x + aM) = ψ(x). Therefore

kn(ϕ/a) = knψ/aM = aM/aM = 0 ⊆ M/aM

That is ϕ/a has zero-kernel and hence is injective. As the surjectivity has already been proved we have found the bijectivity of ϕ/a.

(viii) Consider any elements z1, . . . , zn ∈ Z, then by definition of the scalar multiplication on M/aM we already see the identity

(a1 + a)z1 + · · · + (an + a)zn = a1z1 + · · · + anzn

And as in both cases the ai ∈ R can be chosen completely arbitrarily this implies that the R-linear hull of Z equals the (R/a)-linear hull.

(ix) Clearly Q is an R-submodule of M/aM if and only if Q equals its R-linear hull. Likewise Q is an (R/a)-submodule of M/aM if and only if Q equals its (R/a)-linear hull. But by (viii) these linear hulls are equal, and hence we get

Q ∈ submR(M/aM) ⇐⇒ lhR(Q) = Q = lhR/a(Q) ⇐⇒ Q ∈ submR/a(M/aM)

Now the explicit description of the (R/a)-submodules of M/aM is just an application of the correspondence theorem (3.14) of modules.

□

Proof of (1.71):

(i) Clearly ϕ(0R) = ϕ(0R + 0R) = ϕ(0R) + ϕ(0R) and subtracting ϕ(0R) from this equation (i.e. adding −ϕ(0R) to both sides of the equation)



we find 0S = ϕ(0R). Hence ϕ(a) + ϕ(−a) = ϕ(a − a) = ϕ(0R) = 0S which proves that ϕ(−a) is the negative −ϕ(a).

(ii) Analogous to the above we find 1S = ϕ(1R) = ϕ(uu−1) = ϕ(u)ϕ(u−1) and 1S = ϕ(1R) = ϕ(u−1u) = ϕ(u−1)ϕ(u) which proves that ϕ(u−1) is the inverse ϕ(u)−1.

(iii) If a, b ∈ R then ψϕ(a + b) = ψ(ϕ(a) + ϕ(b)) = ψϕ(a) + ψϕ(b) and likewise for the multiplication. Hence ψϕ is a homomorphism of semi-rings again. And in the case of rings and homomorphisms of rings we also get ψϕ(1R) = ψ(1S) = 1T.

(iv) Consider any x, y ∈ S, as Φ is surjective there are a, b ∈ R such that x = Φ(a) and y = Φ(b). Now Φ−1(x + y) = Φ−1(Φ(a) + Φ(b)) = Φ−1Φ(a + b) = a + b = Φ−1(x) + Φ−1(y). Likewise we find Φ−1(xy) = Φ−1(x)Φ−1(y). And if Φ is a homomorphism of rings, then Φ−1(1S) = 1R is clear from Φ(1R) = 1S.

(v) Since im(ϕ) = ϕ(R) the first statement is clear from (vi) by taking P = R. So we only have to prove that kn(ϕ) is an ideal of R. First of all we have 0R ∈ kn(ϕ) since ϕ(0R) = 0S by (i). And if a, b ∈ kn(ϕ) then ϕ(a + b) = ϕ(a) + ϕ(b) = 0S + 0S = 0S and hence a + b ∈ kn(ϕ). Analogously ϕ(−a) = −ϕ(a) = −0S = 0S yields −a ∈ kn(ϕ). Now let a ∈ kn(ϕ) again and let b ∈ R be arbitrary. Then ϕ(ab) = ϕ(a)ϕ(b) = 0Sϕ(b) = 0S and hence ab ∈ kn(ϕ) again. Likewise ϕ(ba) = ϕ(b)ϕ(a) = ϕ(b)0S = 0S which yields ba ∈ kn(ϕ).

(vi) As P is a sub-semi-ring of R we have 0R ∈ P and therefore 0S = ϕ(0R) ∈ ϕ(P). Now assume that x = ϕ(a) and y = ϕ(b) ∈ ϕ(P) for some a, b ∈ P. As P is a sub-semi-ring of R we have a + b, −a and ab ∈ P again. And hence x + y = ϕ(a) + ϕ(b) = ϕ(a + b) ∈ ϕ(P), −x = −ϕ(a) = ϕ(−a) ∈ ϕ(P) and xy = ϕ(a)ϕ(b) = ϕ(ab) ∈ ϕ(P). Thus we have proved that ϕ(P) ≤s S is a sub-semi-ring of S again. And if R and S even are rings, P is a subring of R and ϕ is a homomorphism of rings, then also 1S = ϕ(1R) ∈ ϕ(P) as 1R ∈ P.

(vi) And as Q is a sub-semi-ring of S we have 0S ∈ Q and therefore 0S = ϕ(0R) implies 0R ∈ ϕ−1(Q). Now suppose a, b ∈ ϕ−1(Q), that is ϕ(a), ϕ(b) ∈ Q. Then ϕ(a + b) = ϕ(a) + ϕ(b), ϕ(−a) = −ϕ(a) and ϕ(ab) = ϕ(a)ϕ(b) ∈ Q, as Q ≤s S. This means that a + b, −a and ab ∈ ϕ−1(Q) and hence ϕ−1(Q) ≤s R is a sub-semi-ring of R. And in the case of rings we also find 1R ∈ ϕ−1(Q) as in (vi) above.

(vii) First assume that b ≤m S is a left-ideal of S. We have already proved in (vi) that ϕ−1(b) ≤s R is a sub-semi-ring of R (since Q := b ≤s S). Thus it only remains to verify the following: consider any a ∈ R and b ∈ ϕ−1(b). Then ϕ(b) ∈ b and as b ≤m S we hence get ϕ(ab) = ϕ(a)ϕ(b) ∈ b. Thus ab ∈ ϕ−1(b) which means that ϕ−1(b) is a left-ideal of R again. And in the case that b i S even is an ideal, then by the same reasoning ϕ(ba) = ϕ(b)ϕ(a) ∈ b and hence ba ∈ ϕ−1(b).

(vii) Next assume that a ≤m R is a left-ideal of R and that ϕ is surjective. Then ϕ(a) ≤s S is a sub-semi-ring of S by (vi) for P := a ≤s R. Thus it again remains to prove the following property: Consider any



y ∈ S and x = ϕ(a) ∈ ϕ(a) (where a ∈ a). As ϕ is surjective there is some b ∈ R with y = ϕ(b). And as ba ∈ a we hence find yx = ϕ(b)ϕ(a) = ϕ(ba) ∈ ϕ(a). Thus ϕ(a) ≤m S is a left-ideal of S. And in the case that a i R even is an ideal, then by the same reasoning xy = ϕ(a)ϕ(b) = ϕ(ab) ∈ ϕ(a).

(viii) Let ϕ : R → S be a homomorphism of semi-rings. If ϕ(1R) = 0S then for any a ∈ R we get ϕ(a) = ϕ(1Ra) = 0Sϕ(a) = 0S and hence ϕ = 0. Else we have ϕ(1R) = ϕ(1R1R) = ϕ(1R)ϕ(1R) and dividing by 0 ≠ ϕ(1R) (i.e. multiplying by ϕ(1R)−1) we hence find 1S = ϕ(1R).

(ix) As we have seen in (v) kn(ϕ) i R is an ideal of R. And as R is a skew-field, it has precisely two ideals: 0 and R. In the first case kn(ϕ) = R we trivially have ϕ = 0. And in the second case kn(ϕ) = 0 we will soon see that ϕ is injective.

□

Proof of (1.72):

If b ∈ b ⊆ a + b then ϱ(b) = b + a ∈ (a + b)/a is clear and hence ϱ(b) ⊆ (a + b)/a. Conversely for any a ∈ a and b ∈ b we get (a + b) + a = b + a = ϱ(b) and hence (a + b)/a ⊆ ϱ(b) as well.

Now consider any r ∈ ϱ−1(ϱ(b)) that is ϱ(r) ∈ ϱ(b). That is there is some b ∈ b such that r + a = ϱ(r) = ϱ(b) = b + a. That is r − b ∈ a and hence r − b = a for some a ∈ a. In other words r = a + b ∈ a + b which proves ϱ−1(ϱ(b)) ⊆ a + b. Conversely consider any a ∈ a and b ∈ b. Then ϱ(a + b) = a + b + a = b + a = ϱ(b) ∈ ϱ(b) and hence a + b ∈ ϱ−1(ϱ(b)).

□

Proof of (1.74):

(i) By definition ϕ is surjective iff any x ∈ S can be reached as x = ϕ(a) for some a ∈ R. And this is just S ⊆ ϕ(R) = im(ϕ), or equivalently S = im(ϕ). And if ϕ is injective then there can be at most one a ∈ R such that ϕ(a) = 0S. And as ϕ(0R) = 0S this implies kn(ϕ) = { 0R }. Conversely suppose kn(ϕ) = { 0R } and consider a, b ∈ R with ϕ(a) = ϕ(b). Then 0S = ϕ(a) − ϕ(b) = ϕ(a − b). Hence a − b ∈ kn(ϕ) such that a − b = 0R by assumption. Hence ϕ is injective.

(ii) The implication (a) =⇒ (b) is clear: just let Ψ := Φ−1 then we have already seen that Ψ is a homomorphism of (semi-)rings again. And the converse (b) =⇒ (a) is clear (in particular Ψ is an inverse mapping of Φ). The implication (b) =⇒ (c) is trivial - just let α := Ψ and β := Ψ. And for the converse (c) =⇒ (b) we simply compute: α = α 𝟙S = α(Φβ) = (αΦ)β = 𝟙R β = β. Thus letting Ψ := α = β we are done.

(iii) Suppose R is a ring, then let f := Φ(1R). If now x ∈ S is any element then we choose a ∈ R such that x = Φ(a). And thereby we get fx = Φ(1R)Φ(a) = Φ(1Ra) = Φ(a) = x and likewise xf = x. Hence f = 1S is the unit element of S and hence S is a ring, too. Further we have seen Φ(1R) = 1S such that Φ is a homomorphism of rings.



Suppose S is a ring, then let e := Φ−1(1S). If now a ∈ R is any element, then Φ(a) = Φ(a)1S = Φ(a)Φ(e) = Φ(ae) and hence a = ae. Likewise we see ea = a and hence e = 1R is the unit element of R. Hence R is a ring, too, and Φ(1R) = 1S such that Φ is a homomorphism of rings.

(iv) Clearly the identity map 𝟙R : R → R : a ↦ a is both bijective and a homomorphism of (semi-)rings for any (semi-)ring (R, +, ·). But this is nothing but the claim 𝟙R : R ≅r R. And if Φ : R → S is an isomorphism of (semi-)rings, then Φ−1 is a bijective (this is clear) homomorphism of (semi-)rings again (this has already been proved). Hence the second implication. And if both Φ : R ≅r S and Ψ : S ≅r T then ΨΦ is bijective and a homomorphism of (semi-)rings (as Φ and Ψ are such). And this has been the third implication.

□

Proof of (1.77):

(i) We first need to check that ϕ̄ is well-defined: thus consider b, c ∈ R such that b + a = c + a. That is c − b ∈ a ⊆ knϕ. And hence 0 = ϕ(c − b) = ϕ(c) − ϕ(b) which is the well-definedness ϕ̄(b + a) = ϕ̄(c + a). Now it is straightforward to see that ϕ̄ also is a homomorphism

ϕ̄((b + a) + (c + a)) = ϕ̄((b + c) + a) = ϕ(b + c) = ϕ(b) + ϕ(c) = ϕ̄(b + a) + ϕ̄(c + a)

ϕ̄((b + a)(c + a)) = ϕ̄((bc) + a) = ϕ(bc) = ϕ(b)ϕ(c) = ϕ̄(b + a) ϕ̄(c + a)

Thus ϕ̄ always is a homomorphism of semi-rings. And if R and S even are rings, then R/a is a ring as well with the unit element 1 + a. Thus if ϕ is a ring-homomorphism, i.e. ϕ(1) = 1, then ϕ̄(1 + a) = ϕ(1) = 1 such that ϕ̄ is a ring-homomorphism, too.

(ii) Let a := kn(ϕ) and Φ := ϕ̄, then by (i) Φ is a well-defined (semi-)ring-homomorphism and the image of Φ certainly is the image of ϕ

imΦ = Φ(R/a) = ϕ(R) = imϕ

Hence the mapping Φ : R/a → im(ϕ) is surjective by definition. But it also is injective (and hence an isomorphism) due to

ϕ(a) = ϕ(b) ⇐⇒ 0 = ϕ(a) − ϕ(b) = ϕ(a − b) ⇐⇒ a − b ∈ kn(ϕ) ⇐⇒ a + kn(ϕ) = b + kn(ϕ)
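An added concrete instance of this first isomorphism theorem: the evaluation homomorphism at 0,

```latex
\varphi : \mathbb{Z}[x] \longrightarrow \mathbb{Z},\quad f \mapsto f(0),
\qquad
\operatorname{kn}(\varphi) = x\,\mathbb{Z}[x],
\qquad
\mathbb{Z}[x]\big/x\,\mathbb{Z}[x] \;\cong_r\; \mathbb{Z}
```

since a polynomial evaluates to 0 at 0 precisely if its constant term vanishes, i.e. if it is a multiple of x.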

(iii) We will first prove that b + R ≤r S is a subring of S: clearly we have 0 = 0 + 0 ∈ b + R and (if R is a subring of S) 1 = 0 + 1 ∈ b + R. Now



let b + a and d + c ∈ b + R (where b, d ∈ b and a, c ∈ R), then

(b + a) + (d + c) = (b + d) + (a + c) ∈ b + R

−(b + a) = (−b) + (−a) ∈ b + R

(b + a)(d + c) = (bd + bc + ad) + ac ∈ b + R

And hence b + R is a subring of S. And as b i S is an ideal of S with b ⊆ b + R it trivially also is an ideal b i b + R. Now let us denote the restriction of the canonical epimorphism σ : S → S/b to R by

ϱ : R → S/b : a ↦ a + b

Then ϱ is a homomorphism with kernel kn(ϱ) = kn(σ) ∩ R = b ∩ R. In particular this proves that b ∩ R i R is an ideal of R (as it is the kernel of a homomorphism). And it is straightforward to see that im(ϱ) = (b + R)/b. [For any a ∈ R we have a ∈ b + R and hence ϱ(a) ∈ (b + R)/b. And if conversely x = b + a ∈ b + R then ϱ(a) = a + b = x + b]. Thus we may apply the first isomorphism theorem (ii) to obtain

R/kn(ϱ) = R/(b ∩ R) ≅r im(ϱ) = (b + R)/b : a + (b ∩ R) ↦ ϱ(a) = a + b
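An added concrete instance (read in the semi-ring sense, as the subring 2Z carries no unit): take S = Z, R = 2Z and b = 3Z; then

```latex
\mathfrak{b} + R = 3\mathbb{Z} + 2\mathbb{Z} = \mathbb{Z},
\qquad
\mathfrak{b} \cap R = 6\mathbb{Z},
\qquad
2\mathbb{Z}\big/6\mathbb{Z} \;\cong\; \mathbb{Z}\big/3\mathbb{Z},
\quad 2 + 6\mathbb{Z} \mapsto 2 + 3\mathbb{Z}
```

and indeed both sides have the three elements { 0̄, 2̄, 4̄ } resp. { 0̄, 1̄, 2̄ }.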

(iv) Suppose c and d ∈ R with c + a = d + a. Then c − d ∈ a ⊆ b and hence c + b = d + b. Therefore we may define the following mapping (which clearly is a homomorphism of (semi-)rings)

ϕ : R/a → R/b : c + a ↦ c + b

And clearly kn(ϕ) = { b + a | b + b = 0 + b } = { b + a | b ∈ b } = b/a.

Hence we immediately obtain (iv) from the first isomorphism theorem.
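An added concrete instance of (iv): for R = Z, a = 12Z ⊆ b = 3Z one obtains

```latex
\bigl(\mathbb{Z}/12\mathbb{Z}\bigr)\big/\bigl(3\mathbb{Z}/12\mathbb{Z}\bigr)
\;\cong_r\; \mathbb{Z}/3\mathbb{Z},
\qquad
(c + 12\mathbb{Z}) + 3\mathbb{Z}/12\mathbb{Z} \;\mapsto\; c + 3\mathbb{Z}
```

which is the familiar "quotient of a quotient" computation in Z.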

□

Proof of (1.91):

(1) We will first prove that a ⊕ b i R ⊕ S is an ideal. As a and b are ideals we have 0 ∈ a and 0 ∈ b and hence 0 = (0, 0) ∈ a ⊕ b. Now consider (a, b) and (p, q) ∈ a ⊕ b. Then we have a + p ∈ a, b + q ∈ b and hence (a, b) + (p, q) = (a + p, b + q) ∈ a ⊕ b. Finally consider any (r, s) ∈ R ⊕ S then ra ∈ a and sb ∈ b such that (r, s)(a, b) = (ra, sb) ∈ a ⊕ b again. And as R and S are commutative rings, so is R ⊕ S such that this has been enough to guarantee that a ⊕ b is an ideal of R ⊕ S.

(2) Next we will prove u = ϱ(u) ⊕ σ(u). If (a, b) ∈ u then it is clear that a = ϱ(a, b) ∈ ϱ(u) and b = σ(a, b) ∈ σ(u) such that (a, b) ∈ ϱ(u) ⊕ σ(u). Conversely if (a, b) ∈ ϱ(u) ⊕ σ(u) then there are some r ∈ R and s ∈ S such that (a, s) ∈ u and (r, b) ∈ u. But as u is an ideal we also have (a, 0) = (1, 0)(a, s) ∈ u again and analogously (0, b) ∈ u. Thereby (a, b) = (a, 0) + (0, b) ∈ u proving the equality.



(3) Now we are ready to prove ideal R ⊕ S = { a ⊕ b | a i R, b i S }. We have already seen in (1) that a ⊕ b is an ideal of R ⊕ S (supposed a and b are ideals). Conversely if u i R ⊕ S is an ideal, then we let a := ϱ(u) and b := σ(u). Then a i R and b i S are ideals, since ϱ and σ are surjective homomorphisms. And u = a ⊕ b by (2).

(4) Now let a i R and b i S. Then we will prove √(a ⊕ b) = √a ⊕ √b. First suppose that (a, b)k = (ak, bk) ∈ a ⊕ b for some k ∈ N. Then ak ∈ a and bk ∈ b such that a ∈ √a and b ∈ √b and thereby (a, b) ∈ √a ⊕ √b. Conversely suppose ak ∈ a and bl ∈ b for some k, l ∈ N. Then we define n := max{ k, l } and thereby get (a, b)n = (an, bn) ∈ a ⊕ b. And this means that (a, b) is contained in the radical of a ⊕ b again.
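An added example for (4): in Z ⊕ Z take a = 4Z and b = 9Z; then

```latex
\sqrt{4\mathbb{Z} \oplus 9\mathbb{Z}}
  = \sqrt{4\mathbb{Z}} \oplus \sqrt{9\mathbb{Z}}
  = 2\mathbb{Z} \oplus 3\mathbb{Z},
\qquad\text{e.g. } (2,3)^2 = (4,9) \in 4\mathbb{Z} \oplus 9\mathbb{Z}
```
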

(5) This enables us to prove srad R ⊕ S = { a ⊕ b | a = √a, b = √b }. First suppose that u i R ⊕ S is a radical ideal and let a := ϱ(u) and b := σ(u) again. Then by (4) we may compute

a ⊕ b = u = √u = √(a ⊕ b) = √a ⊕ √b

This proves that a = √a and b = √b are radical ideals of R and S respectively. And if conversely a and b are radical ideals then a ⊕ b is a radical ideal of R ⊕ S by virtue of

a ⊕ b = √a ⊕ √b = √(a ⊕ b)

(6) Next we will prove spec R ⊕ S = { p ⊕ S | p ∈ spec R } ∪ { R ⊕ q | q ∈ spec S }. If p i R is a prime ideal then we have already proved p ⊕ S i R ⊕ S in (1). And it is clear that p ⊕ S ≠ R ⊕ S as (1, 0) ∉ p ⊕ S. Thus suppose (a, b)(p, q) = (ap, bq) ∈ p ⊕ S. Then ap ∈ p and as p is prime this means a ∈ p or p ∈ p. Without loss of generality we may take p ∈ p and thereby (p, q) ∈ p ⊕ S. Altogether we have seen that p ⊕ S is a prime ideal of R ⊕ S. And analogously one may prove that R ⊕ q is prime, supposed q i S is prime. Thus conversely consider a prime ideal u i R ⊕ S and let p := ϱ(u) and q := σ(u). Clearly (1, 0)(0, 1) = (0, 0) ∈ u. And as u is prime this means (1, 0) ∈ u or (0, 1) ∈ u. We will assume (0, 1) ∈ u, as the other case can be dealt with in complete analogy. Then 1 ∈ q and hence q = S. That is u = p ⊕ S and as u ≠ R ⊕ S this means p ≠ R. Thus consider a and p ∈ R such that ap ∈ p. Then (a, 0)(p, 0) = (ap, 0) ∈ p ⊕ S = u. And as u is prime this means (a, 0) ∈ u or (p, 0) ∈ u (which we assume w.l.o.g.). Thereby p ∈ p, altogether we have found that u = p ⊕ S for some prime ideal p i R.
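An added example for (6): the prime spectrum of Z ⊕ Z is

```latex
\operatorname{spec}(\mathbb{Z} \oplus \mathbb{Z})
= \{\, p\mathbb{Z} \oplus \mathbb{Z} \mid p = 0 \text{ or } p \text{ prime} \,\}
\;\cup\;
\{\, \mathbb{Z} \oplus q\mathbb{Z} \mid q = 0 \text{ or } q \text{ prime} \,\}
```

In particular the zero ideal 0 ⊕ 0 is not prime, since (1, 0)(0, 1) = (0, 0), matching the argument given in the proof.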

(7) It remains to prove smax R ⊕ S = { m ⊕ S | m ∈ smax R } ∪ { R ⊕ n | n ∈ smax S }. Thus consider a maximal ideal m i R and suppose m ⊕ S ⊆ u i R ⊕ S. Then S = σ(m ⊕ S) ⊆ b := σ(u) i S and hence b = S. Likewise we find m = ϱ(m ⊕ S) ⊆ a := ϱ(u) i R and as m is maximal this implies a = m or a = R. Thus we get u = a ⊕ b = m ⊕ S or u = a ⊕ b = R ⊕ S. And this means that m ⊕ S is a maximal ideal of R ⊕ S. Likewise one can see that R ⊕ n is a maximal ideal for any n i S maximal. Conversely consider a maximal ideal u i R ⊕ S. In particular u is prime and hence - by (6) - u = m ⊕ S or u = R ⊕ n for some prime ideal m i R or n i S respectively. We only regard the case u = m ⊕ S



again, as the other is completely analogous. Thus consider some ideal m ⊆ a i R, then u = m ⊕ S ⊆ a ⊕ S i R ⊕ S. By maximality of u that is u = m ⊕ S = a ⊕ S or a ⊕ S = R ⊕ S. Again this yields m = a or a = R and this means that m is maximal.

□

Proof of (1.79):

(i) Let C denote the intersection C = { a ∈ R | ∀ i ∈ I : a ∈ Ci }, then it is clear that 0, 1 ∈ C, as any Ci is a positive cone and hence 0, 1 ∈ Ci. Likewise, if a, b ∈ C then for any i ∈ I we have a, b ∈ Ci and hence a + b ∈ Ci and a · b ∈ Ci, as Ci was supposed to be a positive cone. Yet this again means a + b ∈ C and a · b ∈ C, such that C is a positive cone again.

(ii) It remains to show that cone(A) is well-defined. However, as R itself is a positive cone and A ⊆ R, R is an element of the defining set { C ⊆ R | A ⊆ C, C is a positive cone }. In particular this set is non-empty and hence the intersection over all its elements exists.

(iii) Recall the definitions of the image ϕ(C) = { ϕ(a) | a ∈ C } and the preimage ϕ−1(D) = { a ∈ R | ϕ(a) ∈ D }. Now, as 0 = ϕ(0) and 1 = ϕ(1) we get both 0, 1 ∈ ϕ(C) and 0, 1 ∈ ϕ−1(D).

So given any u, v ∈ ϕ(C) there are some a, b ∈ C such that u = ϕ(a) and v = ϕ(b). Thereby u + v = ϕ(a) + ϕ(b) = ϕ(a + b) ∈ ϕ(C) and likewise u · v = ϕ(a) · ϕ(b) = ϕ(a · b) ∈ ϕ(C) again. Altogether this means that ϕ(C) is a positive cone again.

Similarly, if a, b ∈ ϕ−1(D) we have ϕ(a), ϕ(b) ∈ D and therefore ϕ(a + b) = ϕ(a) + ϕ(b) ∈ D which implies a + b ∈ ϕ−1(D). In complete analogy we find ϕ(a · b) = ϕ(a) · ϕ(b) ∈ D and hence a · b ∈ ϕ−1(D).

□

Proof of (1.80):

In (i) to (iv) there is nothing to prove - everything is either trivial or well-known. But we wish to give a short proof for (v): First of all, we have 0 ≤ 0 due to the reflexivity (O1) of ≤. Furthermore 0 ≤ 1 is property (P1) of ≤. Thereby 0, 1 ∈ R+. If now a, b ∈ R+, then 0 ≤ a and 0 ≤ b and hence 0 ≤ a + b and 0 ≤ a · b due to properties (P3) and (P4). But this again means a + b and a · b ∈ R+ such that R+ is a positive cone. Suppose a ∈ R+ ∩ (−R+), that is 0 ≤ a and 0 ≤ −a. The latter is 0 ≤ 0 − a and hence a ≤ 0 due to (P2). But as also 0 ≤ a the anti-symmetry (O2) implies a = 0.

Claim number (vi) is fairly easy to see: first of all 0 = 0² and 1 = 1² such that 0, 1 ∈ ∆ is satisfied. Property (C2) of cones is satisfied trivially and for property (C3) we have to regard a = a1² + · · · + am² and b = b1² + · · · + bn² ∈ ∆.

Then it is clear that also ab ∈ ∆, as

ab = ∑i=1..m ∑j=1..n ai² bj² = ∑(i,j) (ai bj)²
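A numerical check of this identity in Z, added for illustration: with a = 1² + 2² = 5 and b = 2² + 3² = 13 one gets

```latex
ab = 65 = (1\cdot 2)^2 + (1\cdot 3)^2 + (2\cdot 2)^2 + (2\cdot 3)^2 = 4 + 9 + 16 + 36
```

so the product of two sums of squares is again a sum of squares, with one square (ai bj)² per pair of indices.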



We will now prove (vii) that R[x]+ is a proper cone: As 0 ∈ R+ and lc(0) = 0 we get 0 ∈ R[x]+ as well. Likewise 1 ∈ R[x]+. If now f and g are two polynomials in R[x]+ then we have to show that f + g and f · g are in R[x]+

again. If one of them is zero - say f = 0 - then f + g = g and f · g = 0 and hence there is nothing to prove. Otherwise let

f(x) = amx^m + · · · + akx^k

g(x) = bnx^n + · · · + blx^l

By assumption f, g ∈ R[x]+ we have am and bn ≥ 0. Therefore, if m = n then lc(f + g) = am + bn ≥ 0. And if m ≠ n, say m < n, then lc(f + g) = bn ≥ 0. In any case we get lc(f + g) ≥ 0 and hence f + g ∈ R[x]+ again. Also by definition of the multiplication in R[x] we have

(f · g)(x) = ∑u≥0 ( ∑p+q=u f[p] · g[q] ) x^u

Thus if u > m + n then either p > m or q = u − p > n, in any case we have f[p] · g[q] = 0. And for the same reason all combinations (p, q) with p + q = u for u = m + n yield f[p] · g[q] = 0 except f[m] · g[n] = am · bn ≠ 0, as R is an integral domain. Hence lc(f · g) = am · bn and as both have been positive, so is lc(f · g) ≥ 0. And this again means f · g ∈ R[x]+ such that R[x]+ is a positive cone as well. To prove that R[x]+ is proper consider some f ∈ R[x] such that f ∈ R[x]+ and −f ∈ R[x]+. Let a := lc(f) then this means a ≥ 0 and −a ≥ 0. But due to (v) this implies a = 0 (see above). And the only polynomial with lc(f) = 0 is f = 0.
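An added example of this cone: in Z[x] the polynomial f(x) = x² − 5x has lc(f) = 1 ≥ 0, hence

```latex
f = x^2 - 5x \in \mathbb{Z}[x]^+ \quad\text{although}\quad f(1) = -4 < 0
```

so positivity with respect to R[x]+ is determined by the leading coefficient alone, not by the values of f.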

In complete analogy to the arguments above one can show that R[x]− isa proper, positive cone of R[x], as well.

□

Proof of (1.85):

First of all for any a ∈ R we have a − a = 0 ∈ C and hence (O1). Also if a ≤ b and b ≤ c this means b − a and c − b ∈ C and hence c − a = (c − b) + (b − a) ∈ C such that (O2). In particular, if a ≤ b and b ≤ a then b − a and −(b − a) = a − b ∈ C, but as C is proper this means b − a = 0 and hence (O3). As 1 ∈ C we have 0 ≤ 1 which is (P1), (P2) is satisfied trivially: a ≤ b ⇐⇒ b − a ∈ C ⇐⇒ (b − a) − 0 ∈ C ⇐⇒ 0 ≤ b − a. And at last, for any a and b ∈ R with 0 ≤ a and 0 ≤ b we have a, b ∈ C such that a + b and ab ∈ C which again means a + b resp. ab ≥ 0.

□

Proof of (1.86):

We have already proved that, given a proper positive cone C, we can define a positive order a ≤ b by letting b − a ∈ C. And conversely, if ≤ is a positive order on R, then R+ = {a ∈ R | a ≥ 0} is a proper positive cone in R. Hence both of these maps are well-defined. It remains to show that they are also mutually inverse.

Let us first start with a proper positive cone C; then we have to show that R+ = C. By definition a ≥ 0 means a − 0 = a ∈ C, so this is true already. Conversely, given a positive order ≤, we have to show that a ≤ b is equivalent to b − a ∈ R+. But by definition this is b − a ≥ 0, such that this is guaranteed by property (P2) of positive orders.

□

Proof of (1.83):

(i) If a ≥ 0 then |a| = a ≥ 0 is clear. Otherwise, if a ≤ 0, then |a| = −a, but as a ≤ 0, (P6) implies −a ≥ 0 and hence |a| ≥ 0.

(ii) If a ≥ 0 then |a| = a ≥ a is clear. Otherwise, if a ≤ 0, then |a| = −a, but as a ≤ 0, (P6) implies −a ≥ 0 ≥ a and hence |a| ≥ a.

(iii) If a ≥ 0 then |a| = a and hence |a|^2 = a^2 is clear. Otherwise, if a ≤ 0, then |a| = −a, such that |a|^2 = (−a)^2 = a^2 again.

(iv) We have to consider three cases: (1) if a > 0 then |a| = a and sgn(a) = 1, such that |a| = a = 1 · a = sgn(a)a. (2) if a = 0, then |a| = 0 and sgn(a) = 0, such that |a| = 0 = 0 · 0 = sgn(a)a. (3) lastly, if a < 0 then |a| = −a and sgn(a) = −1, such that |a| = −a = (−1) · a = sgn(a)a. Thus the equality holds true in any of these three cases.

(ix) If a ≤ b then (as 0 ≤ a) we get a^2 ≤ ab due to (P8). Likewise (as 0 ≤ b as well) we find ab ≤ b^2. Thus a^2 ≤ ab ≤ b^2, which implies a^2 ≤ b^2 due to the transitivity of ≤. Conversely we start with a^2 ≤ b^2. Suppose we had a > b; then in particular b ≤ a, such that b^2 ≤ a^2 ≤ b^2. This implies a^2 = b^2 and hence 0 = b^2 − a^2 = (b − a)(b + a). As R is an integral domain this would mean a = b (in contradiction to a > b) or a + b = 0. But then 0 ≤ a = −b ≤ 0 implies a = b = 0, in contradiction to a > b. Thus a > b is false, which only leaves a ≤ b.

(viii) If a = 0 or b = 0 then ab = 0 and hence sgn(ab) = 0 = sgn(a)sgn(b). Now suppose a ≠ 0 and b ≠ 0 and distinguish 4 cases: (1) if a > 0 and b > 0 then ab > 0 due to (P4) and hence sgn(ab) = 1 = 1 · 1 = sgn(a)sgn(b). (2) if 0 < a and b < 0 then ab < 0 due to (P9), such that sgn(ab) = −1 = 1 · (−1) = sgn(a)sgn(b). (3) the case a < 0 and b > 0 is completely analogous to case (2), by interchanging a and b. (4) if a < 0 and b < 0 then −a > 0 and −b > 0. Now by (1) we get sgn(ab) = sgn((−a)(−b)) = 1 = (−1)(−1) = sgn(a)sgn(b). Thus the equality holds true in any of these cases.

(v) Once (iv) and (viii) have been established this is straightforward: |ab| = sgn(ab)ab = sgn(a)a · sgn(b)b = |a| · |b|.

(vi) Due to (iii) we know |a + b|^2 = (a + b)^2 = a^2 + 2ab + b^2. But according to (ii) ab ≤ |ab|, such that |a + b|^2 = a^2 + 2ab + b^2 ≤ a^2 + 2|ab| + b^2. We now invoke (v) and (iii) again to find |a + b|^2 ≤ a^2 + 2|ab| + b^2 = |a|^2 + 2|a||b| + |b|^2 = (|a| + |b|)^2. As both |a| and |b| ≥ 0, we can finally use (ix) to deduce |a + b| ≤ |a| + |b|, as claimed.

(vii) First of all |a| = |b + (a − b)| ≤ |b| + |a − b| according to (vi). And from this we get |a| − |b| ≤ |a − b|. Now in complete analogy we have |b| = |a + (b − a)| ≤ |a| + |b − a| and hence −(|a| − |b|) = |b| − |a| ≤ |b − a| = |a − b|. Combining these two cases we see ||a| − |b|| ≤ |a − b|.

17 Proofs - Rings and Modules

□
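The identities of (1.83) can be spot-checked over ℤ, the prototypical ordered integral domain; the exhaustive small-range loop below is our own illustration, not part of the proof:

```python
# Spot-check of the ordered-ring identities (iv)-(viii) over Z.
def sgn(a):
    return (a > 0) - (a < 0)

for a in range(-5, 6):
    for b in range(-5, 6):
        assert abs(a) == sgn(a) * a                  # (iv)
        assert sgn(a * b) == sgn(a) * sgn(b)         # (viii)
        assert abs(a * b) == abs(a) * abs(b)         # (v)
        assert abs(a + b) <= abs(a) + abs(b)         # (vi)
        assert abs(abs(a) - abs(b)) <= abs(a - b)    # (vii)
print("all identities hold on the sample")
```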

Proof of (1.87):

A word in advance: we have N = R+ ∩ nzd R. It is well known (and easy to see) that the set nzd R of non-zero-divisors of R is multiplicatively closed. And R+ is multiplicatively closed due to (1) and (4). Hence N is multiplicatively closed, as an intersection of such sets.

To begin with we have to prove the well-definedness of the relation. Thus we consider a/m = a′/m′ and b/n = b′/n′. That is, u(am′ − a′m) = 0 for some u ∈ N. In particular u is a non-zero-divisor and hence am′ = a′m. Likewise bn′ = b′n. We now have to show that a′/m′ ≤ b′/n′ implies (and by symmetry hence already is equivalent to) a/m ≤ b/n. By definition

a′/m′ ≤ b′/n′ ⟺ a′n′ ≤ b′m′ ⟺ 0 ≤ b′m′ − a′n′

Multiplying the latter inequality by mn ≥ 0 we get 0 ≤ mn(b′m′ − a′n′) = b′nmm′ − a′mnn′ = bn′mm′ − am′nn′. This again yields am′nn′ ≤ bn′mm′, which once again turns into

a(nm′n′) ≤ (bm′n′)m ⟺ a/m ≤ (bm′n′)/(nm′n′) = b/n

Thus we truly got a/m ≤ b/n and hence the well-definedness of ≤. In a next step we have to prove that this relation also is a partial order. The reflexivity a/m ≤ a/m is clear, as am ≤ am. The anti-symmetry is easy as well: consider a/m ≤ b/n such that b/n ≤ a/m as well. This means an ≤ bm and bm ≤ an, such that an = bm. This again is a/m = b/n. It remains to verify the transitivity: consider a/m ≤ b/n and b/n ≤ c/u. That is an ≤ bm and bu ≤ cn. As u and m ≥ 0 we find aun ≤ bum and bum ≤ cnm, such that aun ≤ cnm. This again yields a/m ≤ (cn)/(un) = c/u.

Now we're ready for the actual topic of the claim: the positivity of the partial order ≤. First note that by definition we have

0 ≤ b/n ⟺ 0 ≤ b

Thereby property (1) is obvious, as 0/1 ≤ 1/1 ⟺ 0 ≤ 1. And for (2) we consider a/m ≤ b/n, which is an ≤ bm and hence 0 ≤ bm − an. As we have just seen, this is 0 ≤ (bm − an)/(mn) = b/n − a/m, such that the equivalence is established. For properties (3) and (4) we are given a/m and b/n ≥ 0. That is a, b ≥ 0, hence ab ≥ 0 and thus (a/m)(b/n) = (ab)/(mn) ≥ 0. Likewise (as m, n ≥ 0 as well) we have an + bm ≥ 0, such that a/m + b/n = (an + bm)/(mn) ≥ 0.

□
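For R = ℤ with N the positive integers, the order on the localization defined above reduces to the familiar order on ℚ: a/m ≤ b/n iff an ≤ bm. Python's `fractions.Fraction` lets us spot-check this agreement (an illustration of ours):

```python
# Check that the cross-multiplication order of the proof agrees with the
# ordinary order on Q for a few sample fractions.
from fractions import Fraction

for (a, m, b, n) in [(3, 4, 5, 6), (-2, 3, 1, 7), (5, 2, 5, 2)]:
    assert (a * n <= b * m) == (Fraction(a, m) <= Fraction(b, n))
print("order agrees with Q on the sample")
```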

Proof of (1.92):

(i) From 1 = e + f ∈ a + b ⊴i R it is clear that a + b = R. Now consider any x ∈ a ∩ b. That is, there are a and b ∈ R such that x = ae = bf. Then we get xe = (bf)e = b(fe) = 0 and xf = (ae)f = a(ef) = 0. And thereby x = x · 1 = x(e + f) = xe + xf = 0, such that a ∩ b = {0}.


(ii) It is clear that Φ : x ↦ (x + a, x + b) is a well-defined homomorphism of rings (as its components are the canonical epimorphisms x ↦ x + a and x ↦ x + b). Now suppose (x + a, x + b) = Φ(x) = 0 = (0 + a, 0 + b). This means x ∈ a and x ∈ b and hence x ∈ a ∩ b = {0}. Thus we have x = 0, which implies kn(Φ) = {0}, which means that Φ is injective. It remains to prove that Φ also is surjective: since a + b = R there are some e ∈ a and f ∈ b such that e + f = 1. Now suppose we are given any (y + a, z + b). Then let x := yf + ze and compute

Φ(x) = (yf + ze + a, yf + ze + b)
     = (yf + a, ze + b)
     = (yf + ye + a, ze + zf + b)
     = (y(f + e) + a, z(e + f) + b)
     = (y + a, z + b)

(iii) We will now prove the Chinese remainder theorem in several steps. (1) Let us first consider the following homomorphism of rings

ϕ : R → ⊕_{i=1}^n R/a_i : x ↦ (x + a_i)

We now compute the kernel of ϕ: ϕ(x) = 0 = (0 + a_i) holds true if and only if for any i ∈ 1 . . . n we have x + a_i = 0 + a_i. And of course this is equivalent to x ∈ a_i for any i ∈ 1 . . . n. And again this can be put as x ∈ a_1 ∩ · · · ∩ a_n. Hence we found kn(ϕ) = a. Now we aim for the surjectivity of ϕ: (2) As a first approach we prove the existence of a solution x ∈ R of the system ϕ(x) = (1 + a_1, 0 + a_2, . . . , 0 + a_n). Since a_1 + a_j = R for any j ∈ 2 . . . n we also find a_1 + a_2 · · · a_n = R due to (1.56.(iii)). That is, there are elements e_1 ∈ a_1 and f_1 ∈ a_2 · · · a_n such that e_1 + f_1 = 1. Hence we find f_1 + a_j = 0 + a_j for any j ∈ 2 . . . n. And also f_1 + a_1 = (1 − e_1) + a_1 = 1 + a_1. That is, x := f_1 does the trick. (3) Analogous to (2) we find f_2, . . . , f_n ∈ R such that for any j ∈ 1 . . . n we obtain the system

ϕ(f_j) = (δ_{i,j} + a_i)

(4) Thus if we are now given any a_1, . . . , a_n ∈ R then we let x := a_1 f_1 + · · · + a_n f_n and thereby obtain

x + a_i = ∑_{j=1}^n a_j f_j + a_i = ∑_{j=1}^n a_j δ_{i,j} + a_i = a_i + a_i

Hence ϕ(x) = (a_i + a_i) and as the a_i have been arbitrary this means that ϕ is surjective. (5) Thus (1.77.(ii)) yields the desired isomorphy.

□
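The constructive step (4) becomes very concrete for R = ℤ. The function below is our own numerical sketch: for pairwise coprime moduli n_i it builds elements f_j with f_j ≡ δ_ij (mod n_i) and combines them exactly as in the proof:

```python
# Chinese remainder theorem in Z, following step (4) of the proof above.
from math import prod

def crt(residues, moduli):
    """Solve x ≡ residues[i] (mod moduli[i]) for pairwise coprime moduli."""
    n = prod(moduli)
    x = 0
    for a_j, n_j in zip(residues, moduli):
        m = n // n_j                  # lies in the product of the other ideals
        f_j = m * pow(m, -1, n_j)     # f_j ≡ 1 (mod n_j) and ≡ 0 (mod n_i), i ≠ j
        x += a_j * f_j
    return x % n

print(crt([2, 3, 2], [3, 5, 7]))   # → 23
```

Indeed 23 ≡ 2 (mod 3), 23 ≡ 3 (mod 5) and 23 ≡ 2 (mod 7).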

Proof of (3.44):

(ii) If x, y ∈ kn(ϕ) then ϕ(ax) = α(a)ϕ(x) = α(a) · 0 = 0 and hence ax ∈ kn(ϕ) for any a ∈ R. Likewise ϕ(x + y) = ϕ(x) + ϕ(y) = 0 + 0 = 0, such that x + y ∈ kn(ϕ) as well. Hence kn(ϕ) is an R-submodule of M. Now we use the assumption that α is surjective and consider any u, v ∈ im ϕ, that is u = ϕ(x) and v = ϕ(y) for some x, y ∈ M. Then u + v = ϕ(x) + ϕ(y) = ϕ(x + y) ∈ im ϕ. And if b ∈ S then, due to the surjectivity of α, we may choose some a ∈ R with b = α(a). Then bu = α(a)ϕ(x) = ϕ(ax) ∈ im ϕ again, and hence im(ϕ) is an S-submodule of N.

(iii) ϕ is surjective iff for any y ∈ N there is some x ∈ M such that ϕ(x) = y. In other words, that is N ⊆ im ϕ. But as im(ϕ) ⊆ N is true in any case, this is equivalent to im(ϕ) = N.

Likewise ϕ is injective iff for any x, y ∈ M we get the implication ϕ(x) = ϕ(y) ⟹ x = y. But since ϕ is R-linear, this can also be put as ϕ(y − x) = 0 ⟹ y − x = 0. Let us now substitute p = y − x; then this is equivalent to: ϕ(p) = 0 ⟹ p = 0 for any p ∈ M. In other words, that is kn(ϕ) ⊆ {0}. But as {0} ⊆ kn(ϕ) is true in any case, this is equivalent to kn(ϕ) = {0}.

(iv) It suffices to check the general case, as the case of R-modules is included via α = β = 𝟙, that is by leaving out α and β. Consider any x, y ∈ L and any a ∈ R; then it is easy to compute ψϕ(ax + y) = ψ(ϕ(ax + y)) = ψ(α(a)ϕ(x) + ϕ(y)) = β(α(a))ψ(ϕ(x)) + ψ(ϕ(y)) = βα(a)ψϕ(x) + ψϕ(y).

(v) We have to prove that ϕ + ψ and aϕ are homomorphisms of R-modules again. To do this consider any x, y ∈ M and b ∈ R; then it is easy to compute (ϕ + ψ)(bx + y) = ϕ(bx + y) + ψ(bx + y) = bϕ(x) + ϕ(y) + bψ(x) + ψ(y) = b(ϕ(x) + ψ(x)) + ϕ(y) + ψ(y) = b(ϕ + ψ)(x) + (ϕ + ψ)(y). And as R is commutative, we also get (aϕ)(bx + y) = a(bϕ(x) + ϕ(y)) = abϕ(x) + aϕ(y) = baϕ(x) + aϕ(y) = b(aϕ)(x) + (aϕ)(y).

(vi) We have to prove that ϕ + ψ and aϕ are α-module-homomorphisms again. To do this consider any x, y ∈ M, a ∈ S and b ∈ R; then it is easy to compute (ϕ + ψ)(bx + y) = ϕ(bx + y) + ψ(bx + y) = α(b)ϕ(x) + ϕ(y) + α(b)ψ(x) + ψ(y) = α(b)(ϕ(x) + ψ(x)) + ϕ(y) + ψ(y) = α(b)(ϕ + ψ)(x) + (ϕ + ψ)(y). And as S is commutative, we also get (aϕ)(bx + y) = a(α(b)ϕ(x) + ϕ(y)) = aα(b)ϕ(x) + aϕ(y) = α(b)aϕ(x) + aϕ(y) = α(b)(aϕ)(x) + (aϕ)(y).

(vii) The properties of an R-module-homomorphism are easy to see: if a ∈ R and (a_x), (b_x) ∈ R^(⊕X), then ϱ(a(b_x)) = ϱ((ab_x)) = ∑_x (ab_x)x = a ∑_x b_x x = a ϱ((b_x)), by construction of R^(⊕X); refer to (3.38) for details. Likewise we compute ϱ((a_x) + (b_x)) = ϱ((a_x + b_x)) = ∑_x (a_x + b_x)x = ∑_x a_x x + ∑_x b_x x = ϱ((a_x)) + ϱ((b_x)). And im ϱ = lh_R(X) is clear from the definition of lh_R(X) and the construction of ϱ. Now ϱ is surjective iff lh_R(X) = im ϱ = M, and in the light of (3.39.(ii)) this is precisely the definition that X generates M. Also we have seen that ϱ is injective iff kn(ϱ) = {0}, and by construction of ϱ this means ∑_x a_x x = 0 ⟹ ∀ x ∈ X : a_x = 0. This again is the R-linear independence of X. If ϱ is bijective, then any y ∈ M has a unique representation (a_x) ∈ R^(⊕X) with y = ϱ((a_x)) = ∑_x a_x x. And by (3.39.(vi)) this is equivalent to X being an R-basis of M.


□

Proof of (3.45):

In (i) the one direction is trivial: ϕ = ψ implies ϕ(x) = ψ(x) for any x ∈ X. Conversely, consider any y ∈ M. As X generates M there are some x_i ∈ X and a_i ∈ R such that y = a_1 x_1 + · · · + a_n x_n. Now by the R-linearity of ϕ and ψ we find

ϕ(y) = ϕ( ∑_{i=1}^n a_i x_i ) = ∑_{i=1}^n a_i ϕ(x_i) = ∑_{i=1}^n a_i ψ(x_i) = ψ( ∑_{i=1}^n a_i x_i ) = ψ(y)

And as y ∈ M has been chosen arbitrarily, we have in fact proved ϕ = ψ. So it remains to prove statement (ii). As B is a basis of M it is clear, due to (3.39.(vi)), that the representation y = ∑_x a_x x is unique and hence ϕ is a well-defined map. It also is easy to see that this ϕ is R-linear (see below). But from this we already get the uniqueness, due to (i). As to the R-linearity: given u ∈ R and y = ∑_x a_x x and z = ∑_x b_x x ∈ M we clearly see that

ϕ(uy) = ϕ( ∑_{x∈B} u a_x x ) = ∑_{x∈B} u a_x n(x) = u ∑_{x∈B} a_x n(x) = u ϕ(y)

ϕ(y + z) = ϕ( ∑_{x∈B} (a_x + b_x) x ) = ∑_{x∈B} a_x n(x) + ∑_{x∈B} b_x n(x) = ϕ(y) + ϕ(z)

□
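A minimal numerical instance of (3.45.(ii)), with R = ℚ, M = R^2 and the standard basis; the basis, the prescribed values n(x) and all names below are our own example data. A homomorphism is freely and uniquely determined by its values on the basis:

```python
# Extending prescribed basis values n(x) to a linear map, as in (3.45.(ii)).
import numpy as np

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
values = [np.array([2.0, 1.0]), np.array([-1.0, 3.0])]   # n(x) for x in basis

def phi(y):
    # the coordinates of y w.r.t. the standard basis are just its entries
    return sum(y[i] * values[i] for i in range(len(basis)))

print(phi(np.array([3.0, 2.0])))   # → [4. 9.]
```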

Proof of (3.46):

We have to prove that the push-forward ψ∗ and the pull-back ϕ∗ truly are homomorphisms again. As (i) is contained in (iii) and likewise (ii) is contained in (iv) (as α = β = 𝟙 we just have to omit α and β), we will prove the latter two claims only. For (iii) we pick any a ∈ S and ϕ, ϕ′ ∈ mhom_α(L, M). Then for any x ∈ L we get ψ∗(aϕ + ϕ′)(x) = ψ(aϕ(x) + ϕ′(x)) = β(a)ψϕ(x) + ψϕ′(x) = (β(a)ψϕ + ψϕ′)(x). And as this is true for any x ∈ L we get ψ∗(aϕ + ϕ′) = β(a)ψϕ + ψϕ′ = β(a)ψ∗(ϕ) + ψ∗(ϕ′). For (iv) the reasoning is quite similar: pick any b ∈ T and ψ, ψ′ ∈ mhom_β(L, N). Then for any x ∈ L we get ϕ∗(bψ + ψ′)(x) = (bψ + ψ′)(ϕ(x)) = bψϕ(x) + ψ′ϕ(x) = (bψϕ + ψ′ϕ)(x). And as this is true for any x ∈ L we get ϕ∗(bψ + ψ′) = bψϕ + ψ′ϕ = bϕ∗(ψ) + ϕ∗(ψ′).

□

Proof of (3.50):

(i) The linearity can be seen by a simple computation based on the compositions of R^2: consider a, b, x and y ∈ R, then α((a, b) + (x, y)) = α(a + x, b + y) = (a + x) + (b + y) = (a + b) + (x + y) = α(a, b) + α(x, y). Likewise α(a(x, y)) = α(ax, ay) = ax + ay = a(x + y) = aα(x, y). However, for example in the case R = Z, we have α(2 · 1, 1) = α(2, 1) = 2 + 1 = 3, whereas 2α(1, 1) = 2(1 + 1) = 2 · 2 = 4. That is, in the case R = Z we see that α is definitely not multilinear.

(ii) In this case the multilinearity follows from the definition of the compositions on R^2: µ(a + b, x) = (a + b)x = ax + bx = µ(a, x) + µ(b, x) and µ(ax, y) = (ax)y = a(xy) = aµ(x, y). Likewise we see that µ(a, x + y) = µ(a, x) + µ(a, y) and µ(x, ay) = aµ(x, y), due to the commutativity of R. Hence µ is R-multilinear, but in the case R = Z again definitely not R-linear: µ(2 · (1, 1)) = µ(2, 2) = 2 · 2 = 4, whereas 2µ(1, 1) = 2(1 · 1) = 2 · 1 = 2.

□

Proof of (3.51):

(i) We will first prove the well-definedness of the induced map ϕ̄: if x + P = y + P, then y − x ∈ P ⊆ ϕ^(−1)(Q) and hence ϕ(y) − ϕ(x) = ϕ(y − x) ∈ Q. This again is ϕ(y) + Q = ϕ(x) + Q, which had to be shown. That ϕ̄ is an R-module-homomorphism can be seen by a straightforward computation: let a ∈ R and x, y ∈ M, then

ϕ̄(a(x + P) + (y + P)) = ϕ̄((ax + y) + P) = ϕ(ax + y) + Q
                       = (aϕ(x) + ϕ(y)) + Q
                       = a(ϕ(x) + Q) + (ϕ(y) + Q)
                       = aϕ̄(x + P) + ϕ̄(y + P)

(ii) First of all, let us abbreviate K := kn ϕ. Then ϕ̄ is well-defined, since if x + K = y + K, then k := y − x ∈ K and hence, as ϕ(k) = 0,

ϕ̄(y + K) = ϕ(y) = ϕ(x + k) = ϕ(x) + ϕ(k) = ϕ(x) = ϕ̄(x + K)

That ϕ̄ is an R-module homomorphism can be shown in complete analogy to the argument used in (i):

ϕ̄(a(x + K) + (y + K)) = ϕ̄((ax + y) + K) = ϕ(ax + y) = aϕ(x) + ϕ(y) = aϕ̄(x + K) + ϕ̄(y + K)

By construction we have im ϕ̄ = im ϕ, such that the surjectivity of ϕ̄ is trivial. It suffices to check the injectivity: so let us regard any x, y ∈ M such that ϕ(x) = ϕ̄(x + K) = ϕ̄(y + K) = ϕ(y). That is ϕ(y − x) = ϕ(y) − ϕ(x) = 0 and hence y − x ∈ kn ϕ = K, such that x + K = y + K.

(iii) Consider the canonical projection π : M → M/P given by π(x) = x + P. Let us denote the restriction of π to Q by ϕ, that is ϕ : Q → M/P : x ↦ x + P. Then we get

kn ϕ = {x ∈ Q | x ∈ P} = P ∩ Q
im ϕ = {x + P | x ∈ Q} = (P + Q)/P


So by the 1st isomorphism theorem ϕ induces an isomorphism ϕ̄ : Q/kn ϕ ∼→ im ϕ, given by ϕ̄(x + kn ϕ) = x + P, and this is precisely the isomorphism given in the claim.

(iv) Consider the identity 𝟙 : M → M on M. As P ⊆ Q = 𝟙^(−1)(Q) the identity induces the map ϕ : M/P → M/Q : x + P ↦ x + Q, according to (i). For this homomorphism it is clear that

kn ϕ = {x + P ∈ M/P | x ∈ Q} = Q/P
im ϕ = {x + Q | x ∈ M} = M/Q

So by the 1st isomorphism theorem ϕ induces an isomorphism ϕ̄ : (M/P)/kn ϕ ∼→ im ϕ, given by ϕ̄((x + P) + kn ϕ) = x + Q, and this is precisely the inverse of the isomorphism given in the claim.

□

Proof of (3.47):

It is clear that (M, +) is a commutative group, as we did not change the addition on M. It remains to prove the properties (M) of an R[t]-module. Thus consider any f, g ∈ R[t] and any x, y ∈ M. As ϕ is R-linear, so are its powers ϕ^k, and hence we can compute

1x = t^0 x = ϕ^0(x) = 𝟙(x) = x

f(x + y) = ∑_{k=0}^∞ f[k] ϕ^k(x + y) = ∑_{k=0}^∞ f[k] (ϕ^k(x) + ϕ^k(y))
         = ∑_{k=0}^∞ f[k] ϕ^k(x) + ∑_{k=0}^∞ f[k] ϕ^k(y) = fx + fy

(f + g)x = ∑_{k=0}^∞ (f[k] + g[k]) ϕ^k(x)
         = ∑_{k=0}^∞ f[k] ϕ^k(x) + ∑_{k=0}^∞ g[k] ϕ^k(x) = fx + gx

f(gx) = ∑_{i=0}^∞ f[i] ϕ^i ( ∑_{j=0}^∞ g[j] ϕ^j(x) )
      = ∑_{i=0}^∞ f[i] ∑_{j=0}^∞ g[j] ϕ^i(ϕ^j(x))
      = ∑_{i=0}^∞ ∑_{j=0}^∞ f[i] g[j] ϕ^{i+j}(x)
      = ∑_{k=0}^∞ ( ∑_{i+j=k} f[i] g[j] ) ϕ^k(x) = (fg)x

□
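A concrete instance (our own, not the text's) of this R[t]-module structure: take R = ℚ, M = R^2 and let ϕ be a fixed linear map given by a matrix; a polynomial f then acts by f·x = ∑_k f[k] ϕ^k(x):

```python
# The R[t]-module action induced by an R-linear map ϕ, as in (3.47).
import numpy as np

def poly_action(coeffs, phi, x):
    """Apply f(ϕ) to x, where coeffs[k] = f[k] (lowest degree first)."""
    result = np.zeros_like(x, dtype=float)
    power = np.eye(phi.shape[0])        # ϕ^0 = identity
    for c in coeffs:
        result += c * (power @ x)
        power = power @ phi
    return result

phi = np.array([[0.0, 1.0], [0.0, 0.0]])   # a nilpotent linear map
x = np.array([1.0, 2.0])
# f(t) = 3 + 2t acts by 3x + 2ϕ(x)
print(poly_action([3.0, 2.0], phi, x))     # → [7. 6.]
```

The compatibility f(gx) = (fg)x verified above corresponds to the fact that applying two polynomials in succession equals applying their product.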

Proof of (3.21):


• We will first verify that the exterior direct product P := ∏_i M_i is an R-module again (under the operations given). The well-definedness of the operations is clear. Thus consider x = (x_i), y = (y_i) and z = (z_i) ∈ P. Then x + (y + z) = (x_i + (y_i + z_i)) = ((x_i + y_i) + z_i) = (x + y) + z, which is the associativity. Likewise x + y = (x_i + y_i) = (y_i + x_i) = y + x, which is the commutativity. Letting 0 := (0_i), where 0_i ∈ M_i is the zero-element, we find x + 0 = (x_i + 0_i) = (x_i) = x, such that 0 ∈ P is the zero-element. And the negative of x is −x := (−x_i) ∈ P, as x + (−x) = (x_i + (−x_i)) = (0_i) = 0. Now consider any a, b ∈ R; then a(x + y) = (a(x_i + y_i)) = ((ax_i) + (ay_i)) = (ax_i) + (ay_i) = (ax) + (ay). Likewise (a + b)x = ((a + b)x_i) = ((ax_i) + (bx_i)) = (ax_i) + (bx_i) = (ax) + (bx) and (ab)x = ((ab)x_i) = (a(bx_i)) = a(bx_i) = a(bx). Finally 1x = (1x_i) = (x_i) = x, such that altogether P is an R-module again.

• Let us now verify that the direct sum S := ⊕_i M_i is an R-submodule of the direct product P. Thus we only have to prove that S is closed under the operations of P. However, for any a ∈ R and any x, y ∈ S it is clear that supp(ax) ⊆ supp(x) and supp(x + y) ⊆ supp(x) ∪ supp(y). Hence #supp(ax) ≤ #supp(x) < ∞ and #supp(x + y) ≤ #supp(x) + #supp(y) < ∞. Thereby ax and x + y ∈ S again.

• Now consider the R-module-homomorphisms ϕ_i : M_i → N. By construction of S the map ϕ := ⊕_i ϕ_i : S → N is well-defined. And it also is an R-module-homomorphism, as for any a ∈ R and for any x = (x_i), y = (y_i) ∈ S we get ϕ(ax) = ∑_i ϕ_i(ax_i) = ∑_i a ϕ_i(x_i) = a ϕ(x) and ϕ(x + y) = ∑_i ϕ_i(x_i + y_i) = ∑_i (ϕ_i(x_i) + ϕ_i(y_i)) = ϕ(x) + ϕ(y).

• (a) ⟹ (b): by definition of the arbitrary sum of the submodules P_i of M and by the assumption M = ∑_i P_i we obtain the identity

M = { ∑_{i∈Ω} x_i | Ω ⊆ I finite, x_i ∈ P_i }

Thus if we are given any x ∈ M there are finitely many x_i ∈ P_i (i ∈ Ω) such that x = ∑_i x_i. Letting x_i := 0 for i ∈ I \ Ω we obtain an extension (x_i) ∈ ⊕_i P_i satisfying x = ∑_i x_i. This proves the existence of such a representation. For the uniqueness suppose ∑_i x_i = ∑_i y_i for some (x_i), (y_i) ∈ ⊕_i P_i. Fix any j ∈ I and let

x̂_j := ∑_{i≠j} x_i and ŷ_j := ∑_{i≠j} y_i ∈ ∑_{i≠j} P_i

Then we find x_j − y_j = ŷ_j − x̂_j; but as x_j − y_j ∈ P_j and ŷ_j − x̂_j ∈ ∑_{i≠j} P_i we have x_j − y_j ∈ P_j ∩ ∑_{i≠j} P_i = 0, such that x_j = y_j. But as j ∈ I has been arbitrary, this means (x_i) = (y_i), which also is the uniqueness.

• (b) ⟹ (a): consider any x ∈ M; by assumption (b) there is some (x_i) ∈ ⊕_i P_i such that x = ∑_i x_i. And as only finitely many x_i are non-zero, this means x = ∑_i x_i ∈ ∑_i P_i. And as x has been arbitrary, this already is M = ∑_i P_i. Now consider any j ∈ I and any x ∈ P_j ∩ ∑_{i≠j} P_i. As x ∈ P_j we may let x_j := x ∈ P_j and x_i := 0 ∈ P_i (for i ≠ j) to find x = ∑_i x_i. On the other hand we have x ∈ ∑_{i≠j} P_i, that is there are some x̂_i ∈ P_i such that x = ∑_{i≠j} x̂_i. Finally let x̂_j := 0; then altogether this is ∑_i x̂_i = ∑_{i≠j} x̂_i = x = ∑_i x_i, and due to the uniqueness this means x̂_i = x_i for any i ∈ I. But by choice of the x_i this is x̂_i = 0 for any i ≠ j and hence x = ∑_{i≠j} x̂_i = ∑_{i≠j} 0 = 0, which had to be shown.

□

Proof of (3.22):

Let us again abbreviate the exterior direct product by P := ∏_i M_i and the exterior direct sum by S := ⊕_i M_i. It is immediately clear that π_j : P → M_j is an R-module-homomorphism: π_j(ax + y) = π_j((ax_i + y_i)) = ax_j + y_j = aπ_j(x) + π_j(y). Analogously it is clear that ι_j is an R-module homomorphism. Also π_j ι_j(x_j) = π_j((δ_{i,j} x_j)) = δ_{j,j} x_j = x_j, such that π_j ι_j = 𝟙. And from this it immediately follows that π_j is surjective and that ι_j is injective. Note that the latter could have also been seen directly. Now consider any x = (x_i) ∈ P; then by definition π_i(x) = x_i, such that x = (x_i) = (π_i(x)). Conversely consider x = (x_i) ∈ S; then by definition ι_j(x_j) = (δ_{i,j} x_j) (where the index runs over all i ∈ I). Therefore a straightforward computation yields

∑_{j∈I} ι_j(x_j) = ∑_{j∈I} (δ_{i,j} x_j) = ( ∑_{j∈I} δ_{i,j} x_j ) = (x_i) = x

□

Proof of (3.26):

(i) It is easy to see that the map given is a homomorphism: (x_i) + (y_i) = (x_i + y_i) ↦ ∑_i (x_i + y_i) = (∑_i x_i) + (∑_i y_i) and a(x_i) = (ax_i) ↦ ∑_i (ax_i) = a(∑_i x_i). And the bijectivity is just a reformulation of property (b) of inner direct sums.

(ii) Let us abbreviate the exterior direct sum by M := ⊕_i M_i. That is, any x ∈ M is of the form x = (x_i) for some x_i ∈ M_i, of which only finitely many are non-zero. Thus x = (x_i) = ∑_i ι_i(x_i) ∈ ∑_i P_i. And as x has been arbitrary, this means M = ∑_i P_i. Now consider any j ∈ I and any x ∈ P_j ∩ ∑_{i≠j} P_i. As x ∈ P_j = ι_j(M_j) there is some m_j ∈ M_j such that (x_i) := x = ι_j(m_j) = (δ_{i,j} m_j). On the other hand there are p_i ∈ P_i (where i ≠ j) such that x = ∑_{i≠j} p_i. Likewise we choose n_i ∈ M_i such that p_i = ι_i(n_i). If we finally let n_j := 0, then x = ∑_{i≠j} ι_i(n_i) = ∑_i ι_i(n_i) = (n_i). Comparing these two expressions of x we find n_i = x_i = δ_{i,j} m_j = 0 for any i ≠ j. And as also n_j = 0, this means x = (n_i) = (0_i) = 0 ∈ M. Therefore P_j ∩ ∑_{i≠j} P_i = 0, as claimed.

□

Proof of (3.27):

• Clearly the map ϕ ↦ (π_j ϕ ι_i) is well-defined: as π_j and ι_i are R-module homomorphisms, so is the composition π_j ϕ ι_i : M_i → N_j. Conversely (ϕ_{i,j}) ↦ (⊕_i ϕ_{i,j}) is well-defined, too - it just is the Cartesian product of the induced homomorphisms ⊕_i ϕ_{i,j} : ⊕_i M_i → N_j.


• And it is also clear that ϕ ↦ (π_j ϕ ι_i) is a homomorphism of R-modules: as ι_i and π_j are homomorphisms of R-modules, so are the push-forward ϕ ↦ ϕ ι_i and the pull-back ψ ↦ π_j ψ. Combining these we see that ϕ ↦ π_j ϕ ι_i is a homomorphism of R-modules. Thus if we regard any ϕ, ψ ∈ mhom(M, N) and any a ∈ R, then aϕ + ψ ↦ (π_j(aϕ + ψ)ι_i) = (aπ_j ϕ ι_i + π_j ψ ι_i) = a(π_j ϕ ι_i) + (π_j ψ ι_i), as the composition in the direct product was defined to be component-wise.

• We will now prove that these maps are mutually inverse. We start by regarding ϕ ↦ (π_j ϕ ι_i) ↦ ψ := (⊕_i π_j ϕ ι_i). Thus consider an arbitrary x = (x_i) ∈ ⊕_i M_i; then we find

ψ(x) = ( ∑_{i∈I} π_j ϕ ι_i(x_i) ) = ( π_j ϕ ( ∑_{i∈I} ι_i(x_i) ) )

as π_j ϕ is an R-module homomorphism. Now recall the identities x = ∑_i ι_i(x_i) and y = (π_j(y)); inserting these we obtain ψ = ϕ from

ψ(x) = ( π_j ϕ(x) ) = ϕ(x)

• Thus we have to regard (ϕ_{i,j}) ↦ (⊕_i ϕ_{i,j}) ↦ (ψ_{i,j}) := (π_j (⊕_i ϕ_{i,j}) ι_i). We need to verify that for any i ∈ I and j ∈ J we get ϕ_{i,j} = ψ_{i,j}. Thus fix any i, j and x_i ∈ M_i and compute

ψ_{i,j}(x_i) = π_j ( ( ⊕_{a∈I} ϕ_{a,b} )_{b∈J} ( ι_i(x_i) ) )

By definition we have ι_i(x_i) = (δ_{a,i} x_i), where the index runs over a ∈ I. Inserting this into the definition of ⊕_a ϕ_{a,b} we continue

ψ_{i,j}(x_i) = π_j ( ( ∑_{a∈I} ϕ_{a,b}(δ_{a,i} x_i) )_{b∈J} ) = ∑_{a∈I} ϕ_{a,j}(δ_{a,i} x_i)

If a ≠ i then δ_{a,i} x_i = 0 ∈ M_a, such that the respective summand vanishes. Only the summand with a = i remains, which is given by ϕ_{i,j}(δ_{i,i} x_i) = ϕ_{i,j}(x_i). As x_i has been arbitrary, this proves ψ_{i,j} = ϕ_{i,j}, which had to be shown.

• We now prove the claim in the remark to the proposition: that is, we consider ϕ_i := (ϕ_{i,j}) ∈ ⊕_j mhom(M_i, N_j) and want to show that the image of the corresponding map ϕ := (⊕_i ϕ_{i,j}) is contained in ⊕_j N_j. Thus consider any x = (x_i) ∈ ⊕_i M_i; then we have to verify that

y := ϕ(x) = ( ∑_{i∈I} ϕ_{i,j}(x_i) ) ∈ ⊕_{j∈J} N_j

That is, we have to verify that the support of y is finite. To see this let us abbreviate the support of x by Ω := supp(x) ⊆ I, which is a finite set, as x ∈ ⊕_i M_i. Then it is clear that the support of y satisfies the


following inclusions

supp(y) = { j ∈ J | ∑_{i∈I} ϕ_{i,j}(x_i) ≠ 0 }
        ⊆ { j ∈ J | ∃ i ∈ I : ϕ_{i,j}(x_i) ≠ 0 }
        ⊆ { j ∈ J | ∃ i ∈ I : x_i ≠ 0 and ϕ_{i,j} ≠ 0 }
        = { j ∈ J | ∃ i ∈ Ω with ϕ_{i,j} ≠ 0 }
        = ⋃ { supp(ϕ_i) | i ∈ Ω }

However, the latter is a finite (Ω is finite) union of finite (any supp(ϕ_i) was assumed to be finite) subsets of J. Hence it is a finite set itself, which proves y ∈ ⊕_j N_j, which had been claimed.

□
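For finite index sets and M_i = N_j = R, the correspondence ϕ ↔ (π_j ϕ ι_i) of (3.27) specializes to the familiar fact that a homomorphism R^m → R^n "is" its matrix. The sketch below (our own, with R = ℚ) recovers the matrix entry-wise from the compositions π_j ∘ ϕ ∘ ι_i:

```python
# Recovering the matrix of a linear map from the maps π_j ∘ ϕ ∘ ι_i.
import numpy as np

def matrix_of(phi, m, n):
    """Matrix of phi : R^m -> R^n; column i collects π_j(ϕ(ι_i(1))) over j."""
    A = np.zeros((n, m))
    for i in range(m):
        e_i = np.zeros(m)
        e_i[i] = 1.0            # ι_i applied to 1 ∈ R
        A[:, i] = phi(e_i)      # reading off all coordinates π_j at once
    return A

phi = lambda x: np.array([x[0] + 2 * x[1], 3 * x[1]])
print(matrix_of(phi, 2, 2))
# → [[1. 2.]
#    [0. 3.]]
```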

Proof of (7.15):

• (a) ⟹ (b): consider any element f ∈ h and pick up the homogeneous decomposition f = ∑_d f_d. By assumption (a) and as f ∈ h we also find h_d ∈ h ∩ A_d such that f can be decomposed into f = ∑_d h_d. As f_d and h_d ∈ A_d, property (3) of graded algebras implies f_d = h_d ∈ h.

• (b) ⟹ (c): For any h ∈ h let us consider the homogeneous decomposition h = ∑_d h_d and denote the set H := { h_d | h ∈ h, h_d ≠ 0 }. Then it is clear that H ⊆ hom(A), by definition of H. And by assumption (b) we have H ⊆ h. On the other hand any h ∈ h has been decomposed as h = ∑_d h_d ∈ ⟨H⟩_i. And hence h ⊆ ⟨H⟩_i ⊆ h.

• (c) ⟹ (a): Consider any f ∈ h = ⟨H⟩_i; that is, there is a finite subset Ω ⊆ H and elements g_h ∈ A (where h ∈ Ω) such that f = ∑_h g_h h. For any h ∈ Ω let us abbreviate d(h) := deg(h) and pick up homogeneous decompositions f = ∑_d f_d and g_h = ∑_d g_{h,d}. Then we compute

∑_{d∈D} f_d = f = ∑_{h∈Ω} ( ∑_{d∈D} g_{h,d} ) h = ∑_{d∈D} ∑_{h∈Ω} g_{h,d} h

As g_{h,d} and h are homogeneous elements, g_{h,d} h is (either 0 or) a homogeneous element of degree deg(g_{h,d} h) = deg(g_{h,d}) + deg(h) = d + d(h). Thus we can split the sum over (d, h) ∈ D × Ω into fibers (c, h) such that c + d(h) = d. Thereby we find

∑_{d∈D} f_d = ∑_{d∈D} ∑_{c+d(h)=d} g_{h,c} h

So on both sides of this equation we have homogeneous decompositions of f. Now by property (3) of graded algebras this means that (for any d ∈ D) the homogeneous components have to coincide, too. That is

f_d = ∑_{c+d(h)=d} g_{h,c} h ∈ h ∩ A_d


Hence we have obtained that f can be decomposed as a sum of elements f_d contained in h ∩ A_d. As f has been arbitrary, that is h = ∑_d h ∩ A_d. And as h ∩ A_d ⊆ A_d it is also clear that (h ∩ A_d) ∩ (∑_{c≠d} h ∩ A_c) ⊆ A_d ∩ (∑_{c≠d} A_c) = 0. Altogether we have obtained claim (a).

□
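The homogeneous decomposition used throughout this proof can be made tangible in the toy graded algebra A = ℚ[x, y] with D = ℕ and total degree. The dict encoding of a polynomial (exponent pairs mapped to coefficients) is our own:

```python
# Splitting a polynomial into its homogeneous components f_d.
def homogeneous_components(f):
    """Group the terms of f (a dict {(i, j): coeff}) by total degree i + j."""
    comps = {}
    for (i, j), c in f.items():
        comps.setdefault(i + j, {})[(i, j)] = c
    return comps

f = {(2, 0): 1, (1, 1): -2, (0, 1): 5}   # x^2 - 2xy + 5y
print(homogeneous_components(f))
# → {2: {(2, 0): 1, (1, 1): -2}, 1: {(0, 1): 5}}
```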

Proof of (7.16):

(i) Let us write out the homogeneous decomposition 1 = ∑_d 1_d. Then for any homogeneous element h ∈ H, say c := deg(h), we get

h = h · 1 = ∑_{d∈D} h · 1_d

Thereby h · 1_d ∈ A_{c+d} according to property (4). But as h ∈ H is homogeneous we may compare coefficients (this is property (3)) to find h = ∑_{c+d=c} h · 1_d. Yet D has been assumed to be integral, hence c + d = c = c + 0 implies d = 0. Thus for any h ∈ H we have found h = h · 1_0. Now consider any f ∈ A and decompose f = ∑_d f_d into homogeneous elements. Then we compute

f · 1_0 = ∑_{d∈D} f_d · 1_0 = ∑_{d∈D} f_d = f

as the f_d ∈ H ∪ {0} are homogeneous. Thus we have proved f = f · 1_0 for any element f ∈ A. In particular 1 = 1 · 1_0 = 1_0 ∈ A_0.

(ii) Clearly the kernel is an ideal a := kn(ϕ) ⊴i A of A, so we only have to prove the graded property. First of all, it is clear that the submodules a ∩ A_d intersect trivially:

(a ∩ A_c) ∩ ( ∑_{d≠c} a ∩ A_d ) ⊆ A_c ∩ ( ∑_{d≠c} A_d ) = 0

Thus it remains to show that a truly is the sum of the a ∩ A_d. Consider any f ∈ a and decompose f into homogeneous components f = ∑_d f_d, where f_d ∈ A_d. Then we get

0 = ϕ(f) = ∑_{d∈D} ϕ(f_d)

As ϕ was assumed to be graded, we have ϕ(f_d) ∈ B_d again. Thus comparing homogeneous coefficients in B we find ϕ(f_d) = 0 for any d ∈ D. And this again translates into f_d ∈ a, such that f_d ∈ a ∩ A_d. As f has been arbitrary, this also shows

a = ∑_{d∈D} a ∩ A_d

(iii) As a ⊴i A is an ideal and A is an R-algebra, a ⊴a A even is an R-algebra-ideal. And hence the quotient A/a is an R-algebra again.


And by construction we have 0 + a ∉ hom(A/a). Let us now prove the well-definedness of deg : hom(A/a) → D: suppose we are given g and h ∈ A such that g + a = h + a ∈ hom(A/a). That is, f := g − h ∈ a. Let us now denote b := deg(g) and c := deg(h) and decompose f = ∑_d f_d (where f_d ∈ a ∩ A_d by assumption). Then, comparing the homogeneous decompositions of g and of f + h,

∑_{d∈D} { g if d = b; 0 if d ≠ b } = g = f + h = ∑_{d∈D} { f_c + h if d = c; f_d if d ≠ c }

Now suppose we had b ≠ c; then comparing coefficients we would find g = f_b ∈ a, in contradiction to g + a ∈ hom(A/a). Thus we have deg(g) = b = c = deg(h) and hence deg : hom(A/a) → D is well-defined. We will now prove the identity

(A/a)_d = (A_d + a)/a

For the inclusion "⊆" we are given some h + a ∈ (A/a)_d. If thereby h + a = 0 + a then h ∈ a and hence h + a ∈ (A_d + a)/a. If on the other hand h + a ≠ 0 + a then we have deg(h) = d by assumption and hence h ∈ A_d. Altogether h + a ∈ (A_d + a)/a again. Conversely, for "⊇", consider any h ∈ A_d and any f ∈ a. Then we have to show (h + f) + a ∈ (A/a)_d. If h = 0 then (h + f) + a = 0 + a ∈ (A/a)_d is clear. And if h ≠ 0 then deg(h) = d by assumption, such that (h + f) + a = h + a is of degree d as well. Thus we have established the equality, and in particular (A/a)_d is an R-submodule of A/a. Now consider g ∈ A_c, h ∈ A_d and a, b ∈ a. Then (g + a)(h + b) = gh + (ah + gb + ab) ∈ A_{c+d} + a. And as all these elements have been arbitrary, this also proves property (4) of graded algebras, that is (A/a)_c (A/a)_d ⊆ (A/a)_{c+d}. Now

A/a = ∑_{d∈D} (A_d + a)/a

is easy to see: if we are given any f + a ∈ A/a then we may decompose f = ∑_d f_d into homogeneous components. Thereby f + a = ∑_d (f_d + a). This already yields "⊆" and the converse inclusion is clear. Now fix any c ∈ D; then it remains to prove

( (A_c + a)/a ) ∩ ( ∑_{d≠c} (A_d + a)/a ) = 0 + a

Thus consider any f + a contained in the intersection. In particular that is f + a ∈ (A_c + a)/a and hence there is some f_c ∈ A_c such that f + a = f_c + a. And likewise there are f_d ∈ A_d (where d ≠ c) such that f + a = ∑_{d≠c} f_d + a. Thus we find

a := f_c − ∑_{d≠c} f_d ∈ a = ⊕_{d∈D} a ∩ A_d

As a has been assumed to be a graded ideal, we can decompose a into a = ∑_d a_d, where a_d ∈ a ∩ A_d. Comparing the homogeneous coefficients


of a in A we find f_c = a_c ∈ a ∩ A_c. In particular f + a = f_c + a = 0 + a, which had to be shown.

(iv) As U is multiplicatively closed, U⁻¹A is a commutative R-algebra under a(f/u) = (af)/u. And from the construction of hom(U⁻¹A) we also have 0 = 0/1 ∉ hom(U⁻¹A). As D was assumed to be a group, deg : hom(U⁻¹A) → D is a well-defined map. (If D only was a monoid we would have to regard deg : hom(U⁻¹A) → gr D given by deg(h/u) := (deg(h), deg(u)) instead.) Next we prove

(U⁻¹A)_d = { h/u | u ∈ U, h ∈ A_{d+deg(u)} }

The inclusion ” ⊇ ” is easy: the case h/u = 0/1 is clear and ifh/u 6= 0/1 we particularly have h 6= 0. Hence deg(h) = deg(u) + dby assumption on h and hence deg(h/u) = deg(h)− deg(u) = d. Theconverse ”⊆ ” is analogous: the case h/u = 0/1 is clear again, as0 ∈ Ad+deg(u). And if h/u 6= 0/1 then d = deg(h/u) = deg(h)−deg(u)and hence h ∈ Ad+deg(u) again. So we have to verify that (U−1A)dis an R-submodule. Suppose a ∈ R and g/u, h/v ∈ (U−1A)d. Thenag ∈ Ad+deg(u) and hence a(g/u) ∈ (U−1A)d again. Furthermore

g

u+h

v=

gv + hu

uv

As deg(uv) = deg(u) + deg(v) we have gv and hu ∈ Ad+deg(uv) thusgv + hv ∈ Ad+deg(uv) again and this means (gv + hu)/uv ∈ (U−1A)d.So let us now verify property (4) of graded algebras. To do this con-sider g/u ∈ (U−1A)c and h/v ∈ (U−1A)d. Then we have to verify(gh)/(uv) = (g/u)(h/v) ∈ (U−1A)c+d. If (gh)/(uv) = 0 we are done,else we have deg(g) = c + deg(u) and deg(h) = d + deg(v). Thusdeg(gh) = deg(g)+deg(h) = c+deg(u)+d+deg(v) = (c+d)+deg(uv).And this already is (gh/(uv) ∈ (U−1A)c+d. It remains to verify prop-erty (3) of graded algebras. Given any f/u ∈ U−1A let us pick up a ho-mogeneous decomposition f =

∑d fd of f in A. Then f/u =

∑d fd/u

and as fd/u ∈ (U−1A)d−deg(u) and as f has been arbitrary this meansU−1A =

∑d(U

−1A)d. Thus it only remains to prove

(U−1A

)d∩

∑c 6=d

(U−1A

)c

= 0

Thus consider some h/v contained in the intersection. We have toprove h/v = 0/1 thus let us suppose h/v 6= 0/1 and deduce a con-tradiction. As h/v ∈ (U−1A)d that is h ∈ Ad+deg(v). And likewisethere is some finite subset Ω ⊆ D \ c and uc ∈ U , gc ∈ Ac+deg(uc)

(where c ∈ Ω) such that h/v =∑

c gc/uc. Now let u :=∏c uc ∈ U and

uc := u/uc ∈ U then we compute

h

v=

∑c ∈ Ω

gcuc

=1

u

(∑c∈Ω

gcuc

)


That is there is some w ∈ U such that uwh = vw ∑_{c∈Ω} gc ûc. Note that uwh and the vw gc ûc are homogeneous elements of A, where

deg(vw gc ûc) = deg(v) + deg(w) + c + deg(uc) + ∑_{b≠c} deg(ub)
             = deg(v) + deg(w) + c + ∑_{c∈Ω} deg(uc)
             = deg(uvw) + c

Hence all the vw gc ûc have different degrees and hence form a homogeneous decomposition of uwh. This however is homogeneous too, of degree deg(uwh) = deg(uw) + d + deg(v) = deg(uvw) + d. As c ≠ d this is yet another homogeneous decomposition. Hence uwh = 0, which would mean h/v = 0/1, a contradiction.

(v) Consider any f ∈ √a, that is f^k ∈ a for some k ∈ N. If k = 0 then 1 ∈ a and hence a = A. This would imply √a = A, too, which clearly is a graded ideal again. Thus let us consider the case k ≥ 1. Pick up the homogeneous decomposition f = ∑_d fd of f, then by (7.15) it suffices to check fd ∈ √a for any d ∈ N. Now compute

f^k = ∑_{α∈N^k} fα1 · · · fαk = ∑_{d∈N} ∑_{|α|=d} fα1 · · · fαk ∈ a

where |α| := α1 + · · · + αk ∈ N. By definition of graded algebras it is clear that fα1 · · · fαk ∈ A_{|α|} and thus we have found the homogeneous decomposition of f^k. As a is a graded ideal, this implies

∀ d ∈ N : ∑_{|α|=d} fα1 · · · fαk ∈ a

We will now prove that fd ∈ √a by induction on d. For d = 0 we have {α | |α| = 0} = {(0, . . . , 0)} and hence f0^k = ∑_{|α|=0} fα1 · · · fαk ∈ a. Thus suppose d ≥ 1, then we define the following function

µ : {α | |α| = kd} → N : α ↦ min{ αi | i ∈ 1 . . . k }

It is clear that µ(α) ≤ d (as else |α| ≥ k(d + 1) > kd) and that µ(α) = d is equivalent to α = (d, . . . , d) (as else |α| > kd). Hence we get

fd^k = ∑_{µ(α)=d} fα1 · · · fαk = ∑_{|α|=kd} fα1 · · · fαk − ∑_{µ(α)<d} fα1 · · · fαk

(where all three sums run over multi-indices with |α| = kd). But we have already seen from the gradedness of a that the first term (the sum over |α| = kd) is contained in a again. And if µ(α) < d then the induction hypothesis yields f_{µ(α)} ∈ √a. Yet it is clear that f_{µ(α)} | fα1 · · · fαk and hence fα1 · · · fαk ∈ √a again. Thus the second term (the sum over µ(α) < d) is contained in √a, too. And this implies fd^k ∈ √a and hence even fd ∈ √a, which had to be shown.
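To see the statement at work, here is a small example of our own (it does not appear in the text): take the polynomial ring over an integral domain R with its usual grading.

```latex
% A = R[x,y], \deg x = \deg y = 1, R an integral domain.
% The ideal (x^2, y^3) is graded, being generated by homogeneous elements, and
\sqrt{(x^2,\,y^3)} \;=\; (x,\,y)
% since x^2, y^3 \in (x^2,y^3) give x, y \in \sqrt{(x^2,y^3)}, while (x,y) is a
% prime ideal containing (x^2,y^3). The radical is again generated by
% homogeneous elements, hence graded, as (v) asserts.
```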


□

Proof of (7.19):

(i) If h ∈ Ad but h ≠ 0 then the homogeneous decomposition h = ∑_c hc has precisely one non-zero entry h = hd at c = d (by comparing coefficients). Thus deg(h) = max{d} and this clearly is d.

(ii) If f ∈ hom(A) with f ∈ Ad then we have just argued in (i) that deg(f) = max{d} = d. And likewise ord(f) = min{d} = d, which yields ord(f) = deg(f). Conversely suppose d := ord(f) = deg(f); then we use a homogeneous decomposition f = ∑_c fc again. By construction, if c < d = ord(f) or d = deg(f) < c then fc = 0. Thus it only remains f = fd ∈ Ad and hence f ∈ hom(A).

(iii) As ord(f) is the minimum and deg(f) is the maximum over the common set {d ∈ D | fd ≠ 0} it is clear that ord(f) ≤ deg(f). Now let a := deg(f) and b := deg(g). Then (by construction of the degree) the homogeneous decompositions of f and g are of the form

f = ∑_{c≤a} fc  and  g = ∑_{d≤b} gd

Thus the product fg is given by a sum over two indices c ≤ a and d ≤ b. Assorting the (c, d) according to the sum c + d we find that

fg = (∑_{c≤a} fc)(∑_{d≤b} gd) = ∑_{c≤a} ∑_{d≤b} fc gd = ∑_{e∈D} ∑_{c+d=e} fc gd

Note that thereby fc gd ∈ Ac+d = Ae. Thus we have found the homogeneous decomposition of fg again. But as D was assumed to be positive, c ≤ a and d ≤ b implies e = c + d ≤ a + d ≤ a + b. Thus the homogeneous decomposition is of the following form

fg = ∑_{e≤a+b} ( ∑_{c+d=e} fc gd )

And this of course means deg(fg) ≤ a + b = deg(f) + deg(g). The claim for the order can be shown in complete analogy. Just let a := ord(f) and b := ord(g). Then f and g can be written as

f = ∑_{c≥a} fc  and  g = ∑_{d≥b} gd

Thus in complete analogy to the above one finds that the homogeneous decomposition of fg is of the following form (which already yields ord(fg) ≥ a + b = ord(f) + ord(g))

fg = ∑_{e≥a+b} ( ∑_{c+d=e} fc gd )


(iv) We commence with the proof given in (iii), that is a = deg(f) and b = deg(g). In particular fa ≠ 0 and gb ≠ 0. But as A is now assumed to be an integral domain this yields fa gb ≠ 0. We have already found the homogeneous decomposition of fg to be

fg = ∑_{e≤a+b} ( ∑_{c+d=e} fc gd )

So let us take a look at the homogeneous component (fg)_{a+b} of degree a + b ∈ D. Given any c ≤ a and d ≤ b we get the implication

c + d = a + b  ⟹  (c, d) = (a, b)

Because if we had c ≠ a then c < a and hence c + d < a + d ≤ a + b, as D was assumed to be strictly positive. Likewise d ≠ b would imply c + d < a + b in contradiction to c + d = a + b. Thus the homogeneous component (fg)_{a+b} is given to be

(fg)_{a+b} = ∑_{c+d=a+b} fc gd = fa gb ≠ 0

In particular we find deg(fg) ≥ a + b = deg(f) + deg(g) and hence deg(fg) = deg(f) + deg(g). By replacing the above implication with "c ≥ a, d ≥ b and c + d = a + b implies (c, d) = (a, b)" the claim for the order ord(fg) = ord(f) + ord(g) can be proved in complete analogy.
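A quick illustration of (iv), in our own words (not from the text): in A = R[x] over an integral domain R the degree is additive, and the integral-domain hypothesis cannot be dropped.

```latex
% In A = R[x], R an integral domain, with the usual grading:
(1 + x)(1 - x) = 1 - x^2, \qquad \deg = 2 = 1 + 1.
% Over a ring with zero-divisors the claim fails, e.g. in (\mathbb{Z}/4)[x]:
(1 + 2x)^2 = 1 + 4x + 4x^2 = 1, \qquad \deg = 0 < 1 + 1.
```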

(v) As D is strictly positive it already is integral, according to (7.5.(iii)). Hence by (7.16.(i)) we already know 1 ∈ A0 and hence deg(1) = 0 according to (i). And as 1 is homogeneous we also get ord(1) = deg(1) = 0, according to (ii). Now suppose f ∈ A∗ is an invertible element of A. Then we use (iv) to compute

0 = deg(1) = deg(f f⁻¹) = deg(f) + deg(f⁻¹)
0 = ord(1) = ord(f f⁻¹) = ord(f) + ord(f⁻¹)

And as ord(f) ≤ deg(f) and ord(f⁻¹) ≤ deg(f⁻¹) (due to (iii)) the positivity of D can be used to find the following estimates

0 = ord(f) + ord(f⁻¹) ≤ deg(f) + ord(f⁻¹) ≤ deg(f) + deg(f⁻¹) = 0

In particular we find ord(f) + ord(f⁻¹) = deg(f) + ord(f⁻¹). Cancelling ord(f⁻¹) this equation turns into ord(f) = deg(f), and by (ii) again this means that f ∈ hom(A) is homogeneous.

(vi) Let us suppose h = fg for some g ∈ A. As h ∈ hom(A) we have h ≠ 0 and this also implies f, g ≠ 0. We will assume that f was not homogeneous and derive a contradiction. Recall that by (ii) and (iii), f not being homogeneous is equivalent to ord(f) < deg(f). But if that was true we could use (iv) to find the contradiction

ord(h) = ord(fg) = ord(f) + ord(g)
       < deg(f) + ord(g) ≤ deg(f) + deg(g)
       = deg(fg) = deg(h) = ord(h)

□

Proof of (4.14):

(i) Clearly, if b ∉ im(A), then Ax = b has no solution at all, hence L(Ax = b) = ∅. Conversely if b ∈ im(A) then we can pick up some u ∈ Rⁿ with Au = b. Now if v ∈ kn(A) then A(u + v) = Au + Av = b + 0 = b. And as v has been arbitrary, this means u + kn(A) ⊆ L(Ax = b). Conversely consider some x with Ax = b, then A(x − u) = Ax − Au = b − b = 0 and hence v := x − u ∈ kn(A). Again, as x has been arbitrary, that is L(Ax = b) ⊆ u + kn(A).

(ii) First of all let us compute DT⁻¹ = SATT⁻¹ = SA. Further note that, due to the invertibility of S, we have the equivalence

b = Ax  ⟺  Sb = SAx = DT⁻¹x

Now for (a) ⟹ (b): let us define z := T⁻¹x, then Dz = DT⁻¹x = SAx = Sb. And x = TT⁻¹x = Tz is clear, as well. Conversely for (b) ⟹ (a): if x = Tz then z = T⁻¹Tz = T⁻¹x and hence also Sb = Dz = DT⁻¹x and hence b = Ax by the equivalence above.

(iii) By definition x ∈ L(Ax = b) means that Ax = b. As we have seen in (ii) this is equivalent to x = Tz and Dz = Sb for some z ∈ Rⁿ. Again this translates into x = Tz and z ∈ L(Dx = Sb). And this is nothing but x ∈ T L(Dx = Sb).

(iv) Due to (i) we know that L(Dx = Sb) either is empty (if Sb ∉ im(D)) or given to be u + kn(D) for some special solution Du = Sb. Now by (iii) we have L(Ax = b) = T L(Dx = Sb) = T(u + kn(D)). And since a matrix always defines an R-linear operation, this yields L(Ax = b) = Tu + T kn(D).
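As a sanity check of (i) one can verify numerically that every element of u + kn(A) solves the system again; the matrix and vectors below are our own toy data over ℚ, not taken from the text.

```python
from fractions import Fraction

# toy data: A has rank 1, so its kernel is spanned by a single vector v
A = [[1, 1], [2, 2]]
b = [1, 2]
u = [1, 0]    # a particular solution: A u = b
v = [1, -1]   # a kernel vector:       A v = 0

def apply(A, x):
    """Matrix-vector product over the rationals."""
    return [sum(Fraction(r[j]) * x[j] for j in range(len(x))) for r in A]

assert apply(A, u) == b
assert apply(A, v) == [0, 0]
# every element u + t*v of u + kn(A) solves the system again, as in (i)
for t in range(-3, 4):
    assert apply(A, [u[j] + t * v[j] for j in range(2)]) == b
```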

□

Proof of (4.19): We will prove the following claim by induction on the number m of rows of the matrix A: using the Gaussian algorithm one obtains an (m × m)-matrix S and an (m × n)-matrix E such that E is in echelon form. The fact that S is a product of elementary matrices is clear from the algorithm: no other matrices are involved.

In the case A = 0 the algorithm already terminates at step (2), so we are left with S = 1m (the identity matrix) and E = A = 0. In this case E = 0 trivially is in echelon form and E = SA is clear.

So let us now assume A ≠ 0. We first anchor the induction at m = 1. As A ≠ 0 we necessarily have s = 1 = r and the pivot step is skipped.


Likewise the main step (5) is void, as r + 1 . . . m is empty. So again E = A is in echelon form and E = SA. And as m = r the algorithm terminates.

Let us now assume that the claim has been proved for any matrix B with less than m rows. Then we need to show that the claim holds true for A as well. The algorithm starts with row r = 1. And as A ≠ 0 the algorithm does not terminate in (2) but targets u and s such that u = µ(A) and µ(row_s A) = u. In case s ≠ r = 1 the rows 1 and s are interchanged, that is E = P_{r,s} A = SA. So after step (4) we can be sure that µ(row_1 E) = µ(E) is minimal. That is the first u − 1 columns of E are zero. Let us denote p := m − 1 and q := n − u, then E is of the form

        ⎛ 0 · · · 0   x0   x1  · · ·  xq  ⎞
    E = ⎜ 0 · · · 0   y1  h1,1 · · · h1,q ⎟
        ⎜ ...         ...  ...        ... ⎟
        ⎝ 0 · · · 0   yp  hp,1 · · · hp,q ⎠

That is x = (x0, x1, . . . , xq) where xj = e_{1,u+j}, and y = (y1, . . . , yp) where yi = e_{i+1,u}, and H = (hi,j) where hi,j = e_{i+1,u+j}. Now the main step (5) is initiated. For any i ∈ 2 . . . m the following row-operations are performed

row_i E := e_{r,u} (row_i E) − e_{i,u} (row_r E)

Specifically for the u-th column of E (that is the u-th entry of row_i E) this is [row_i E]_u = e_{r,u} e_{i,u} − e_{i,u} e_{r,u} and as R is commutative this means [row_i E]_u = 0. Hence after the main step the matrix E is turned into

        ⎛ 0 · · · 0   x0   x1  · · ·  xq  ⎞
    E = ⎜ 0 · · · 0   0   b1,1 · · · b1,q ⎟
        ⎜ ...         ...  ...        ... ⎟
        ⎝ 0 · · · 0   0   bp,1 · · · bp,q ⎠

for some matrix B = (bi,j) where i ∈ 1 . . . p = m − 1 and j ∈ 1 . . . q. Now the Gaussian algorithm iterates with r = 2. As B has only m − 1 rows, the induction hypothesis yields that the Gaussian algorithm will also transform B into echelon form. So after the algorithm terminates B is in echelon form, i.e. there is some r ∈ 1 . . . m − 1 such that µ(row_1 B) < · · · < µ(row_r B) and for all i > r we get row_i B = 0. As E is in the form presented in the equation above, we have µ(row_1 E) = u < u + µ(row_1 B) = µ(row_2 E) and hence

µ(row_1 E) < . . . < µ(row_{r+1} E)

And for i > r + 1 we have row_{i−1} B = 0 and therefore row_i E = 0, due to the shape of E. That is E is in echelon form, as claimed. The fact that E = SA is clear, as all the operations (multiplications with elementary matrices from the left) on A are performed on S simultaneously.
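The induction above can be mirrored in code. The following sketch is ours, not the author's: it works over the field ℚ and therefore divides by the pivot, instead of the division-free cross-multiplication row-operation e_{r,u}·row_i − e_{i,u}·row_r that the ring version of the algorithm employs; all names are made up for the example.

```python
from fractions import Fraction

def gauss_echelon(A):
    """Return (S, E) with E = S*A in echelon form and S invertible.

    Works over Q; every row operation applied to E is applied to S as
    well, exactly as in the proof above."""
    m, n = len(A), len(A[0])
    E = [[Fraction(x) for x in row] for row in A]
    S = [[Fraction(1 if i == j else 0) for j in range(m)] for i in range(m)]
    r = 0
    for col in range(n):
        # pivot search: first row >= r with a non-zero entry in this column
        piv = next((i for i in range(r, m) if E[i][col] != 0), None)
        if piv is None:
            continue
        E[r], E[piv] = E[piv], E[r]   # interchange rows r and piv ...
        S[r], S[piv] = S[piv], S[r]   # ... on S simultaneously
        for i in range(r + 1, m):
            f = E[i][col] / E[r][col]
            E[i] = [a - f * b for a, b in zip(E[i], E[r])]
            S[i] = [a - f * b for a, b in zip(S[i], S[r])]
        r += 1
    return S, E

def matmul(S, A):
    """Matrix product S*A over the rationals."""
    return [[sum(S[i][k] * Fraction(A[k][j]) for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(S))]
```

For instance A = [[0, 2, 1], [1, 1, 1], [2, 2, 2]] is transformed into E = [[1, 1, 1], [0, 2, 1], [0, 0, 0]], together with an S satisfying SA = E.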

□

Proof of (2.4):

(i) Let us abbreviate u := ⋃A; as A is non-empty there is some a ∈ A. And as a ≤i R is an ideal we have 0 ∈ a ∈ A and hence 0 ∈ u. If now a, b ∈ u then, by construction, there are ideals a1, b1 ∈ A such that a ∈ a1 and b ∈ b1. As A is a chain we may assume a1 ⊆ b1 and hence a ∈ b1, too. Therefore we get a + b ∈ b1 and −a, −b ∈ b1, such that a + b, −a and −b ∈ u again. And if finally r ∈ R then ra ∈ a1, such that ra ∈ u.

(ii) Let Z := { b ≤i R | a ⊆ b ≠ R }, then Z ≠ ∅ since a ∈ Z. Now let A ⊆ Z be a chain in Z, then by (i) we know that u := ⋃A ≤i R is an ideal of R. Suppose u = R, then 1 ∈ u and hence there would be some b ∈ A with 1 ∈ b. But as b ≤i R is an ideal this would mean b = R in contradiction to b ∈ A ⊆ Z. Hence we find u ∈ Z again. But by construction we have b ⊆ u for any b ∈ A, that is u is an upper bound of A. Hence by the lemma of Zorn there is some maximal element m ∈ Z∗. It only remains to show that m is a maximal ideal: as m ∈ Z it is a non-full ideal, m ≠ R. And if R ≠ b ≤i R is another non-full ideal with m ⊆ b then a ⊆ m ⊆ b implies b ∈ Z. But now it follows that b = m since m ∈ Z∗, b ∈ Z and m ⊆ b.

(iii) Let a ∈ R∗ be a unit of R and m ≤i R be a maximal ideal. Then a ∉ m, as else 1 = a⁻¹a ∈ m and hence m = R. From this we get R∗ ⊆ R \ ⋃ smax R. For the converse inclusion we go to complements and prove R \ R∗ ⊆ ⋃ smax R instead. Thus let a ∈ R be a non-unit, a ∉ R∗, then 1 ∉ aR and hence aR ≠ R. Thus by (ii) there is a maximal ideal m ≤i R containing a ∈ aR ⊆ m ⊆ ⋃ smax R. Thus we also have a ∈ ⋃ smax R, which had to be proved.

(iv) If R = 0 is the zero-ring, then R = 0 is the one and only ideal of R. But as this ideal is full, it cannot be maximal and hence smax R = ∅. If R ≠ 0 then a := 0 is a non-full ideal of R. Hence by (ii) there is a maximal ideal m ∈ smax R containing it. In particular smax R ≠ ∅.

□

Proof of (3.12): Of course we will prove the claim using a Zornification: let us define the collection of all proper submodules of N, that is

Z := { Z ≤m N | P ⊆ Z, Z ≠ N }

As P ∈ Z this collection is non-empty. Also it is clear that Z is partially ordered under the inclusion relation "⊆". If now (Zi) ⊆ Z is a chain in Z (that is a totally ordered subset), we take Z := ⋃_i Zi. Then Z ≤m N is a submodule again: if x, y ∈ Z then there are some i, j ∈ I such that x ∈ Zi and y ∈ Zj. As these sets are totally ordered we may assume Zi ⊆ Zj without loss of generality. And hence x + y ∈ Zj ⊆ Z again. Also if a ∈ R, then ax ∈ Zi ⊆ Z.

Now suppose Z = N, in particular for any k ∈ 1 . . . n we have xk ∈ N = Z. Therefore for any k ∈ 1 . . . n there is some i(k) ∈ I such that xk ∈ Z_{i(k)}. As the Zi are totally ordered we may single out the maximal element among Z_{i(1)}, . . . , Z_{i(n)}. That is there is some j ∈ I such that Z_{i(k)} ⊆ Zj for any k ∈ 1 . . . n. In particular xk ∈ Z_{i(k)} ⊆ Zj and as Zj is an R-module we find lh{x1, . . . , xn} ⊆ Zj. Now, as also P ⊆ Zj, we get N = P + lh{x1, . . . , xn} ⊆ Zj ⊆ N, that is Zj = N. But this is in contradiction to Zj ∈ Z, such that we necessarily have Z ≠ N.

Let us subsume: we have found a submodule Z ≤m N with Z ≠ N and clearly also P ⊆ Z. That is Z ∈ Z again. But by construction Z is an upper bound (for any i ∈ I we have Zi ⊆ Z) of the chain (Zi). Hence we may apply Zorn's lemma to find a maximal element M ∈ Z∗. So by construction of M ∈ Z we have properties (1) and (2) of the claim. And if M ⊆ Q ⊆ N for some submodule Q, then either Q = N or Q ∈ Z and hence M = Q, due to the maximality of M, and this is (3).

□

Proof of (3.35):

(i) Because of (D1) (that is x ∈ T ⟹ T ⊢ x) we have the obvious inclusions S ⊆ T ⊆ ⟨T⟩. That is we have T ⊢ S. If now x ∈ M with S ⊢ x, then by (D2) we also get T ⊢ x. And this is just a reformulation of the claim ⟨S⟩ ⊆ ⟨T⟩.

(ii) We will first prove ⟨S⟩ = ⋂{ P ∈ sub(M) | S ⊆ P }. For the inclusion "⊆" we are given any x ∈ ⟨S⟩. Then we need to show that for any P ∈ sub(M) with S ⊆ P we get x ∈ P. As P ∈ sub(M) there is some T ⊆ M with P = ⟨T⟩. Then S ⊆ P translates into S ⊆ ⟨T⟩, which is T ⊢ S. Together with S ⊢ x now property (D2) implies T ⊢ x. And this is x ∈ ⟨T⟩ = P, which had to be shown. Conversely for "⊇" we are given some x ∈ M such that for any P ∈ sub(M) we get S ⊆ P ⟹ x ∈ P. Then we may simply choose P := ⟨S⟩; then S ⊆ P is satisfied due to (D1). Thus by assumption on x we get x ∈ P = ⟨S⟩, which had to be shown.

Thus let us now prove P = ⟨P⟩, where we abbreviated P := ⟨S⟩. The inclusion "⊆" is clear by (D1) again. And by what we have shown, we have ⟨P⟩ = ⋂{ Q ∈ sub(M) | P ⊆ Q }. But as P ∈ sub(M) with P ⊆ P this trivially yields ⟨P⟩ ⊆ P, the converse inclusion.

(iii) (a) ⟹ (c): by assumption there is some s ∈ S such that S \ {s} ⊢ s. Due to (D3) there is some finite subset Ss ⊆ S \ {s} such that Ss ⊢ s. Now let S0 := Ss ∪ {s}, then S0 is finite again, as #S0 = #Ss + 1 < ∞, and S0 \ {s} = Ss. In particular S0 \ {s} ⊢ s, that is S0 is ⊢-dependent. (c) ⟹ (b): consider S0 ⊆ S ⊆ T; by assumption there is some s ∈ S0 such that S0 \ {s} ⊢ s. However S0 \ {s} ⊆ T \ {s}, such that by (i) we also get T \ {s} ⊢ s. That is T is ⊢-dependent. (b) ⟹ (a) finally is trivial, by letting T := S.

(iv) (a) ⟹ (b): suppose S was ⊢-dependent, that is there was some s ∈ S such that S \ {s} ⊢ s. As S ⊆ T we also have S \ {s} ⊆ T \ {s}, hence we find T \ {s} ⊢ s due to (i). But as also s ∈ S ⊆ T this is a contradiction to T being ⊢-independent. (b) ⟹ (c) is trivial (a finite subset is a subset). (c) ⟹ (a): suppose there was some x ∈ T such that T \ {x} ⊢ x. Then by (D3) there would be a finite subset Tx ⊆ T \ {x} such that Tx ⊢ x. Now let T0 := Tx ∪ {x}, then T0 is finite again, as #T0 = #Tx + 1 < ∞. Also T0 \ {x} = Tx ⊢ x, that is T0 is ⊢-dependent in contradiction to (c).


(v) (a) ⟹ (b): clearly S ⊂ T := S ∪ {x}, such that S is ⊢-independent by assumption (a) and (iv). Now suppose we had S ⊢ x, then clearly T \ {x} = S ⊢ x. That is T is ⊢-dependent in contradiction to the assumption. (b) ⟹ (a): if we had x ∈ S, then by property (D1) also S ⊢ x, in contradiction to the assumption. Now suppose that S ∪ {x} was ⊢-dependent. That is there would be some s ∈ S ∪ {x} such that

S′ := (S ∪ {x}) \ {s} ⊢ s

We now distinguish two cases: (1) if s = x then S′ = S and thereby we would get S = S′ ⊢ s = x, a contradiction to S ⊬ x. (2) if s ≠ x then s ∈ S ∪ {x} implies s ∈ S. Now let R := S \ {s}, that is S = R ∪ {s} and S′ = (S ∪ {x}) \ {s} = R ∪ {x}. This again means

S′ \ {x} = R = S \ {s}

As S is ⊢-independent we now have S′ \ {x} = S \ {s} ⊬ s. Because of x ∈ S′, S′ ⊢ s and S′ \ {x} ⊬ s property (D4) implies

S = (S \ {s}) ∪ {s} = (S′ \ {x}) ∪ {s} ⊢ x

This is a contradiction to S ⊬ x again, thus in both cases we have derived a contradiction, that is S ∪ {x} is ⊢-independent, as claimed.

(vi) Let us abbreviate the chain's union by T := ⋃_i Si and suppose T would be ⊢-dependent. Then by (iii) there would be a finite subset T0 ⊆ T such that T0 is ⊢-dependent already. Say T0 = {x1, . . . , xn}, where for any k ∈ 1 . . . n we have xk ∈ S_{i(k)} for some i(k) ∈ I. As {Si | i ∈ I} is totally ordered under ⊆, so is the finite subset {S_{i(k)} | k ∈ 1 . . . n}. Without loss of generality we may assume that S_{i(n)} is the maximal element among the S_{i(k)}. Hence we find xk ∈ S_{i(k)} ⊆ S_{i(n)} for any k ∈ 1 . . . n and this again means T0 ⊆ S_{i(n)}. Yet as T0 was ⊢-dependent this implies that S_{i(n)} is ⊢-dependent (by (iii) again), a contradiction. Hence T is ⊢-independent, as claimed.

(vii) Not surprisingly we will prove the claim using a Zornification based on

Z := { P ⊆ M | S ⊆ P ⊆ T, P is ⊢-independent }

Of course Z is non-empty, as S ∈ Z, and it is partially ordered under the inclusion relation ⊆. Due to (vi) any ascending chain (Si) in Z admits an upper bound, namely ⋃_i Si ∈ Z. Thus by the lemma of Zorn Z contains a maximal element B. Because of B ∈ Z we have S ⊆ B ⊆ T and B is ⊢-independent. Now suppose there was some x ∈ T with B ⊬ x. However, as B is ⊢-independent and B ⊬ x, we find by (v) that even B ∪ {x} is ⊢-independent. But as x ∉ B we have B ⊂ B ∪ {x} ⊆ T. Thus we have found a contradiction to the maximality of B. This contradiction can only be solved if T ⊆ ⟨B⟩. And using (i) and (ii) we thereby get M = ⟨T⟩ ⊆ ⟨⟨B⟩⟩ = ⟨B⟩ ⊆ M. Altogether B is ⊢-independent and M = ⟨B⟩, that is, B is a ⊢-basis.

(viii) First suppose x = b, then B′ = B and hence there is nothing to show. Thus assume x ≠ b. The proof now consists of two parts: (1) as B \ {b} ⊆ B′ ⊆ ⟨B′⟩ and B′ ⊢ b by assumption, we have B = (B \ {b}) ∪ {b} ⊆ ⟨B′⟩. And thereby M = ⟨B⟩ ⊆ ⟨⟨B′⟩⟩ = ⟨B′⟩ ⊆ M, according to (ii). That is M = ⟨B′⟩. (2) as B \ {b} ⊆ B is a subset and B is ⊢-independent, so is B \ {b}, due to (iv). Now suppose B \ {b} ⊢ x, then we would get B′ = (B \ {b}) ∪ {x} ⊆ ⟨B \ {b}⟩. And hence b ∈ M = ⟨B′⟩ ⊆ ⟨⟨B \ {b}⟩⟩ = ⟨B \ {b}⟩, according to (1) and (ii) once more. But this is B \ {b} ⊢ b in contradiction to the ⊢-independence of B. Thus we have B \ {b} ⊬ x; as B \ {b} also is ⊢-independent, (v) now implies that B′ = (B \ {b}) ∪ {x} is ⊢-independent. Combining (1) and (2) we find that B′ is a ⊢-basis.

(ix) (a) ⟹ (b): M = ⟨B⟩ is clear by definition of a ⊢-basis. Now consider some subset S ⊂ B, that is there is some x ∈ B with x ∉ S. As B is ⊢-independent and x ∈ B we get B \ {x} ⊬ x, in other words x ∉ ⟨B \ {x}⟩. And as S ⊆ B \ {x}, (i) implies x ∉ ⟨S⟩. In particular ⟨S⟩ ≠ M, as claimed.

(b) ⟹ (a): as M = ⟨B⟩ by assumption it only remains to verify the ⊢-independence of B. Thus assume there was some x ∈ B such that S := B \ {x} ⊢ x. As S ⊆ ⟨S⟩ and x ∈ ⟨S⟩, we have B = S ∪ {x} ⊆ ⟨S⟩. And thereby M = ⟨B⟩ ⊆ ⟨⟨S⟩⟩ = ⟨S⟩ ⊆ M according to (ii). This is M = ⟨S⟩ even though S ⊂ B, a contradiction.

(ix) (a) ⟹ (c): suppose B is a ⊢-basis, then by definition B already is ⊢-independent. Thus suppose B ⊂ S, that is we may choose some s ∈ S \ B. As s ∈ M = ⟨B⟩ we get B ⊢ s. And as B ⊆ S \ {s}, (i) also yields S \ {s} ⊢ s. That is S is ⊢-dependent, as claimed.

(c) ⟹ (a): as B is assumed to be ⊢-independent, it remains to show M = ⟨B⟩. Thus suppose we had ⟨B⟩ ⊂ M, that is there was some s ∈ M such that B ⊬ s. Then by (v) we find that S := B ∪ {s} is ⊢-independent, as well, a contradiction.

(x) Let us denote m := |A| and n := |B|, then we need to show m = n. In the proof we will have to distinguish two cases: (I) m or n is finite and (II) both m and n are infinite. We will deal with (I) in the following steps (1) to (4), whereas (II) will be treated in (5).

(1) Without loss of generality we may assume m ≤ n (interchanging A and B if not). Then m is finite by assumption (I); let us denote the elements of A by A = {x1, . . . , xm}. Thereby let us assume that x1, . . . , xk ∉ B and xk+1, . . . , xm ∈ B for some k ∈ N [note that for k = 0 this is A ⊆ B and for k = m this is A ∩ B = ∅]. In the case k = 0 we may skip steps (2), (3) and let A(k) := A, immediately proceeding with (4).

(2) As A is a ⊢-basis of M, it is a minimal ⊢-generating subset, by (ix). That is ⟨{x2, . . . , xm}⟩ ≠ M = ⟨B⟩. Hence there is some point b ∈ B such that b ∉ ⟨{x2, . . . , xm}⟩ [as else B ⊆ ⟨{x2, . . . , xm}⟩ and hence M = ⟨B⟩ ⊆ ⟨⟨{x2, . . . , xm}⟩⟩ = ⟨{x2, . . . , xm}⟩, according to (ii), a contradiction]. For this b ∈ B let us define

A′ := (A ∪ {b}) \ {x1} = {b, x2, . . . , xm}

Then A′ is ⊢-independent [as A is ⊢-independent, the same is true for {x2, . . . , xm}. And due to {x2, . . . , xm} ⊬ b, (v) implies that A′ is ⊢-independent]. Next we find that A′ ⊢ x1 [because if we had A′ ⊬ x1 then, using (v) and the fact that A′ is ⊢-independent, A′ ∪ {x1} would be ⊢-independent, too. However this is untrue, as (A′ ∪ {x1}) \ {b} = A ⊢ b]. Therefore A′ even is a ⊢-basis [as A′ ⊢ x1 we have x1 ∈ ⟨A′⟩ and clearly A \ {x1} ⊆ ⟨A′⟩, such that A = (A \ {x1}) ∪ {x1} ⊆ ⟨A′⟩. Combined with (ii) this yields M = ⟨A⟩ ⊆ ⟨⟨A′⟩⟩ = ⟨A′⟩ ⊆ M, that is ⟨A′⟩ = M, and the ⊢-independence of A′ has already been shown].

(3) In (2) we have replaced x1 ∈ A by some b ∈ B to obtain another ⊢-basis A′ := (A ∪ {b}) \ {x1} = {b, x2, . . . , xm}. That is: venturing from A to A′ we have increased the number of elements in A′ ∩ B by 1 (with respect to A ∩ B). Iterating this process k times we establish a ⊢-basis A(k) = {b1, . . . , bk, xk+1, . . . , xm} ⊆ B.

(4) From the construction it is clear that |A(k)| ≤ m ≤ n = |B|. On the other hand, as B is a ⊢-basis, A(k) ⊆ B and M = ⟨A(k)⟩, item (ix) now implies that A(k) = B. And thereby n = |B| = |A(k)| ≤ m, as well. Together this is m = n as claimed.

(5) It remains to regard case (II), that is m and n are both infinite. As B ⊆ M = ⟨A⟩ we obviously have A ⊢ b for any b ∈ B. Thus by property (D3) there is a subset Ab ⊆ A such that #Ab < ∞ is finite and Ab ⊢ b already. Now let

A∗ := ⋃_{b∈B} Ab ⊆ A

Then for any b ∈ B we have Ab ⊆ A∗ and hence A∗ ⊢ b, due to (i). That is b ∈ ⟨A∗⟩ for any b ∈ B and hence B ⊆ ⟨A∗⟩. Thus by (i) and (ii) again we get M = ⟨B⟩ ⊆ ⟨⟨A∗⟩⟩ = ⟨A∗⟩ ⊆ M, such that M = ⟨A∗⟩. Now, as A is a ⊢-basis, if we had A∗ ⊂ A, then (ix) would yield M ≠ ⟨A∗⟩, a contradiction. This is A = A∗ and hence we may compute

|A| = | ⋃_{b∈B} Ab | ≤ ∑_{b∈B} |Ab| ≤ |B × N| = |B|

Hereby |B × N| = |B| holds true, as B was assumed to be infinite. Thus we have found m ≤ n and by interchanging the roles of A and B we also find n ≤ m, which had to be shown.
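A concrete instance of the exchange procedure in steps (2) and (3), chosen by us purely for illustration:

```latex
% In M = \mathbb{Q}^2 take the bases
A = \{(1,0),\,(0,1)\}, \qquad B = \{(1,1),\,(1,-1)\}.
% Step one: (1,1) \notin \langle (0,1) \rangle, so exchange x_1 = (1,0) for b = (1,1):
A' = \{(1,1),\,(0,1)\} \text{ is again a basis.}
% Step two: (1,-1) \notin \langle (1,1) \rangle, so exchange (0,1) for (1,-1):
A'' = \{(1,1),\,(1,-1)\} = B, \qquad \text{hence } |A| = |B| = 2.
```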

(xi) As S is ⊢-independent, S ⊆ M and M = ⟨M⟩, we get from (vii) that there is some ⊢-basis A of M with S ⊆ A ⊆ M. Thus by (x) we already obtain the claim from |S| ≤ |A| = |B|.

(xii) (b) ⟹ (a) has already been shown in (x), thus it only remains to verify (a) ⟹ (b): thus consider any ⊢-independent subset S ⊆ M with |S| = |B|. As before [S is ⊢-independent, S ⊆ M and M = ⟨M⟩; now use (vii)] there is some ⊢-basis A of M with S ⊆ A ⊆ M. Thus by (x) we find |B| = |S| ≤ |A| = |B|. That is |A| = |S| = |B| < ∞, which was assumed to be finite. In particular S is a subset of the finite set A with the same number of elements #S = #A. Clearly this means S = A and in particular S is a ⊢-basis of M.

□

Proof of (3.38):

(i) Trivially ∅ is R-linearly independent and ∅ also generates {0}, as {0} is the one and only submodule of {0}. Together ∅ is an R-basis of {0} and in particular rank{0} = 0. Conversely if rank M = 0, then by definition M = ⟨∅⟩m = {0}.

(ii) If M = P + ⟨X⟩m then, as P ⊆ Q, we also get M = Q + ⟨X⟩m.

(iii) First suppose a = aR is principal. If a = 0, then a = 0, which is a free R-module with basis ∅. And if a ≠ 0, then {a} is an R-basis of a. It generates a, as ⟨a⟩m = aR = a, and it is R-linearly independent, as ba = 0 ⟹ b = 0 (as a ≠ 0 and R is an integral domain). Conversely suppose a is free, with basis B ⊆ a. If B = ∅, then a = ⟨∅⟩m = 0, which is principal: a = 0 = R·0. And if we had #B ≥ 2, then we could choose some b1 ≠ b2 ∈ B. But this would yield b2·b1 + (−b1)·b2 = 0 (as R is commutative) and hence B would not be R-linearly independent. Thus it only remains #B = 1, say B = {a}, and hence a = ⟨B⟩m = aR is principal.

(iv) We have to prove that E is an R-linearly independent set that also R-generates R⊕I. Thus suppose we are given any x = (xi) ∈ R⊕I. Let us denote Ω := { i ∈ I | xi ≠ 0 }, then Ω is finite, by definition of R⊕I. And hence we get x = ∑_{i∈Ω} xi ei ∈ ⟨E⟩m. Therefore E generates R⊕I as an R-module. For the R-linear independence we have ∑_{i∈Ω} ai ei = 0 for some finite subset Ω ⊆ I and some ai ∈ R. We now compare the coefficients of 0 and ∑_{i∈Ω} ai ei. Hereby for any i ∈ Ω the i-th coefficient of 0 ∈ R⊕I is 0 ∈ R. And the i-th coefficient of ∑_{i∈Ω} ai ei is ai. That is ai = 0 for any i ∈ Ω and this means that E is R-linearly independent.

□

Proof of (3.39):

(i) (D1): consider an arbitrary subset X ⊆ M and x ∈ X. Then we find X ⊢ x simply by choosing n := 1, x1 := x and a1 := 1, as in this case a1x1 = 1x = x. (D2): consider any two subsets X, Y ⊆ M such that Y ⊢ X and X ⊢ x. Thus by definition there are some n ∈ N, xi ∈ X and ai ∈ R such that x = a1x1 + · · · + anxn. But as xi ∈ X and Y ⊢ X there also are some m(i) ∈ N, yi,j ∈ Y and bi,j ∈ R such that xi = bi,1 yi,1 + · · · + bi,m(i) yi,m(i). Combining these equations, we find

x = ∑_{i=1}^{n} ai xi = ∑_{i=1}^{n} ∑_{j=1}^{m(i)} ai bi,j yi,j

In particular this identity yields Y ⊢ x, which had to be shown. (D3): consider an arbitrary subset X ⊆ M and x ∈ M such that X ⊢ x. Again this means x = a1x1 + · · · + anxn for sufficient xi ∈ X and ai ∈ R. Now let X0 := {x1, . . . , xn} ⊆ X. Then X0 clearly is a finite set and X0 ⊢ x, by construction. As (D1), (D2) and (D3) are satisfied, we already obtain (iii), (iv) and (v) from (3.35).

(ii) The identity ⟨X⟩ = { x ∈ M | X ⊢ x } = lhR(X) is obvious from the definitions of the relation ⊢ and the linear hull lhR(X). And the identity lhR(X) = ⟨X⟩m has already been shown in (3.11).

(vi) (b) ⟹ (a): we first prove that B is a generating system of M: thus consider any x ∈ M; by assumption there is some (xb) ∈ R⊕B such that x = ∑_b xb b. Now let Ω := { b ∈ B | xb ≠ 0 }. Then Ω ⊆ B is finite and we further get

x = ∑_{b∈B} xb b = ∑_{b∈Ω} xb b ∈ lhR(B)

As x has been arbitrary that is M ⊆ lhR(B) = ⟨B⟩m. Let us now prove that B also is R-linearly independent: thus consider some finite subset Ω ⊆ B and some ab ∈ R (where b ∈ Ω) such that ∑_{b∈Ω} ab b = 0. Then for any b ∈ B \ Ω we let ab := 0 and thereby obtain

∑_{b∈B} ab b = ∑_{b∈Ω} ab b = 0 = ∑_{b∈B} 0·b

Due to the uniqueness of this representation (of 0 ∈ M) we find ab = 0 for any b ∈ B. And in particular ab = 0 for any b ∈ Ω. By definition this means that B truly is R-linearly independent.

(vi) (a) ⟹ (b): Existence: consider any x ∈ M; as M = ⟨B⟩m = lhR(B) there are a finite subset Ω ⊆ B and ab ∈ R (where b ∈ Ω) such that x = ∑_{b∈Ω} ab b. Now let xb := ab if b ∈ Ω and xb := 0 if b ∈ B \ Ω. Then (xb) ∈ R⊕B and we also get

x = ∑_{b∈Ω} ab b = ∑_{b∈B} xb b

Uniqueness: consider any two (xb), (yb) ∈ R⊕B representing the same element ∑_b xb b = ∑_b yb b. Now let Ω := { b ∈ B | xb ≠ 0 or yb ≠ 0 }. Then Ω ⊆ B is finite and we clearly get

0 = ∑_{b∈B} (xb − yb) b = ∑_{b∈Ω} (xb − yb) b

Yet as B is R-linearly independent, this means xb − yb = 0 for any b ∈ Ω and hence xb = yb for any b ∈ B (if b ∉ Ω, then xb = 0 = yb). But this already is the uniqueness of the representation.

□
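The existence-and-uniqueness statement (vi) is exactly what makes coordinates with respect to a basis well defined. As a concrete illustration (not part of the book), here is a small sketch over the field Q, computing the unique coordinate vector of x with respect to a basis of Q³ by Gaussian elimination; the helper `solve` is ad hoc:

```python
from fractions import Fraction as F

def solve(B, x):
    # Gaussian elimination over Q: solve B c = x for the coordinate vector c.
    n = len(B)
    M = [[F(B[r][c]) for c in range(n)] + [F(x[r])] for r in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# columns b1 = (1,0,0), b2 = (1,1,0), b3 = (0,1,1) form a basis of Q^3
B = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
coords = solve(B, [2, 3, 5])
# existence: the coordinates reproduce x; uniqueness holds since B is invertible
for r in range(3):
    assert sum(coords[c] * B[r][c] for c in range(3)) == [2, 3, 5][r]
```

Since the matrix of basis columns is invertible, the coordinate tuple found here is the only one, mirroring the uniqueness argument above.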

Proof of (3.40):

(i) By (3.39) ⊢ already satisfies (D1), (D2) and (D3). Thus it only remains to prove (D4), and we will already do this in the stronger form given in the claim. That is, we consider x ∈ X and X ⊢ y. By definition of ⊢ this is y = a1x1 + ··· + anxn for some ai ∈ S and xi ∈ X. Of course we may drop any summand with ai = 0 and hence assume ai ≠ 0 for any i ∈ 1...n. As y ≠ 0 we still have n ≥ 1. Now let X′ := (X \ {x}) ∪ {y} (that is, we have to show X′ ⊢ x) and distinguish two cases: (1) if x ∉ {x1, ..., xn} then {x1, ..., xn} ⊆ X \ {x} ⊆ X′ and in particular X′ ⊢ x. (2) if x ∈ {x1, ..., xn} we may assume x = x1 without loss of generality. Now let bi := −a1⁻¹ai ∈ S (for any i ∈ 2...n); then x = x1 = a1⁻¹(y − a2x2 − ··· − anxn) = a1⁻¹y + b2x2 + ··· + bnxn. And as {y} ∪ {x2, ..., xn} ⊆ X′ this implies X′ ⊢ x. Thus we have proved the claim in both cases. From this property we can easily derive (D4): suppose x ∈ X, X ⊢ y and X \ {x} ⊬ y. Then y ≠ 0 (for y = 0 would give even ∅ ⊢ 0 and hence X \ {x} ⊢ y, a contradiction). Thus we may apply this property to find X′ ⊢ x, which had to be shown.

(ii) (b) ⇒ (a): suppose there was some x ∈ X such that X \ {x} ⊢ x. So by definition there would be some elements xi ∈ X \ {x} and ai ∈ S (where i ∈ 1...n) such that x = a1x1 + ··· + anxn. Now let x0 := x and a0 := −1; then we get xi ∈ X for any i ∈ 0...n and also a0x0 + a1x1 + ··· + anxn = 0. But as a0 ≠ 0 this means that X is S-linearly dependent, a contradiction. Thus there is no such x ∈ X and this is the ⊢-independence of X.

(a) ⇒ (b): suppose there were some (pairwise distinct) elements xi ∈ X and ai ∈ S (where i ∈ 1...n) such that (without loss of generality) a1 ≠ 0 and a1x1 + ··· + anxn = 0. As a1 ≠ 0 we may let bi := −a1⁻¹ai for i ∈ 2...n. Then x1 = b2x2 + ··· + bnxn and hence {x2, ..., xn} ⊢ x1. As {x2, ..., xn} ⊆ X \ {x1} this yields X \ {x1} ⊢ x1, that is X is ⊢-dependent, a contradiction. Thus there are no such xi and ai and this again means that X is S-linearly independent.

(iv) As ⟨B⟩ = lhS(B), S-generation and ⊢-generation are equivalent notions. And in (ii) we have seen that ⊢-dependence and S-linear dependence are equivalent, too. In particular we find (a) ⟺ (b) according to the respective definitions of S-bases and ⊢-bases. And the equivalences (a) ⟺ (c) ⟺ (d) have already been proved in (3.35.(ix)). Finally, we have already shown (b) ⟺ (e) in (3.39.(vi)).

(iii) This is just a reformulation of (3.35.(v)) in the light of (ii). Likewise (v) resp. (vi) is just the content of (3.35.(vii)) resp. (3.35.(x)), using the equivalence (a) ⟺ (b) of (iv). Finally (vii) and (viii) are repetitions of (3.35.(xi)) and (3.35.(xii)) respectively. All we did was to bring the definition of the dimension into play.

(ix) If P ≤m M then (according to (v)) we may pick a basis B of P. Then by assumption |B| = dim(P) = dim(M) and M is finitely generated. Now by (viii) we find that B even is an S-basis of M, as |B| = dim(M). In particular that is P = lh(B) = M.

□

Proof of (3.41):


(i) By (3.35.(vii)) and (3.40.(i)) we may choose an S-basis A of P ∩ Q. And by the same theorems we may extend this basis to a basis B of P, that is A ⊆ B and B is an S-basis of P. Likewise we pick an S-basis C of Q that extends A, that is A ⊆ C and C is an S-basis of Q. Let us denote D := B ∪ C and choose any p ∈ B ∩ C; then p ∈ lh(B) ∩ lh(C) = P ∩ Q = lh(A). That is, there are some (sa) ∈ S⊕A such that

    ∑_{a∈A} sa a = p = ∑_{b∈B} δp,b b ∈ lh(B) = P

And as A ⊆ B both sides of the equation are linear expansions of p in terms of the basis B. Due to the uniqueness of the representation this means p = a (and sa = 1) for some a ∈ A and hence p ∈ A. That is, we have proved B ∩ C = A. Therefore D is the disjoint union of

D = A ∪ (B \A) ∪ (C \A)

Hence the cardinality of D is the sum of the cardinalities of A, B \ A and C \ A respectively. And as A ⊆ B and A ⊆ C this turns into

    |D| = |A| + |B \ A| + |C \ A| = |A| + (|B| − |A|) + (|C| − |A|) = |B| + |C| − |A|

We will now prove that D = B ∪ C is an S-basis of P + Q. First of all it is clear that

    P + Q = lh(B) + lh(C) = lh(B ∪ C) = lh(D)

That is, D generates P + Q. Now by definition it remains to prove the linear independence of D. Hence suppose we have a linear dependence equation

    ∑_{d∈D} xd d = 0

By the above, D is the disjoint union of A, B \ A and C \ A and hence we may split this sum into three separate parts:

    ∑_{a∈A} xa a + ∑_{b∈B\A} xb b + ∑_{c∈C\A} xc c = 0

The first way to reassemble this equation is to divide it into a part contained in P = lh(B) (the left-hand side) and a part contained in Q = lh(C) (the right-hand side)

    ∑_{b∈B} xb b = −∑_{c∈C\A} xc c

Consequently both the left- and right-hand sides are contained in both P and Q. Now, as P ∩ Q = lh(A), there are some (uniquely determined) (ya) ∈ S⊕A such that

    ∑_{b∈B} xb b = ∑_{a∈A} ya a

Again we rearrange this equation, splitting it into a basis representation of 0 within P = lh(B)

    ∑_{a∈A} (xa − ya) a + ∑_{b∈B\A} xb b = 0

Now, as B is linearly independent, we find xb = 0 for any b ∈ B \ A. By precisely the same arguments we can prove that xc = 0 for any c ∈ C \ A. So there only remains ∑_a xa a = 0. Again, as A is linearly independent, this means xa = 0 for any a ∈ A. Altogether we have xd = 0 for any d ∈ D and this is the linear independence of D. That is, D is an S-basis of P + Q and therefore we finally get

    dim(P + Q) = |D| = |B| + |C| − |A| = dim(P) + dim(Q) − dim(P ∩ Q)
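The dimension formula can be spot-checked on explicit subspaces. The following sketch (not from the book) works over Q with subspaces given by spanning rows and an ad-hoc `rank` helper via row reduction; the subspaces are chosen so that the intersection dimension is known by construction:

```python
from fractions import Fraction as F

def rank(rows):
    # row-reduce a matrix of rationals and count the nonzero pivot rows
    M = [[F(v) for v in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# P = span{e1, e2, e3}, Q = span{e3, e4} in Q^5, so P ∩ Q = span{e3}
e = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
basis_P, basis_Q = [e[0], e[1], e[2]], [e[2], e[3]]
dim_sum = rank(basis_P + basis_Q)   # dim(P + Q) as the rank of the joint span
assert dim_sum == 3 + 2 - 1         # dim P + dim Q - dim(P ∩ Q)
```

Here dim(P + Q) is computed independently (as a rank), so the assertion genuinely tests the formula against the known intersection dimension.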

(ii) As in (i) above, we may choose an S-basis A ⊆ P of P, as S is a skew field. Again we may extend A to an S-basis B ⊆ V of V. Let us now denote B/A := {b + P | b ∈ B \ A} ⊆ V/P. It is easy to see that B/A corresponds to B \ A, that is b ↦ b + P is bijective: suppose we had b + P = c + P for some b, c ∈ B \ A. Then c − b ∈ P = lh(A). That is, either b = c or {b, c} ∪ A ⊆ B is S-linearly dependent. But as B is S-linearly independent (being a basis), the latter is absurd and hence b = c. This is the injectivity, and the surjectivity is clear from the definition of B/A.

We now claim that B/A is an S-basis of the quotient V/P. All we need to do is check that B/A is S-linearly independent and generates V/P. For the linear independence we suppose we had some xb ∈ S such that

    0 = ∑_{b+P∈B/A} xb (b + P) = (∑_{b+P∈B/A} xb b) + P = (∑_{b∈B\A} xb b) + P

That is ∑_b xb b ∈ P = lh(A) and hence there were some xa ∈ S such that ∑_b xb b = ∑_a xa a. Altogether that is

    0 = ∑_{b∈B\A} xb b − ∑_{a∈A} xa a

But as B = (B \ A) ∪ A is linearly independent, this can only be if all the xb ∈ S and all the xa ∈ S are identically 0. And this is what was needed for the linear independence of B/A. Also we have to check that B/A generates V/P. To do this regard any x + P ∈ V/P. As B is a basis of V there are some xb ∈ S (where b ∈ B) such that x = ∑_b xb b.


Again we use the split B = (B \ A) ∪ A and A ⊆ P to get

    x + P = (∑_{b∈B} xb b) + P = ∑_{b∈B} xb (b + P) = ∑_{b∈B\A} xb (b + P) = ∑_{b+P∈B/A} xb (b + P)

As x has been arbitrary we have V/P = lh(B/A). Then, as B/A is a linearly independent generating set of V/P, it is a basis. And therefore

    dim(V/P) = |B/A| = |B \ A| = |B| − |A| = dim(V) − dim(P)

(iii) The dimension formula now is an easy application of (ii) and the first isomorphism theorem (3.51.(ii)), which implies that

    V/kn(ϕ) ≅m im(ϕ)

In particular we have dim(V/kn(ϕ)) = dim(im(ϕ)). But by (ii) we now know that dim(V/kn(ϕ)) = dim(V) − dim(kn(ϕ)), such that we truly find the claim

    dim(V) − dim(kn(ϕ)) = dim(im(ϕ))

□
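The rank-nullity statement just proved can be illustrated numerically. The following sketch (not from the book) checks dim kn(ϕ) + dim im(ϕ) = dim V for one explicit map Q⁴ → Q³; the `rank` helper and the kernel vectors are ad hoc:

```python
from fractions import Fraction as F

def rank(rows):
    # row-reduce a matrix of rationals and count the nonzero pivot rows
    M = [[F(v) for v in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# phi: Q^4 -> Q^3 given by the matrix A acting on column vectors
A = [[1, 2, 3, 4],
     [2, 4, 6, 8],
     [0, 1, 0, 1]]
dim_im = rank(A)                            # row rank = column rank = dim im(phi)
kernel = [[-3, 0, 1, 0], [-2, -1, 0, 1]]    # two explicit solutions of A v = 0
for v in kernel:
    assert all(sum(A[i][j] * v[j] for j in range(4)) == 0 for i in range(3))
dim_kn = rank(kernel)                       # the two kernel vectors are independent
assert dim_kn + dim_im == 4                 # dim kn(phi) + dim im(phi) = dim V
```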

Proof of (3.31):
We first have to prove the equality aR⊕I = a⊕I. The elements of aR⊕I are of the form (where ak ∈ a and bk = (bi,k) ∈ R⊕I)

    ∑_{k=1..m} ak (bi,k) = (∑_{k=1..m} ak bi,k) ∈ a⊕I

That is aR⊕I ⊆ a⊕I. Conversely consider any a = (ai) ∈ a⊕I and let us denote the support of a by I0 := {i ∈ I | ai ≠ 0}. Note that (by definition of direct sums) I0 is a finite set. Let us also denote the canonical basis of R⊕I by {ej | j ∈ I}, that is ej = (δi,j) ∈ R⊕I. Then we can write

    (ai) = ∑_{i∈I0} ai ei ∈ aR⊕I

Altogether we have proved aR⊕I = a⊕I and it remains to prove the isomorphy of (R/a)⊕I and R⊕I/aR⊕I. To do this regard the R-linear map

    ϕ : R⊕I → (R/a)⊕I : (bi) ↦ (bi + a)

Clearly ϕ is surjective, and its kernel is given by kn ϕ = aI ∩ R⊕I = a⊕I, such that kn ϕ = aR⊕I. So by the first isomorphism theorem, this induces an isomorphism of R-modules

    Φ : R⊕I/aR⊕I → (R/a)⊕I : (bi) + aR⊕I ↦ (bi + a)

But both R⊕I/aR⊕I and (R/a)⊕I are (R/a)-modules, and from the explicit construction of Φ it is clear that Φ even is an (R/a)-module homomorphism.

□

Proof of (2.5):

• (a) ⇒ (c): if a ⊴i R/m is an ideal, then by the correspondence theorem it is of the form a = b/m for some ideal b ⊴i R with m ⊆ b. But as m is maximal this means b = m or b = R. Thus we have a = {0 + m} (in the case b = m) or a = R/m (in the case b = R).

• (c) ⇒ (b): R/m is a non-zero (since m ≠ R) commutative ring (since R is such). But it has already been shown that a commutative ring is a field if and only if it only contains the trivial ideals (viz. section 1.6).

• (b) ⇒ (d): let a ∉ m; this means a + m ≠ 0 + m. And as R/m is a field there is some b ∈ R such that b + m is inverse to a + m. That is

    ab + m = (a + m)(b + m) = 1 + m

Therefore there is some m ∈ m such that 1 = ab + m ∈ aR + m, and hence we have truly obtained aR + m = R.

• (d) ⇒ (a): suppose a ⊴i R is some ideal with m ⊆ a; we want to show a = m or a = R. If a = m then we are done. Else there is some a ∈ a with a ∉ m. Hence we get aR + m = R. But as a ∈ a and m ⊆ a we get R = aR + m ⊆ a ⊆ R, that is a = R.

□
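Statement (2.5) can be tested in the ring Z, where the ideal nZ is maximal iff n is prime iff Z/nZ is a field. A small sketch (not from the text), using the elementary fact that a residue class a + nZ is a unit iff gcd(a, n) = 1:

```python
from math import gcd

def is_field(n):
    # Z/nZ is a field iff every nonzero class is a unit
    return n > 1 and all(gcd(a, n) == 1 for a in range(1, n))

def is_maximal(n):
    # nZ is maximal in Z iff the only ideals mZ containing nZ (i.e. with m | n)
    # are nZ itself and the whole ring Z
    return n > 1 and all(m in (1, n) for m in range(1, n + 1) if n % m == 0)

for n in range(2, 50):
    assert is_maximal(n) == is_field(n)   # both conditions say "n is prime"
```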

Proof of (2.6):

• ¬(a) ⇒ ¬(b): by assumption ¬(a) there is some maximal ideal m ⊴i R such that j ∉ m. Hence we have m + jR = R, that is there are m ∈ m and b ∈ R such that m − bj = 1. Now let a := −b; then 1 − aj = 1 + bj = m ∈ m, and as m contains no units this means 1 − aj ∉ R∗, which is ¬(b).

• ¬(b) ⇒ ¬(a): by assumption ¬(b) there is some a ∈ R such that 1 − aj ∉ R∗. That is (1 − aj)R ≠ R is a non-full ideal and hence there is a maximal ideal m ⊴i R containing it: 1 − aj ∈ (1 − aj)R ⊆ m, by (2.4.(ii)). If we now had j ∈ jac R ⊆ m then also aj ∈ m and hence we would arrive at the contradiction

1 = (1− aj) + aj ∈ m


• (a) ⇒ (b): if j ∈ a ⊆ jac R then, choosing a := −1 in what we have just proved above, this yields 1 + j = 1 − aj ∈ R∗ and hence (b).

• (b) ⇒ (c): first of all we have 1 ∈ 1 + a since 0 ∈ a. Now consider any a = 1 + j and b = 1 + k ∈ 1 + a, that is j and k ∈ a. Then ab = 1 + (j + k + jk) ∈ 1 + a again, as j + k + jk ∈ a. Further we get

    (1 + j)(1 − (1 + j)⁻¹ j) = (1 + j) − j = 1

which implies a⁻¹ = (1 + j)⁻¹ = 1 − (1 + j)⁻¹ j. And hence we also have a⁻¹ ∈ 1 + a, since −(1 + j)⁻¹ j ∈ a due to j ∈ a.

• (c) ⇒ (a): fix j ∈ a and consider any a ∈ R. Then we also get −aj ∈ a and hence 1 − aj ∈ 1 + a ⊆ R∗ by assumption. But as we have already proved above, 1 − aj ∈ R∗ for any a ∈ R means j ∈ jac R.

□
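The unit characterization of the Jacobson radical in (2.6) can be checked in Z/nZ, where the maximal ideals are pZ/nZ for the primes p dividing n. The following sketch (not from the text; both helpers are ad hoc) compares the two descriptions:

```python
from math import gcd

def jac_by_units(n):
    # j ∈ jac(Z/n) iff 1 - a*j is a unit for every a (criterion (b) above)
    return {j for j in range(n)
            if all(gcd((1 - a * j) % n, n) == 1 for a in range(n))}

def jac_by_maximals(n):
    # intersection of the maximal ideals pZ/nZ, p a prime divisor of n
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    return {j for j in range(n) if all(j % p == 0 for p in primes)}

for n in [4, 6, 12, 30, 36]:
    assert jac_by_units(n) == jac_by_maximals(n)
```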

Proof of (2.9):

• (a) ⇒ (b): consider a, b ∈ R such that 0 + p = (a + p)(b + p) = ab + p. That is ab ∈ p, and as p is assumed to be prime this implies a ∈ p or b ∈ p. This again means that a + p = 0 + p or b + p = 0 + p, and hence R/p is an integral domain. But R/p ≠ 0 is clear, since p ≠ R and hence 1 + p ≠ 0 + p.

• (b) ⇒ (c): since R/p ≠ 0 is non-zero we have 1 + p ≠ 0 + p, or in other words 1 ∉ p, which already is (1). Now let u, v ∈ R \ p; then u + p ≠ 0 + p and v + p ≠ 0 + p. But as R/p is an integral domain this then yields uv + p = (u + p)(v + p) ≠ 0 + p. Thus we also have uv ∉ p, which is (2).

• (c) ⇒ (a): since 1 ∈ R \ p we have p ≠ R. Now consider a, b ∈ R with ab ∈ p. Supposed we had a ∉ p and b ∉ p, then a, b ∈ R \ p and hence ab ∈ R \ p, a contradiction. Thus we truly have a ∈ p or b ∈ p.

• (a) ⇒ (d): consider two ideals a, b ⊴i R such that a b ⊆ p. Suppose neither a ⊆ p nor b ⊆ p was true. That is, there would be a ∈ a and b ∈ b with a, b ∉ p. Then ab ∈ a b ⊆ p. But as p is prime this yields a ∈ p or b ∈ p, a contradiction.

• (d) ⇒ (a): consider a, b ∈ R with ab ∈ p. Then we let a := aR and b := bR. Then a b = abR ⊆ p by (1.54), hence we find a ∈ aR = a ⊆ p or b ∈ bR = b ⊆ p by assumption. As we have also assumed p ≠ R, this means that p is prime.

□
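In the ring Z, (2.9) says that nZ is prime iff Z/nZ is an integral domain, and both hold exactly when n is a prime number. A small sketch (not from the text), testing the prime-ideal condition on integer representatives beyond one period:

```python
def quotient_is_domain(n):
    # Z/nZ is nonzero and has no zero divisors
    return n > 1 and all((a * b) % n != 0
                         for a in range(1, n) for b in range(1, n))

def ideal_is_prime(n):
    # nZ is prime: n != 1 and ab ∈ nZ implies a ∈ nZ or b ∈ nZ
    return n > 1 and all(a % n == 0 or b % n == 0
                         for a in range(1, 3 * n) for b in range(1, 3 * n)
                         if (a * b) % n == 0)

for n in range(2, 40):
    assert quotient_is_domain(n) == ideal_is_prime(n)
```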

Proof of (2.7):
Let i ≠ j ∈ 1...k; then mi ⊆ mj would imply mi = mj (since mi is maximal and mj ≠ R). Hence there is some ai ∈ mi with ai ∉ mj. And as we have just seen in (2.5) this means aiR + mj = R. But as ai ∈ mi we find R = aiR + mj ⊆ mi + mj ⊆ R. Hence the mi are pairwise coprime. And the second claim follows from this, as we have proved in (1.56.(iv)).


Hence it remains to prove the third statement, i.e. the strict descent of the chain m1 ⊃ m1m2 ⊃ ... ⊃ m1⋯mk. To do this we use induction on k (the base case k = 1 being trivial), and let a := m1⋯mk and m := mk+1. For any i ∈ 1...k there are ai ∈ mi such that ai ∉ m again. Now let a := a1⋯ak ∈ a. Then a ∉ m, as maximal ideals are prime (see the remark to (2.9) or (2.19) for a proof) and hence a = a1⋯ak ∈ m would imply ai ∈ m for some i ∈ 1...k. In particular a ∉ a m (as a m ⊆ m), that is we have found the induction step

    a ∈ (m1⋯mk) \ (m1⋯mk mk+1)

□

Proof of (2.11):

(i) We will prove the statement by induction on k - the case k = 1 beingtrivial. So let now k ≥ 2 and let a := a1 . . . ak−1 and b := ak. Thenab = a1 . . . ak−1ak ∈ p, but as p is prime this implies a ∈ p or b ∈ p. Ifb ∈ p then we are done (with i := k). If not then a = a1 . . . ak−1 ∈ p,so by induction hypothesis we get ai ∈ p for some i ∈ 1 . . . k − 1.

(ii) Analogous to (i), we prove this statement by induction on k, the case k = 1 being trivial again. For k ≥ 2 we likewise let a := a1⋯ak−1 and b := ak. Then a b ⊆ p implies a ⊆ p or b ⊆ p due to (2.9). If b ⊆ p then we are done with i := k. Else a ⊆ p and hence ai ⊆ p for some i ∈ 1...k − 1 by induction hypothesis.

(iii) Let us choose a subset I ⊆ 1...n such that a is contained in the union of the bi (i ∈ I) with a minimal number of elements. That is

    a ⊆ ⋃_{i∈I} bi    and    ∀ J ⊆ 1...n : a ⊆ ⋃_{i∈J} bi ⇒ #I ≤ #J

It is clear that this can be done, since J = 1...n suffices to cover a, and the number of elements is totally ordered and hence allows us to pick a minimal element. In the following we will prove that I contains precisely one element. And in this case I = {i} we clearly have a ⊆ bi, as claimed. Thus let us assume #I ≥ 2 and derive a contradiction: consider some fixed j ∈ I; as I has minimally many elements we have

    a ⊄ ⋃_{i∈I, i≠j} bi

That is, there is some aj ∈ a such that aj ∉ bi for any i ∈ I with i ≠ j. In particular we find aj ∈ bj, as the bi (i ∈ I) cover a. Let us pick one such aj for every j ∈ I. If #I = 2 then we may assume I = {1, 2} without loss of generality. As a1 and a2 ∈ a we also have a1 + a2 ∈ a. Suppose we had a1 + a2 ∈ b1; then a2 = (a1 + a2) + (−a1) ∈ b1 as well, a contradiction. Likewise we find that a1 + a2 ∉ b2. But now a1 + a2 ∈ a ⊆ b1 ∪ b2 provides a contradiction. Hence #I = 2 cannot be. Now assume #I ≥ 3; then by assumption there is some k ∈ I such that bk is prime. Without loss of generality (that is, by renumbering) we may assume I = 1...m and b1 to be prime. As any ai ∈ a we also have a1 + a2⋯am ∈ a. Suppose a1 + a2⋯am ∈ bi for some i ∈ 2...m; then a1 = (a1 + a2⋯am) + (−a2⋯am) ∈ bi too, a contradiction. And if we suppose a1 + a2⋯am ∈ b1, then a2⋯am = (a1 + a2⋯am) + (−a1) ∈ b1 too. As b1 is prime, we find ai ∈ b1 for some i ∈ 2...m due to (i). But this is a contradiction. Altogether, a1 + a2⋯am is contained in a but in none of the bi for i ∈ I, a contradiction. Hence the assumption #I ≥ 3 has to be abandoned, which only leaves #I = 1.

□

Proof of (2.13):

• We will first show the well-definedness of spec(ϕ): it is clear that ϕ⁻¹(q) = {a ∈ R | ϕ(a) ∈ q} is an ideal in R. Thus it only remains to show that ϕ⁻¹(q) ∈ X truly is prime. First of all ϕ⁻¹(q) ≠ R, as else 1 ∈ ϕ⁻¹(q) and this would mean 1 = ϕ(1) ∈ q. But this is absurd, as q is prime and hence q ≠ S. Hence we consider ab ∈ ϕ⁻¹(q), i.e. ϕ(ab) = ϕ(a)ϕ(b) ∈ q. As q ∈ Y is prime, this yields ϕ(a) ∈ q or ϕ(b) ∈ q, that is a ∈ ϕ⁻¹(q) or b ∈ ϕ⁻¹(q) again.

• Next we will show that for any a ∈ R we have (spec ϕ)⁻¹(Xa) = Yϕ(a). But this is immediate from elementary set theory, since

    (spec ϕ)⁻¹(Xa) = {q ∈ Y | a ∉ ϕ⁻¹(q)} = Yϕ(a)

• Likewise (spec ϕ)(V(b)) ⊆ V(ϕ⁻¹(b)) is completely obvious once we translate these sets back into set theory; then we have to show

    {ϕ⁻¹(q) | b ⊆ q ∈ Y} ⊆ {p ∈ X | ϕ⁻¹(b) ⊆ p}

Thus we have to show that b ⊆ q ∈ Y implies ϕ⁻¹(b) ⊆ ϕ⁻¹(q) ∈ X. But b ⊆ q clearly implies ϕ⁻¹(b) ⊆ ϕ⁻¹(q), and we have just proved that ϕ⁻¹(q) ∈ X as well.

• Finally it remains to prove (spec ϕ)⁻¹(V(a)) = V(⟨ϕ(a)⟩i). So let us first translate the claim back into elementary set theory again

    {q ∈ Y | a ⊆ ϕ⁻¹(q)} = {q ∈ Y | ϕ(a) ⊆ q}

But by definition we have ϕ⁻¹(q) = {a ∈ R | ϕ(a) ∈ q}, and hence the inclusion a ⊆ ϕ⁻¹(q) trivially implies ϕ(a) ⊆ q and vice versa.

□

Proof of (2.14):

(i) First of all a := ⋂P is an ideal of R, due to (1.47). And if we pick any p ∈ P then a ⊆ p. As 1 ∉ p we also have 1 ∉ a and hence a ≠ R. Now let

    U := R \ a = ⋃ {R \ p | p ∈ P}


Then 1 ∈ U, as we have just seen. And for any u, v ∈ U there are p, q ∈ P such that u ∈ R \ p and v ∈ R \ q. But as P is a chain we may assume p ⊆ q and hence v ∈ R \ q ⊆ R \ p as well. As p is a prime ideal, R \ p is multiplicatively closed, so u, v ∈ R \ p implies uv ∈ R \ p and hence uv ∈ U. That is, U is multiplicatively closed and hence a is a prime ideal.

(ii) Clearly P is partially ordered under the inverse inclusion relation ⊇ (as P ≠ ∅). Now let O ⊆ P be a chain in P; then by (i) we know that p := ⋂O ⊴i R is prime. Now pick any q ∈ O; then, as p ⊆ q, the assumption on P yields p ∈ P. Hence p is a ⊇-upper bound of O. And by the lemma of Zorn this yields that P contains a ⊇-maximal element p∗. But clearly ⊇-maximal is ⊆-minimal and hence p∗ ∈ P∗.

(iii) Let us take Z := {p ∈ spec R | condition(p)}; then Z is partially ordered under the inverse inclusion ⊇ of sets. In any case Z ≠ ∅ is non-empty: if we imposed no condition then by assumption R ≠ 0 and hence R has a maximal (in particular prime) ideal due to (2.4.(iv)). If we imposed p ⊆ q then q ∈ Z. If we imposed a ⊆ p then by assumption a ≠ R and hence there is a maximal (in particular prime) ideal m containing a ⊆ m, due to (2.4.(ii)) again. Finally, if we imposed a ⊆ p ⊆ q then we assumed a ⊆ q and hence q ∈ Z. Now let P ⊆ Z be any chain in Z and p := ⋂P. Then by (i) p is a prime ideal of R again. Also a ⊆ p and p ⊆ q are clear (if they have been imposed). Hence p ∈ Z is an upper bound of P (under ⊇). And hence there is a maximal element p∗ ∈ Z∗ by the lemma of Zorn. But as p∗ is maximal with respect to ⊇ it is minimal with respect to ⊆.

(iv) By assumption a ⊆ q and (iii), there is some prime ideal p∗ minimal among the ideals with a ⊆ p ⊆ q, that is p∗ ∈ {p ∈ spec R | a ⊆ p ⊆ q}∗, and in particular a ⊆ p∗ ⊆ q. It remains to prove that p∗ also is a minimal element of {p ∈ spec R | a ⊆ p}. Thus suppose we are given any prime ideal p with a ⊆ p and p ⊆ p∗. Then in particular p ⊆ q, and by the minimality of p∗ this then implies p = p∗. Thus p∗ even is minimal in this larger set.

(v) Let us take Z := {b ⊴i R | a ⊆ b ⊆ R \ U}; then Z is partially ordered under ⊆. Clearly Z ≠ ∅ is non-empty, since a ∈ Z, as we have assumed a ∩ U = ∅. Now let B ⊆ Z be any chain in Z; then c := ⋃B is an ideal, due to (2.4). And if we pick any b ∈ B then a ⊆ b ⊆ c and hence a ⊆ c. And if x ∈ c then there is some b ∈ B ⊆ Z such that x ∈ b ⊆ R \ U. Hence we also find c ⊆ R \ U, such that c ∈ Z again. That is, we have found an upper bound c of B. Hence there is a maximal element b∗ ∈ Z by the lemma of Zorn. Now let p ∈ Z∗ be any maximal element of Z. As p ⊆ R \ U and 1 ∈ U we have 1 ∉ p, such that p ≠ R is non-full. Now consider any a, b ∈ R such that ab ∈ p, but suppose a ∉ p and b ∉ p. As we have p ⊂ p + aR and p ∈ Z∗ is maximal, we have (p + aR) ⊄ R \ U. That is, we may choose u ∈ (p + aR) ∩ U, say u = p + αa for some p ∈ p and α ∈ R. Analogously we can find some v = q + βb ∈ (p + bR) ∩ U with q ∈ p and β ∈ R. Thereby we get

    uv = (p + αa)(q + βb) = pq + αaq + βbp + αβab ∈ p


since p, q and ab ∈ p. But as U is multiplicatively closed and u, v ∈ U, we also have uv ∈ U. That is uv ∈ p ∩ U, a contradiction to p ∈ Z. Hence we have a ∈ p or b ∈ p, which means that p is prime.

□

Proof of (2.17):

(i) First of all a ⊆ a : b, because if a ∈ a then also ab ∈ a, as a is an ideal. Next we will prove that a : b ⊴i R is an ideal of R. It is clear that 0 ∈ a : b since 0b = 0 ∈ a. Now let p and q ∈ a : b, that is pb and qb ∈ a. Then (p + q)b = pb + qb ∈ a, (−p)b = −(pb) ∈ a and for any a ∈ R we also get (ap)b = a(pb) ∈ a, which means p + q, −p and ap ∈ a : b respectively.
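In Z the ideal quotient can be computed explicitly: assuming the standard fact (not stated in the text) that aZ : b = (a / gcd(a, b))Z, the following sketch compares the defining condition with that generator on a window of integers; `ideal_quotient` is an ad-hoc helper:

```python
from math import gcd

def ideal_quotient(a, b, bound=200):
    # aZ : b = { x ∈ Z | x*b ∈ aZ }, sampled on a symmetric window of integers
    return {x for x in range(-bound, bound) if (x * b) % a == 0}

for a, b in [(12, 8), (9, 6), (10, 7), (8, 12)]:
    g = a // gcd(a, b)            # expected generator of aZ : b
    expected = {x for x in range(-200, 200) if x % g == 0}
    assert ideal_quotient(a, b) == expected
```

For instance 12Z : 8 = 3Z, since 8x ∈ 12Z forces 3 | x, in line with a ⊆ a : b proved above (12Z ⊆ 3Z).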

(ii) First of all a ⊆ √a, because if a ∈ a then for k = 1 we get a^k = a ∈ a. Next we will prove that √a ⊴i R is an ideal. It is clear that 0 ∈ a ⊆ √a. Now let a and b ∈ √a, that is a^k ∈ a and b^l ∈ a for some k, l ∈ N. Then we get (writing C(k+l, i) for the binomial coefficient)

    (−a)^k = (−1)^k a^k
    (ab)^l = a^l b^l
    (a + b)^{k+l} = ∑_{i=0..k+l} C(k+l, i) a^i b^{k+l−i}
                  = ∑_{i=0..k} C(k+l, i) a^i b^{k+l−i} + ∑_{i=k+1..k+l} C(k+l, i) a^i b^{k+l−i}
                  = ∑_{i=0..k} C(k+l, i) a^i b^{k+l−i} + ∑_{j=1..l} C(k+l, k+j) a^{k+j} b^{l−j}
                  = (∑_{i=0..k} C(k+l, i) a^i b^{k−i}) b^l + (∑_{j=1..l} C(k+l, k+j) a^j b^{l−j}) a^k

As all these elements are contained in a again, we have found a + b, −a and ab ∈ √a. Thus it remains to prove that √a is a radical ideal. Thus let a ∈ R be contained in the radical of √a, that is a^k ∈ √a for some k ∈ N. Then there is some l ∈ N such that a^{kl} = (a^k)^l ∈ a. But this already means a ∈ √a, and the converse inclusion is clear.
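The nontrivial point of (ii) is that a sum of elements of √a again lies in √a, as the binomial expansion shows. A concrete check (not from the text) in Z/72Z, where the nilpotent elements, i.e. √0, are exactly the multiples of 6 = rad(72):

```python
def is_nilpotent(x, n):
    # does some power of x vanish in Z/nZ? (powers cycle within n steps)
    y = x % n
    for _ in range(n):
        if y == 0:
            return True
        y = (y * x) % n
    return False

n = 72                          # = 2^3 * 3^2, so nilpotents are multiples of 6
nil = {x for x in range(n) if is_nilpotent(x, n)}
assert nil == {x for x in range(n) if x % 6 == 0}
# sqrt(0) is an ideal: closed under +, - and under multiplication by any r ∈ R
assert all((a + b) % n in nil and (-a) % n in nil for a in nil for b in nil)
assert all((r * a) % n in nil for a in nil for r in range(n))
```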

(iii) If a ∈ a : b then ab ∈ a ⊆ b and hence a ∈ b : b again. Likewise, if we are given a ∈ √a, then there is some k ∈ N such that a^k ∈ a ⊆ b and hence a ∈ √b already.

(iv) If 1 ∈ R = a : b then b = 1b ∈ a. And if b ∈ a then for any a ∈ R we have ab ∈ a, since a is an ideal. But this also means a : b = R. Now let p be a prime ideal of R; if b ∈ p then we have already seen p : b = R. Thus assume b ∉ p. Then a ∈ p : b is equivalent to ab ∈ p for any a ∈ R. But as p is prime this implies a ∈ p or b ∈ p. The latter is untrue by assumption, so we get a ∈ p. That is, we have proved p : b ⊆ p, and the converse inclusion has been proved in (i) generally.


(v) If ∅ ≠ A ⊆ srad R ⊆ ideal R, then the intersection ⋂A ⊴i R already is an ideal of R due to (1.47). Thus it remains to show that ⋂A is radical. Let a^k ∈ ⋂A, that is a^k ∈ a for any a ∈ A. But as a is radical we find that a ∈ a. As this is true for any a ∈ A, we have found a ∈ ⋂A.

(v) Consider any a ∈ R; then a is contained in the intersection of all ai : b iff a ∈ ai : b for any i ∈ I. And this again is equivalent to ab ∈ ai for any i ∈ I. Thus we have already found the equivalence

    a ∈ ⋂_{i∈I} (ai : b) ⟺ ab ∈ ⋂_{i∈I} ai ⟺ a ∈ (⋂_{i∈I} ai) : b

□

Proof of (2.18):
(c) ⟺ (a) is trivial and so is (a) ⇒ (b). For the converse implication it suffices to remark that a ⊆ √a is true for any ideal a ⊴i R (as a ∈ a implies a = a^1 ∈ a). Thus we prove (a) ⇒ (d): let b + a be a nilpotent of R/a, that is b^k + a = (b + a)^k = 0 + a for some k ∈ N. This means b^k ∈ a and hence b ∈ √a ⊆ a by assumption. And from b ∈ a we find b + a = 0 + a. Conversely (d) ⇒ (c): consider any b ∈ R such that b^k ∈ a. Then (b + a)^k = b^k + a = 0 + a and hence b + a is a nilpotent of R/a. By assumption this means b + a = 0 + a, or in other words b ∈ a.

□

Proof of (2.19):
The equivalences of the respective properties of the ideal and its quotient ring have been shown in (2.5), (2.9) and (2.18) respectively. Hence if m ⊴i R is maximal then R/m is a field. In particular R/m is a non-zero integral domain, which means that m is prime. And if p ⊴i R is prime, then R/p is an integral domain and hence reduced, which again means that p is a radical ideal. Yet we also wish to present a direct proof of this:

• m maximal ⇒ m prime: consider a, b ∈ R with ab ∈ m, but suppose a ∉ m and b ∉ m. This means m ⊂ m + aR and hence m + aR = R, as m is maximal. Hence there are m ∈ m and α ∈ R such that m + αa = 1. Likewise there are n ∈ m and β ∈ R such that n + βb = 1. Thereby

    1 = (m + αa)(n + βb) = mn + αan + βbm + αβab

But as m, n and ab ∈ m we hence found 1 ∈ m, which means m = R. But this contradicts m being maximal.

• p prime ⇒ p radical: consider a ∈ R with a^k ∈ p. As p ≠ R we have k ≥ 1 (as else 1 = a^0 ∈ p such that p = R). But p is prime, and hence a^k ∈ p for some k ≥ 1 implies a ∈ p due to (2.11.(i)). This means that p is radical, due to (2.18.(c)).

□

Proof of (2.20):


(i) We have to prove √a = ⋂ {p | a ⊆ p}. Thereby the inclusion ⊆ is clear, as any p contains a. For the converse inclusion we are given some a ∈ R such that a ∈ p for any prime ideal p ⊴i R with a ⊆ p. Now let U := {1, a, a², ...}; then it is clear that U is multiplicatively closed. Suppose a ∩ U = ∅; then by (2.14.(iv)) there is an ideal b ⊴i R maximal with a ⊆ b ⊆ R \ U, and this ideal b is prime. But as b ∩ U = ∅ and a ∈ U we have a ∉ b, even though b is a prime ideal with a ⊆ b. A contradiction. Thus we have a ∩ U ≠ ∅, that is there is some b ∈ U such that b ∈ a. By construction of U, b is of the form b = a^k for some k ∈ N. And this means a ∈ √a.

(i) Let us denote V(a) := {p ∈ spec R | a ⊆ p} and M := V(a)∗. Then in (i) above we have just seen the identity

    √a = ⋂ V(a)

We now have to prove ⋂ V(a) = ⋂ M. As M ⊆ V(a) the inclusion ⊆ is clear. For the converse inclusion we are given any a ∈ ⋂ M and any q ∈ V(a) and need to show a ∈ q. But as a ⊆ q there is some p∗ ∈ M such that a ⊆ p∗ ⊆ q. And as a ∈ ⋂ M we find a ∈ p∗ and hence a ∈ q. Thus we have also established the inclusion ⊇.

(ii) By definition it is clear that nil R = √0 is the radical of the zero-ideal. And the further equalities given are immediate from (i) above.
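The identity nil R = √0 = intersection of the prime ideals can be checked in Z/nZ, whose prime ideals are pZ/nZ for the primes p dividing n. A small sketch (not from the text; both helpers are ad hoc):

```python
def is_prime_int(p):
    return p > 1 and all(p % q for q in range(2, p))

def nilradical(n):
    # nil(Z/n) = sqrt(0): the classes x with x^k = 0 for some k
    out = set()
    for x in range(n):
        y = x % n
        for _ in range(n):
            if y == 0:
                out.add(x)
                break
            y = (y * x) % n
    return out

def intersection_of_primes(n):
    # intersect the prime ideals pZ/nZ for the prime divisors p of n
    ps = [p for p in range(2, n + 1) if n % p == 0 and is_prime_int(p)]
    return {x for x in range(n) if all(x % p == 0 for p in ps)}

for n in [4, 12, 18, 60, 72]:
    assert nilradical(n) == intersection_of_primes(n)
```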

(iii) Let a ∈ R be contained in the radical of √a, that is a^k ∈ √a for some k ∈ N. And hence there is some l ∈ N such that a^{kl} = (a^k)^l ∈ a. But this already means a ∈ √a, and the converse inclusion is clear.

(v) As a b ⊆ a ∩ b we have √(a b) ⊆ √(a ∩ b) by (i). Next let a ∈ √(a ∩ b), that is there is some k ∈ N such that a^k ∈ a ∩ b. Now a^k ∈ a and a^k ∈ b implies a ∈ √a ∩ √b. Finally consider some a ∈ √a ∩ √b, that is a^i ∈ a and a^j ∈ b for some i, j ∈ N. Then a^{i+j} = a^i a^j ∈ a b and hence a ∈ √(a b). Altogether we have proved the following chain of inclusions

    √(a b) ⊆ √(a ∩ b) ⊆ √a ∩ √b ⊆ √(a b)
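These radical identities can be verified in Z, where √(mZ) is generated by the squarefree kernel of m, mZ · nZ = mnZ and mZ ∩ nZ = lcm(m, n)Z; the following sketch (not from the text; `radical_gen` is an ad-hoc helper) checks all three radicals agree:

```python
from math import gcd

def radical_gen(m):
    # squarefree kernel of m: generator of sqrt(mZ)
    r = 1
    for p in range(2, m + 1):
        if m % p == 0 and all(p % q for q in range(2, p)):
            r *= p
    return r

for m, n in [(12, 18), (8, 15), (50, 20)]:
    lcm = m * n // gcd(m, n)
    r1 = radical_gen(m * n)                 # sqrt(a b)
    r2 = radical_gen(lcm)                   # sqrt(a ∩ b)
    ra, rb = radical_gen(m), radical_gen(n)
    r3 = ra * rb // gcd(ra, rb)             # sqrt(a) ∩ sqrt(b), via lcm of generators
    assert r1 == r2 == r3
```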

(vi) By induction on (v) we know that √(a^k) = √a ∩ ··· ∩ √a (k times). And this obviously equals √a, such that we get the equality claimed.

(iv) In a first step let us assume k = 1, that is a ⊆ p ⊆ √a. Then in particular we have a ⊆ p and hence √a ⊆ √p. Yet as p is prime we also have √p = p due to (2.19). Thereby we get √a ⊆ p ⊆ √a by assumption, that is p = √a. Now consider an arbitrary k ∈ N. Then by (vi) we have a^k ⊆ p ⊆ √a = √(a^k). Thus by the case k = 1 (using a^k instead of a) we get p = √(a^k) = √a, the latter by (vi) again.

(vii) By assumption a is finitely generated, that is there are ai ∈ R such that a = ⟨a1, ..., an⟩i. And as a is contained in the radical of b, we find that for any i ∈ 1...n there is some k(i) ∈ N such that ai^{k(i)} ∈ b.


Now let k := k(1) + ··· + k(n) and consider f1, ..., fk ∈ a. That is

    fj = ∑_{i=1..n} fi,j ai

for some fi,j ∈ R. Using (1.37) we now expand the product of the fj

    ∏_{j=1..k} fj = ∏_{j=1..k} ∑_{i=1..n} fi,j ai = ∑_{i∈I} ∏_{j=1..k} fi_j,j ai_j

where I = (1...n)^k and i = (i1, ..., ik) ∈ I. For any h ∈ 1...n we now let m(h) := #{j ∈ 1...k | ij = h}, the number of times h appears in the collection of the ij. Then it is clear that m(1) + ··· + m(n) = k = k(1) + ··· + k(n). Hence there has to be some index h ∈ 1...n such that k(h) ≤ m(h). Thereby ah^{m(h)} is divisible by ah^{k(h)} ∈ b, such that

    bi := ∏_{j=1..k} ai_j = ∏_{h=1..n} ah^{m(h)} ∈ b

This now can be used to show f1⋯fk ∈ b. Simply rearrange to find

    ∏_{j=1..k} fj = ∑_{i∈I} ∏_{j=1..k} fi_j,j ai_j = ∑_{i∈I} (∏_{j=1..k} fi_j,j) bi ∈ b

Now recall that, by definition, a^k consists precisely of sums of elements of the form f1⋯fk where fj ∈ a. As we have seen, any such element is contained in b and hence a^k ⊆ b.

(viii) If a is contained in the radical of the intersection of the ai, then there is some k ∈ N such that a^k is contained in the intersection of the ai. That is a^k ∈ ai for any i ∈ I and hence a ∈ √ai for any i ∈ I.

(viii) Let a be contained in the sum of the radicals √ai. That is, a is a finite sum of the form a = a1 + ··· + an where aj is contained in the radical of ai(j). Thereby for any j ∈ 1...n we get

    aj ∈ √ai(j) ⇒ ∃ k(j) ∈ N : aj^{k(j)} ∈ ai(j)

Now let k := k(1) + ··· + k(n) ∈ N; then by the polynomial rule (1.37), with multinomial coefficients C(k, α),

    a^k = ∑_{|α|=k} C(k, α) ∏_{j=1..n} aj^{α(j)}

Suppose that for every j ∈ 1...n we had α(j) < k(j); then |α| = α(1) + ··· + α(n) < k(1) + ··· + k(n) = k, a contradiction. Hence there is some j ∈ 1...n such that k(j) ≤ α(j). And for this j we have aj^{α(j)} ∈ ai(j). Thus a^k is contained in ai(1) + ··· + ai(n), and in particular in the sum of all ai; that is, a lies in the radical of the sum of all ai.


□

Proof of (2.23):
Fix any commutative ring (E, +, ·) and consider the polynomial ring S := E[ti | 1 ≤ i ∈ N] in countably infinitely many variables over E. Then we pass to the quotient in which, for any 1 ≤ i ∈ N, we have ti^i = 0. Formally

    R := S/v    where    v := ⟨ti^i | 1 ≤ i ∈ N⟩i

For any f ∈ S let us denote its residue class in R by f := f+v. And furtherlet us define the size of f to be the maximum index i such that ti appearsamong the variable symbols of f . Formally that is

size(f) := max 1 ≤ i ∈ N | ∃α : f [α] 6= 0, αi 6= 0

Note that this truly is finite, as there only are finitely many α such thatf [α] 6= 0 and for any α = (αi) there also are finitely many i only, suchthat αi 6= 0. As our exemplary ideal let us take the zero-ideal a := 0. Byconstruction we have (ti)

i = 0 and hence ti ∈√

0. On the other hand wehave (for any n ∈ N with n < i)

(ti)n ∈ (

√0)n but (ti)

n 6= 0

Thus for any n ∈ N let us just take some i ∈ N with n < i. Then tidemonstrates that (

√0)n is not contained in 0. Also

√0 is not finitely

generated. Because if we consider a finite collection of elements f1, . . . , fk ∈√0 then just take i > max size(fj) | j ∈ 1 . . . k . Then it is clear that

fj ∈ E[t1, . . . , ti−1] and hence ti 6∈ 〈f1, . . . , fk 〉i. And thereby

ti 6∈ 〈f1, . . . , fk 〉i/v = 〈f1, . . . , fk 〉i

2
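The nilpotency behaviour used in this counterexample can be sketched computationally. The class below models the single-variable truncation E[t]/(t^i) for E = ℤ (an illustrative model of one generator t_i only, not of the full ring R of the proof; `Truncated` is a name introduced here):

```python
class Truncated:
    """Elements of Z[t]/(t^i), stored as coefficient tuples of length i."""

    def __init__(self, i, coeffs):
        self.i = i
        self.coeffs = tuple((coeffs + [0] * i)[:i])  # drop t^i and higher

    def __mul__(self, other):
        out = [0] * self.i
        for a, ca in enumerate(self.coeffs):
            for b, cb in enumerate(other.coeffs):
                if a + b < self.i:  # any contribution to t^i or above dies
                    out[a + b] += ca * cb
        return Truncated(self.i, out)

    def is_zero(self):
        return all(c == 0 for c in self.coeffs)

i = 5
t = Truncated(i, [0, 1])    # the residue class of t, nilpotent of index i
power = t
for _ in range(i - 2):
    power = power * t        # power = t^(i-1)
assert not power.is_zero()   # t^(i-1) != 0
assert (power * t).is_zero() # but t^i = 0
```

This mirrors the statement of the proof: t_i + v lies in √0 while its powers below the i-th remain non-zero.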

Proof of (2.26):

(i) Consider any a ∈ R; then a ∈ (√b) ∩ R is equivalent to ϕ(a) ∈ √b. That is, iff there is some k ∈ ℕ such that ϕ(a^k) = ϕ(a)^k ∈ b. This again is a^k ∈ b ∩ R, which is equivalent to a ∈ √(b ∩ R).

(ii) In order to prove (√a)S ⊆ √(aS) we have to verify ϕ(√a) ⊆ √(aS). Thus consider any a ∈ √a, that is a^k ∈ a for some k ∈ ℕ. But from this we get ϕ(a)^k = ϕ(a^k) ∈ ϕ(a) ⊆ aS. And this again is ϕ(a) ∈ √(aS), which had to be shown. Next, as a ⊆ √a we clearly have aS ⊆ (√a)S, which gives rise to the inclusion

√(aS) ⊆ √((√a)S)

Conversely consider any f ∈ S such that f^k ∈ (√a)S for some k ∈ ℕ. That is there are g_i ∈ S and a_i ∈ √a such that

f^k = g_1 ϕ(a_1) + ··· + g_n ϕ(a_n)


As a_i is contained in √a there is some k(i) ∈ ℕ such that a_i^{k(i)} ∈ a. Now let us abbreviate b_i := ϕ(a_i), l := k(1) + ··· + k(n) and m := k · l. Then we get

f^m = (g_1 b_1 + ··· + g_n b_n)^l = Σ_{|α|=l} (l α) (g_1 b_1)^{α(1)} ··· (g_n b_n)^{α(n)}

Now fix any α and suppose for any i ∈ 1...n we had α(i) < k(i); then we had l = |α| = α(1) + ··· + α(n) < k(1) + ··· + k(n) = l, an obvious contradiction. Hence for any α there is some j ∈ 1...n such that α(j) ≥ k(j). And for this j we get

b_j^{α(j)} = b_j^{α(j)−k(j)} b_j^{k(j)} = b_j^{α(j)−k(j)} ϕ(a_j^{k(j)}) ∈ aS

But as this holds true for any α with |α| = l we find that f^m ∈ aS. And this again means that f is contained in the radical of aS.

(iii) The equivalence "b is a ? iff ϕ^{−1}(b) is a ?" has already been proved in the correspondence theorem (1.61). Thus from now on ϕ is surjective; then ϕ(a) is an ideal of S due to (1.71). Now compute

ϕ^{−1}ϕ(a) = { b ∈ R | ϕ(b) ∈ ϕ(a) }
= { b ∈ R | ∃ a ∈ a : ϕ(b) = ϕ(a) }
= { b ∈ R | ∃ a ∈ a : b − a ∈ kn(ϕ) }
= { b ∈ R | ∃ a ∈ a : b ∈ a + kn(ϕ) }
= ∪ { a + kn(ϕ) | a ∈ a }
= a + kn(ϕ)

Now by the first claim b := ϕ(a) is a ? iff ϕ^{−1}(b) = ϕ^{−1}ϕ(a) = a + kn(ϕ) is a ?. And this already is the second claim.

2

Proof of (3.67):

• We will first prove the implication (a) =⇒ (b): that is we regard a non-empty family P ⊆ subm(M) of submodules of M. We will give a proof by contradiction, that is we suppose P had no maximal element. That is for any P ∈ P there is some Q ∈ P with P ⊆ Q but P ≠ Q. As P is non-empty we can start with some P_0 ∈ P. Suppose we have already found P_0 ⊂ P_1 ⊂ ... ⊂ P_k where P_i ∈ P. As P has no maximal elements P_k is not maximal, that is P_k ⊂ P_{k+1} for some P_{k+1} ∈ P. Continuing indefinitely we would find a non-stationary chain P_0 ⊂ P_1 ⊂ ... ⊂ P_k ⊂ P_{k+1} ⊂ ... in contradiction to (a).

• Next let us prove (b) =⇒ (c): that is we are given any submodule Q ≤m M of M. To do this let us regard the set of finitely generated submodules of Q

P := { P ≤m Q | P finitely generated }


Clearly 0 ∈ P, such that P is non-empty. By assumption (b), P contains a maximal element P* ∈ P. Now for any x ∈ Q it is clear that P := P* + lh(x) is another finitely generated submodule of Q [clearly if P* = lh(x_1, ..., x_n) then P = lh(x_1, ..., x_n, x)]. That is P ∈ P again and P* ⊆ P. By maximality of P* this is P* = P and hence x ∈ P = P*. As x ∈ Q has been chosen arbitrarily this means Q ⊆ P* and hence Q = P*. In particular Q ∈ P is finitely generated itself.

• Let us finally close the circle by proving (c) =⇒ (a): that is we regard some ascending chain P_0 ⊆ P_1 ⊆ P_2 ⊆ ... of submodules P_k ≤m M of M. As the P_k form a chain we know by (3.19.(v)) that the union

P := ∪_{k∈ℕ} P_k ≤m M

is a submodule of M again. And by assumption (c) it is finitely generated, that is P = lh(x_1, ..., x_r) for some x_i ∈ P. That is for any i ∈ 1...r there is some k(i) ∈ ℕ such that x_i ∈ P_{k(i)}; now take s := max { k(1), ..., k(r) } ∈ ℕ. Then - as the P_k form an ascending chain - we have x_i ∈ P_{k(i)} ⊆ P_s again. In particular P = lh(x_1, ..., x_r) ⊆ P_s such that for any j ∈ ℕ we have P ⊆ P_s ⊆ P_{s+j} ⊆ P, which implies P_{s+j} = P_s. This means that the chain (P_k) becomes stationary at the index s, which proves (a).

2
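As a concrete instance of the equivalence (a) ⇔ (c) (an illustrative sketch, not from the text: `omega` is a helper introduced here): in the noetherian ring ℤ an ascending chain of ideals (a_0) ⊆ (a_1) ⊆ ... means a_{k+1} | a_k, and every strict inclusion drops the number of prime factors by at least one, so the chain must stabilize:

```python
def omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count, p = 0, 2
    while n > 1:
        if n % p == 0:
            count += 1
            n //= p
        else:
            p += 1
    return count

# an ascending chain (360) ⊂ (180) ⊂ (60) ⊂ (12) ⊂ (4) ⊂ (2) ⊂ (1) = Z;
# each strict step divides out at least one prime, so there are at most
# omega(a_0) strict inclusions before the chain becomes stationary
chain = [360, 180, 60, 12, 4, 2, 1]
strict_steps = sum(1 for a, b in zip(chain, chain[1:]) if a != b)
assert strict_steps <= omega(chain[0])
print(strict_steps, omega(chain[0]))  # 6 6
```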

Proof of (3.71):

(i) If M is ? then P clearly is ? as well, as any chain of submodules of P also is a chain of submodules of M. Also M/P is ? again: by the correspondence theorem (3.14.(ii)) any submodule of M/P is of the form Q/P for some submodule Q ≤m M. Hence any chain of submodules of M/P corresponds to a chain of submodules in M and hence stabilizes.

Conversely suppose both P and M/P are noetherian; then we have to prove that M is noetherian. To do this we regard an ascending chain of submodules

Q_0 ⊆ Q_1 ⊆ ... ⊆ Q_k ⊆ Q_{k+1} ⊆ ...

Then Q_0 ∩ P ⊆ Q_1 ∩ P ⊆ Q_2 ∩ P ⊆ ... is an ascending chain of submodules. As P is noetherian it stabilizes, that is there is some r ∈ ℕ such that for any i ∈ ℕ we have Q_{r+i} ∩ P = Q_r ∩ P. Also we obtain an ascending chain of submodules of M/P by

(Q_0 + P)/P ⊆ (Q_1 + P)/P ⊆ (Q_2 + P)/P ⊆ ...

As M/P is noetherian as well this chain stabilizes, too, that is there is some t ∈ ℕ such that for all i ∈ ℕ we have (Q_{t+i} + P)/P = (Q_t + P)/P. By the correspondence theorem this already is Q_{t+i} + P = Q_t + P. Now choose s := max { r, t }; then for any i ∈ ℕ we have

Q_{s+i} ∩ P = Q_s ∩ P and Q_{s+i} + P = Q_s + P


Now by the modular rule (3.17) this implies Q_{s+i} = Q_s, which means that M is noetherian, as well.

In case that both P and M/P are artinian, we regard a descending chain Q_0 ⊇ Q_1 ⊇ Q_2 ⊇ ... and use the same arguments as above to show that it stabilizes, as well. All we have to do is replace any "⊆" by "⊇".

(iv) Clearly M is isomorphic to the submodule M ⊕ 0 of M ⊕ N by virtue of M ≅ M ⊕ 0 : x ↦ (x, 0). Also it is clear that N is isomorphic to the quotient (M ⊕ N)/(M ⊕ 0) under the isomorphism

N ≅ (M ⊕ N)/(M ⊕ 0) : y ↦ (0, y) + (M ⊕ 0)

Thus regarding M ⊕ 0 as a submodule of M ⊕ N we know that M ⊕ N is ? if and only if both M ⊕ 0 and (M ⊕ N)/(M ⊕ 0) are ?. But due to the isomorphisms this again is equivalent to both M and N being ?.

(ii) It is really easy to see that M/(P ∩ Q) can be embedded into (M/P) ⊕ (M/Q) by virtue of the following map

M/(P ∩ Q) → (M/P) ⊕ (M/Q) : x + (P ∩ Q) ↦ (x + P, x + Q)

Because, if x + P = y + P and x + Q = y + Q then we have both x − y ∈ P and x − y ∈ Q. And this means x − y ∈ P ∩ Q, which yields x + (P ∩ Q) = y + (P ∩ Q). Hence the above map is injective. As both M/P and M/Q are ?, so is (M/P) ⊕ (M/Q) according to (iv). But as M/(P ∩ Q) is isomorphic to a submodule of (M/P) ⊕ (M/Q) and this isomorphic copy is ? according to (i), so is M/(P ∩ Q).

(v) As ϕ : M → N is surjective, the first isomorphism theorem (3.51.(ii)) yields an isomorphism M/kn(ϕ) ≅ N by virtue of x + kn(ϕ) ↦ ϕ(x). In particular N is isomorphic to the quotient module M/kn(ϕ) of M. Now, as M is ?, we find that M/kn(ϕ) is ? due to (i) and hence N is ? because of the isomorphy.

(iii) As P and Q are ?, by (iv) we know that P ⊕ Q is ?. But clearly there is a surjective R-module homomorphism P ⊕ Q → P + Q by letting (x, y) ↦ x + y. The surjectivity is trivial and the R-linearity is obvious: a(x, y) = (ax, ay) ↦ ax + ay = a(x + y) and (u, v) + (x, y) = (u + x, v + y) ↦ (u + x) + (v + y) = (u + v) + (x + y). Hence by (v) we find that P + Q is ? again, as it is an epimorphic image of P ⊕ Q.

(vi) Since M is finitely generated we can choose some generators x_k ∈ M, that is M = lh(x_1, ..., x_n). Then we clearly have an R-module epimorphism by letting

R^n → M : (a_1, ..., a_n) ↦ Σ_{k=1}^n a_k x_k

As R is ? as a ring it also is ? as an R-module (in fact this even is equivalent). Using (iv) and induction on n it is clear that R^n = R ⊕ R ⊕ ··· ⊕ R (n times) is ? again. Hence M is ?, by (v), as it is an epimorphic image of a ? module.


(vii) Let us denote the R-module homomorphisms by ϕ : L → M and ψ : M → N, that is we regard the following exact chain

0 → L --ϕ→ M --ψ→ N → 0

As ϕ is injective we find that kn(ψ) = im(ϕ) is isomorphic to L under L → kn(ψ) : x ↦ ϕ(x). And as ψ is surjective, by the first isomorphism theorem (3.51.(ii)), we also get the isomorphism

M/kn(ψ) ≅ N : y + kn(ψ) ↦ ψ(y)

By (i) we know that M is ? if and only if both kn(ψ) and M/kn(ψ) are ?. And due to the above isomorphisms, this now is equivalent to both L and N being ?.

2

Proof of (3.73):

• (a) =⇒ (b): given any x ∈ M with x ≠ 0 we clearly know that P = Rx ≤m M is a submodule of M. And since x = 1x ∈ P we have P ≠ 0, which only leaves M = Rx, by assumption (a).

• (b) =⇒ (a): consider any submodule P ≤m M of M. If P = 0, then there is nothing to prove, else pick up some x ∈ P with x ≠ 0. Now we get M = Rx by assumption (b), but as also M = Rx ⊆ P ⊆ M this implies P = M.

• (b) =⇒ (c): choose any x ∈ M with x ≠ 0, which is possible since M ≠ 0 is non-zero. By assumption (b) we then have M = Rx. Now let m := ann(x), an ideal of R. Then by (3.16.(ii)) we have the following isomorphy of R-modules

Φ : R/ann(x) ≅ M : b + ann(x) ↦ bx

Now suppose a is an ideal of R such that m ⊆ a. Then the above isomorphy transfers a/m to a submodule P := Φ(a/m) ≤m M of M. But as (b) implies (a) this means P = 0 or P = M. In case P = 0 we have a = m and in case P = M we have a = R. This shows that m even is a maximal ideal of R.

• (c) =⇒ (a): suppose Φ : R/m ≅ M is that isomorphism of R-modules and consider some submodule P ≤m M of M. Then a := Φ^{−1}(P) ≤m R/m is an R-submodule of R/m. As R is commutative it is clear that a even is an ideal of R/m. But as m is maximal, the only ideals of R/m are 0 and R/m itself. In case a = 0 we see P = Φ(a) = 0 and in case a = R/m we find P = im(Φ) = M. Hence 0 and M are the only submodules of M, as has been claimed in (a).

2
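The equivalences just proved can be illustrated over R = ℤ (a sketch, not from the text: `submodule_count` is a helper introduced here): the ℤ-module ℤ/nℤ is simple precisely when its annihilator nℤ is a maximal ideal of ℤ, i.e. when n is prime:

```python
def submodule_count(n):
    """Submodules of the Z-module Z/nZ correspond to divisors of n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Z/nZ is simple (exactly the two submodules 0 and M) iff n is prime,
# i.e. iff ann(Z/nZ) = nZ is a maximal ideal of Z
for n in range(2, 50):
    is_simple = submodule_count(n) == 2
    is_prime = all(n % d for d in range(2, n))
    assert is_simple == is_prime
print("simple Z/nZ <-> n prime, checked for n < 50")
```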

Proof of (3.76):
For the moment being - as long as we have not proved that the length of M is uniquely determined - let us denote by ℓ(M) the minimum length of a composition series of M

ℓ(M) := min { r ∈ ℕ | (M_0, M_1, ..., M_r) is a composition series of M }

Also, given any two submodules U and V ≤m M, we denote the set of all chains of submodules that start in U and end in V by C(U, V), that is

C(U, V) := { (P_0, P_1, ..., P_r) | ∀ i ∈ 0...r : P_i ≤m M and U = P_0 ⊂ P_1 ⊂ ... ⊂ P_r = V }

And we say that two chains (P_0, P_1, ..., P_r) and (Q_0, Q_1, ..., Q_s) in C(U, V) are equivalent, written as (P_0, P_1, ..., P_r) ∼ (Q_0, Q_1, ..., Q_s), iff r = s and there is some permutation σ ∈ S_r such that for any i ∈ 1...r we have the following isomorphy of the quotient modules

P_i/P_{i−1} ≅ Q_{σ(i)}/Q_{σ(i)−1}

And in this case we say that σ assigns the equivalence of (P_0, P_1, ..., P_r) and (Q_0, Q_1, ..., Q_r). Thereby ∼ truly is an equivalence relation: clearly (P_0, P_1, ..., P_r) ∼ (P_0, P_1, ..., P_r) is assigned by the identity permutation in S_r. And if (P_0, P_1, ..., P_r) ∼ (Q_0, Q_1, ..., Q_s) is assigned by σ ∈ S_r then the reverse relation (Q_0, Q_1, ..., Q_s) ∼ (P_0, P_1, ..., P_r) is assigned by the inverse permutation σ^{−1} ∈ S_r. And finally, if (N_0, N_1, ..., N_r) ∼ (P_0, P_1, ..., P_s) is assigned by σ and (P_0, P_1, ..., P_s) ∼ (Q_0, Q_1, ..., Q_t) is assigned by τ then we also get the equivalency (N_0, N_1, ..., N_r) ∼ (Q_0, Q_1, ..., Q_t) that is assigned by the composition τσ, since

N_i/N_{i−1} ≅ P_{σ(i)}/P_{σ(i)−1} ≅ Q_{τ(σ(i))}/Q_{τ(σ(i))−1}

(i) Let (M_0, M_1, ..., M_r) be a composition series of M of minimal length r = ℓ(M) and denote P_i := M_i ∩ P. Let us also denote

ϕ_i : P_i → M_i/M_{i−1} : x ↦ x + M_{i−1}

then it is clear that kn(ϕ_i) = M_{i−1} ∩ P = P_{i−1}. Hence we find an R-module-monomorphism (this is the first isomorphism theorem)

Φ_i : P_i/P_{i−1} → M_i/M_{i−1} : x + P_{i−1} ↦ x + M_{i−1}

That is P_i/P_{i−1} is isomorphic to a submodule of M_i/M_{i−1}. But as M_i/M_{i−1} is a simple module this implies P_i/P_{i−1} = 0 (which yields P_i = P_{i−1}) or Φ_i even is an isomorphism (which yields that P_i/P_{i−1} is simple again). Altogether

Pi = Pi−1 or Φi is an isomorphism

Let us now omit those P_i with P_i = P_{i−1}: if we take the index set I := { i ∈ 1...r | P_i ≠ P_{i−1} } and q := #I, then we may renumber { 0 } ∪ I = { α(0), α(1), ..., α(q) } with 0 = α(0) < α(1) < ··· < α(q) and thereby obtain a composition series of P

0 = P_{α(0)} ⊂ P_{α(1)} ⊂ ... ⊂ P_{α(q)} = P

Thereby P_{α(q)} = M_r ∩ P = M ∩ P = P. Hence we have obtained a composition series of P of length q ≤ r. Suppose we had q = r; then we did not omit any i ∈ 1...r, that is Φ_i is an isomorphism for any i ∈ 1...r. Then we can prove P_i = M_i for any i ∈ 0...r by induction on i: Clearly P_0 = 0 = M_0, and if P_{i−1} = M_{i−1} then, as Φ_i is surjective, every x + M_{i−1} ∈ M_i/M_{i−1} has a representative in P_i, that is M_i = P_i + M_{i−1} = P_i + P_{i−1} = P_i. In particular we get P = M ∩ P = M_r ∩ P = P_r = M_r = M, which contradicts the assumption P ≠ M.

(iii) Recall that a composition series (M_0, M_1, ..., M_r) of M is an element of C(0, M) such that every quotient M_i/M_{i−1} is simple. We will now prove that any two composition series (M_0, M_1, ..., M_r) and (N_0, N_1, ..., N_s) of M are equivalent. Without loss of generality let us assume r ≤ s; then we will use induction on the length r of the shorter chain. To be precise the induction hypothesis will be: if (M_0, M_1, ..., M_r) and (N_0, N_1, ..., N_s) are two composition series of some module M_r = N_s of length r ≤ n, then (M_0, M_1, ..., M_r) ∼ (N_0, N_1, ..., N_s) are equivalent already.

For n = r = 0 we have 0 = M_0 = M_r = M, that is M = 0 and hence s = 0, too. Thus for n = 0 there is nothing to prove. Also if n = r = 1 then M = M/0 = M_1/M_0 and hence M is simple. Therefore (0, M) is the only composition series of M, as M admits no other submodules. So from now on we only have to regard r = n ≥ 2 and can assume that the induction hypothesis is already proved for length n − 1.

As a first case let us assume M_{r−1} ⊆ N_{s−1}. Then we find the inclusions M_{r−1} ⊆ N_{s−1} ⊆ M = M_r. But as M_r/M_{r−1} is simple this already means M_{r−1} = N_{s−1}, as N_{s−1} ≠ N_s = M = M_r. Hence

(M0,M1, . . . ,Mr−1) ∼ (N0, N1, . . . , Ns−1)

by induction hypothesis. But as M_r/M_{r−1} = M/M_{r−1} = M/N_{s−1} = N_s/N_{s−1} even is an equality of R-modules it is clear that this equivalence extends to an equivalency of the same chains, appended by M

(M0,M1, . . . ,Mr−1,Mr) ∼ (N0, N1, . . . , Ns−1, Ns)

Likewise, as a second case, we suppose N_{s−1} ⊆ M_{r−1}; then the inclusions N_{s−1} ⊆ M_{r−1} ⊆ M = N_s imply M_{r−1} = N_{s−1}, as N_s/N_{s−1} is simple and M_{r−1} ≠ M_r = M = N_s. Thus this case ends up in the same equivalence as the first case.

So in the last case we may assume M_{r−1} ⊄ N_{s−1} and N_{s−1} ⊄ M_{r−1}. Then we find the inclusions M_{r−1} ⊂ M_{r−1} + N_{s−1} ⊆ M = M_r such that we get the equality M_{r−1} + N_{s−1} = M. Now let P := N_{s−1} ≠ M and P_i := M_i ∩ P as in (i) above. As we have seen there is some q < r such that α(0) < α(1) < ··· < α(q) satisfies

{ α(0), α(1), ..., α(q) } = { 0 } ∪ I where I = { i ∈ 1...r | P_i ≠ P_{i−1} }


Suppose we had P_{r−1} = P_r = M_r ∩ P = P; then M_{r−1} ∩ P = P_{r−1} = P implies N_{s−1} = P ⊆ M_{r−1}, which has already been dealt with. Hence we have P_r ≠ P_{r−1}, that is r ∈ I and hence necessarily α(q) = r. Let us finally abbreviate Q_i := P_{α(i)} = M_{α(i)} ∩ P; then Q_q = P_{α(q)} = P_r = M_r ∩ P = P = N_{s−1}. Also, as P_{r−1} ≠ P_r, we find from the construction of the α(i) that Q_{q−1} = P_{r−1} (even though it might well be that P_{r−2} = P_{r−1}, but this does not bother us). As we have seen in (i) the Q_i then form a composition series of P = N_{s−1}

0 = Q0 ⊂ Q1 ⊂ . . . ⊂ Qq = Ns−1

That is (Q_0, Q_1, ..., Q_{q−1}, Q_q) = (Q_0, Q_1, ..., Q_{q−1}, N_{s−1}). And as q ≤ r − 1 we may use the induction hypothesis to obtain the equivalency (Q_0, Q_1, ..., Q_{q−1}, N_{s−1}) ∼ (N_0, N_1, ..., N_{s−1}). In particular we may append these chains by N_s to finally arrive at

(Q0, Q1, . . . , Qq−1, Ns−1, Ns) ∼ (N0, N1, . . . , Ns−1, Ns)

We will now prove the equivalency of (Q_0, Q_1, ..., Q_{q−1}, M_{r−1}, M_r) and (Q_0, Q_1, ..., Q_{q−1}, N_{s−1}, N_s), which will be assigned by the transposition τ = (q q+1) swapping the last two quotients. That is we will prove the following isomorphies

M_{r−1}/Q_{q−1} ≅ N_s/N_{s−1} : x + Q_{q−1} ↦ x + N_{s−1}
N_{s−1}/Q_{q−1} ≅ M_r/M_{r−1} : x + Q_{q−1} ↦ x + M_{r−1}

The injectivity is clear from the definition: if x ∈ M_{r−1} with x + N_{s−1} = 0 + N_{s−1} then x ∈ M_{r−1} ∩ N_{s−1} = Q_{q−1}. And likewise, if x ∈ N_{s−1} with x + M_{r−1} = 0 + M_{r−1} then x ∈ N_{s−1} ∩ M_{r−1} = Q_{q−1} again. Hence M_{r−1}/Q_{q−1} is isomorphic to a submodule of the simple module N_s/N_{s−1}. But M_{r−1} = Q_{q−1} = M_{r−1} ∩ P would mean M_{r−1} ⊆ P = N_{s−1}, which has been dealt with in the first case. Hence here x + Q_{q−1} ↦ x + N_{s−1} is an isomorphism. Likewise N_{s−1}/Q_{q−1} is isomorphic to a submodule of the simple module M_r/M_{r−1}, but N_{s−1} = Q_{q−1} = M_{r−1} ∩ N_{s−1} would imply N_{s−1} ⊆ M_{r−1}, which has been dealt with in the second case. Hence x + Q_{q−1} ↦ x + M_{r−1} is an isomorphism, as well.

But as q ≤ r − 1 we may use the induction hypothesis once more to get the equivalency (Q_0, Q_1, ..., Q_{q−1}, M_{r−1}) ∼ (M_0, M_1, ..., M_{r−2}, M_{r−1}). And again this can be appended, to (Q_0, Q_1, ..., Q_{q−1}, M_{r−1}, M_r) ∼ (M_0, M_1, ..., M_{r−2}, M_{r−1}, M_r). So summarizing all the results we have finally proved that

(N0, N1, . . . , Ns−2, Ns−1, Ns) ∼ (Q0, Q1, . . . , Qq−1, Ns−1, Ns)

∼ (Q0, Q1, . . . , Qq−1,Mr−1,Mr)

∼ (M0,M1, . . . ,Mr−2,Mr−1,Mr)

(ii) Let (P_0, P_1, ..., P_k) be a composition series of P and (Q_0, Q_1, ..., Q_n) be a composition series of M/P. Due to the correspondence theorem any submodule Q_i of M/P is of the form Q_i = M_i/P for some submodule M_i ≤m M with P_k = P ⊆ M_i. Then we get

M_1/P_k = M_1/P = Q_1 = Q_1/Q_0

That is: as Q_1/Q_0 is simple, so is M_1/P_k. Also due to the third isomorphism theorem (3.51) we also find for any i ∈ 2...n

Q_i/Q_{i−1} = (M_i/P)/(M_{i−1}/P) ≅ M_i/M_{i−1}

where this isomorphism is given by (x + P) + (M_{i−1}/P) ↦ x + M_{i−1}, which is not important however. Thereby we have found that any M_i/M_{i−1} is simple again, such that we can assemble a composition series of M by letting (P_0, P_1, ..., P_k, M_1, ..., M_n). In particular M is of finite length and we have found the equality

ℓ(M) = k + n = ℓ(P) + ℓ(M/P)

It remains to prove that conversely, if M has finite length, then P and M/P have finite length, too. The case of P has already been shown in (i); it remains to prove this for M/P. Hence let (M_0, M_1, ..., M_r) be a composition series of M, then we define

Q_i := (M_i + P)/P ≤m M/P

It is clear that Q_0 = P/P = 0 and Q_r = (M + P)/P = M/P. And the third isomorphism theorem (3.51) yields the following isomorphy

(M_i + P)/(M_{i−1} + P) ≅ ((M_i + P)/P) / ((M_{i−1} + P)/P) = Q_i/Q_{i−1}

Explicitly this isomorphism is x + (M_{i−1} + P) ↦ (x + P) + Q_{i−1}, but this doesn't matter, really. Let us now regard the following map

Φ_i : M_i/M_{i−1} → (M_i + P)/(M_{i−1} + P) : x + M_{i−1} ↦ x + (M_{i−1} + P)

It is clear that Φ_i is surjective: given any x ∈ M_i and p ∈ P we see that x + M_{i−1} ↦ x + (M_{i−1} + P) = x + p + (M_{i−1} + P). And the kernel of Φ_i is obviously given to be

kn(Φ_i) = (M_i ∩ (M_{i−1} + P))/M_{i−1} ≤m M_i/M_{i−1}

But as M_i/M_{i−1} is simple this either means kn(Φ_i) = 0 (and hence Φ_i is injective, thereby bijective) or kn(Φ_i) = M_i/M_{i−1} (and thereby M_i ∩ (M_{i−1} + P) = M_i, which means M_i ⊆ M_{i−1} + P such that M_i + P = M_{i−1} + P, which implies Q_i = Q_{i−1}). Altogether we have

M_i + P = M_{i−1} + P or Φ_i is an isomorphism

Let us now take I := { i ∈ 1...r | Q_i ≠ Q_{i−1} }; then for any i ∈ I we have


that Φ_i is an isomorphism, such that

M_i/M_{i−1} ≅ (M_i + P)/(M_{i−1} + P) ≅ Q_i/Q_{i−1}

That is for any i ∈ I we find that Q_i/Q_{i−1} is a simple module. And for any j ∉ I we have seen Q_j = Q_{j−1}. Therefore we can pick up some indices 0 ≤ α(0) < α(1) < ··· < α(n) ≤ r such that I = { α(0), α(1), ..., α(n) } and have found a composition series of M/P by (0, Q_{α(0)}, Q_{α(1)}, ..., Q_{α(n)}). In particular M/P has finite length again and the claimed equality holds true.

(v) Without loss of generality we may assume P_0 = 0 and P_k = M. Because if P_0 ≠ 0 or P_k ≠ M then we may extend the series (P_0, ..., P_k) to (0, P_0, P_1, ..., P_k, M), which we will refine instead.

We will now prove this statement by induction on the length r = ℓ(M) of M. If r = 0 we have M = 0 and hence the only series of submodules of M is (0), which is a composition series already. Likewise for r = 1 we know that M is simple, such that (0, M) is the only series of submodules, which is a composition series already. Thus let us now assume the refinement-property has been established for any series of submodules in any R-module of length r − 1 or less.

If k ≤ 1 then any composition series of M refines (P_0, P_1) = (0, M). And as M was assumed to have finite length, there is some composition series. Thus we may regard the case k ≥ 2. Then 0 ⊂ P_1 ⊂ M such that by (i) we have ℓ(P_1) < ℓ(M) = r and by (ii) also ℓ(M/P_1) = ℓ(M) − ℓ(P_1) < ℓ(M) = r. By induction hypothesis we hence get a refinement (L_0, L_1, ..., L_p) of (0, P_1), that is L_0 = 0 and L_p = P_1. And denoting Q_i := P_i/P_1 for i ∈ 1...k, the induction hypothesis also yields a refinement (N_1, N_2, ..., N_q) of (Q_1, ..., Q_k), say with Q_i = N_{α(i)}. Thereby N_i is of the form N_i = M_i/P_1 for some M_i ≤m M due to the correspondence theorem of modules. Therefore

M_{α(i)}/P_1 = N_{α(i)} = Q_i = P_i/P_1

which yields M_{α(i)} = P_i for i ∈ 1...k. We have seen in (ii) that (L_0, L_1, ..., L_p, M_2, ..., M_q) is a composition series of M. And any P_i can be found inside of this composition series: P_0 = L_0, P_1 = L_p = M_1 and P_i = M_{α(i)} for any i ∈ 2...k. Hence (L_0, L_1, ..., L_p, M_2, ..., M_q) is the refinement of (P_0, P_1, ..., P_k) we sought.

(iv) It is clear that ≤ is a partial order on C(0, M), so we only have to prove the three equivalencies. (a) =⇒ (b): consider any refinement (M_0, M_1, ..., M_r) ≤ (N_0, N_1, ..., N_s); then by (v) we can choose a refinement of (N_0, N_1, ..., N_s) to a composition series (C_0, C_1, ..., C_t) of M. By (ii) the composition series has length t = ℓ(M) = r. And thereby r ≤ s ≤ t = r implies r = s, which means (M_0, M_1, ..., M_r) = (N_0, N_1, ..., N_s). (b) =⇒ (c): now suppose (M_0, M_1, ..., M_r) was no composition series of M, that is there is some i ∈ 1...r and some Q ≤m M such that M_{i−1} ⊂ Q ⊂ M_i. Then we already had a true refinement of (M_0, M_1, ..., M_r) by

(M_0, ..., M_r) < (M_0, ..., M_{i−1}, Q, M_i, ..., M_r)

such that (M_0, M_1, ..., M_r) could not have been maximal. Finally the implication (c) =⇒ (a) has already been proved in (iii).

(vi) Consider any ϕ_i : M_{i−1} → M_i; then, by the first isomorphism theorem, we have M_{i−1}/kn(ϕ_i) ≅ im(ϕ_i) and hence by (ii)

ℓ(kn(ϕ_i)) + ℓ(im(ϕ_i)) = ℓ(kn(ϕ_i)) + ℓ(M_{i−1}/kn(ϕ_i)) = ℓ(M_{i−1})

Now recall that the i-th homology module is defined by H_i := kn(ϕ_{i+1})/im(ϕ_i), where the boundary maps ϕ_0 : 0 → M_0 and ϕ_{n+1} : M_n → 0 satisfy im(ϕ_0) = 0 and kn(ϕ_{n+1}) = M_n. Then the claim follows by easy computation using (ii) again

Σ_{i=0}^n (−1)^i ℓ(H_i)
= Σ_{i=0}^n (−1)^i ℓ( kn(ϕ_{i+1}) / im(ϕ_i) )
= Σ_{i=0}^n (−1)^i [ ℓ(kn(ϕ_{i+1})) − ℓ(im(ϕ_i)) ]
= Σ_{i=1}^{n+1} (−1)^{i−1} ℓ(kn(ϕ_i)) − Σ_{i=0}^n (−1)^i ℓ(im(ϕ_i))
= Σ_{i=1}^n (−1)^{i−1} [ ℓ(kn(ϕ_i)) + ℓ(im(ϕ_i)) ] + (−1)^n ℓ(kn(ϕ_{n+1})) − ℓ(im(ϕ_0))
= Σ_{i=1}^n (−1)^{i−1} ℓ(M_{i−1}) + (−1)^n ℓ(M_n) − ℓ(0)
= Σ_{i=0}^n (−1)^i ℓ(M_i)

2
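The alternating-sum identity can be sanity-checked for vector spaces over ℚ, where length equals dimension and the relevant lengths are computable from ranks (an illustrative sketch with a small hypothetical complex; `mat_rank` is a helper defined here, not from the text):

```python
from fractions import Fraction

def mat_rank(mat):
    """Rank of a rational matrix via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in mat]
    if not m:
        return 0
    rank, row = 0, 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(row, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[row], m[pivot] = m[pivot], m[row]
        for r in range(len(m)):
            if r != row and m[r][col] != 0:
                f = m[r][col] / m[row][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[row])]
        row += 1
        rank += 1
        if row == len(m):
            break
    return rank

# complex 0 -> Q^1 --A--> Q^2 --B--> Q^1 -> 0 with B*A = 0
A = [[1], [0]]          # phi_1 as a 2x1 matrix
B = [[0, 1]]            # phi_2 as a 1x2 matrix
dims = [1, 2, 1]        # l(M_i) = dim M_i for vector spaces
ranks = [0, mat_rank(A), mat_rank(B), 0]  # rank(phi_0) = rank(phi_3) = 0

# l(H_i) = dim kn(phi_{i+1}) - dim im(phi_i) = (dims[i] - ranks[i+1]) - ranks[i]
h = [dims[i] - ranks[i + 1] - ranks[i] for i in range(3)]
chi_h = sum((-1) ** i * x for i, x in enumerate(h))
chi_m = sum((-1) ** i * d for i, d in enumerate(dims))
assert chi_h == chi_m
print(chi_h, chi_m)  # 0 0
```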

Proof of (17):
Let us denote the quotient ring Q := R/ann(M). If now M is an R-module, then M can also be turned into a Q-module under the scalar multiplication (b + ann(M))x := bx, as has been shown in (3.16.(iii)). And from this construction it is clear that (for any X ⊆ M)

lh_R(X) = lh_Q(X)

In particular the Q-submodules of M are precisely the R-submodules of M. And therefore it is equivalent whether M is noetherian (resp. artinian) as an R-module or as a Q-module. This will be used in the proof of (i) and (ii).

(i) If M is noetherian, then in particular it is finitely generated by property (c) of noetherian modules. And also, if M is noetherian, then M^n is noetherian by (3.71.(iv)). But by (3.16.(vi)) we find an embedding Q → M^n of R-modules (where n = rank(M)). That is, there is an isomorphic copy of Q as an R-submodule of M^n. As M^n is noetherian


this implies that Q is a noetherian R-module, hence a noetherian Q-module and hence a noetherian ring.

Conversely suppose Q is a noetherian ring and M a finitely generated R-module. Then M is finitely generated as a Q-module (using the same generators) and hence a noetherian Q-module by virtue of (3.71.(vi)). But this again means that M is a noetherian R-module, as well.

(ii) If M is artinian, then M^n is artinian by (3.71.(iv)). But by (3.16.(vi)) we find an embedding Q → M^n of R-modules (where n = rank(M), which is finite by assumption). That is, there is an isomorphic copy of Q as an R-submodule of M^n. As M^n is artinian this implies that Q is an artinian R-module, hence an artinian Q-module and hence an artinian ring.

Conversely suppose Q is an artinian ring and M a finitely generated R-module. Then M is finitely generated as a Q-module (using the same generators) and hence an artinian Q-module by virtue of (3.71.(vi)). But this again means that M is an artinian R-module, as well.

(iii) If M is finitely generated and artinian then Q = R/ann(M) is an artinian ring by virtue of (ii). But from (2.37.(ii)) we know that any artinian ring already is noetherian. Hence Q is noetherian and M is finitely generated, such that M is noetherian by (i).

(iv) Let us regard K_n := kn(ϕ^n) ≤m M; then it is clear that the K_n form an ascending chain of submodules of M

K_0 ⊆ K_1 ⊆ K_2 ⊆ ...

As M is noetherian this chain has to stabilize at some point s ∈ ℕ, that is K_{s+1} = K_s. Now consider any y ∈ kn(ϕ); as ϕ is surjective so is ϕ^s : M → M. Hence there is some x ∈ M such that y = ϕ^s(x). Now 0 = ϕ(y) = ϕ(ϕ^s(x)) = ϕ^{s+1}(x), that is x ∈ K_{s+1} = K_s. That is y = ϕ^s(x) = 0, which implies that ϕ already is injective, as well.
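For a finite module, both this item and the next collapse to a counting argument, which can be checked directly (an illustrative sketch over the ℤ-module ℤ/12ℤ, with endomorphisms x ↦ ax):

```python
n = 12
for a in range(n):
    images = [(a * x) % n for x in range(n)]
    surjective = set(images) == set(range(n))
    injective = len(set(images)) == n
    # on a finite module |image| = n characterizes both properties at once
    assert surjective == injective
print("surjective <-> injective checked on Z/12Z")
```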

(v) Let us regard I_n := im(ϕ^n) ≤m M; then it is clear that the I_n form a descending chain of submodules of M

I_0 ⊇ I_1 ⊇ I_2 ⊇ ...

As M is artinian this chain has to stabilize at some point s ∈ ℕ, that is I_{s+1} = I_s. Now consider any y ∈ M; as ϕ^s(y) ∈ I_s = I_{s+1} there is some x ∈ M such that ϕ^s(y) = ϕ^{s+1}(x) = ϕ^s(ϕ(x)). As ϕ is injective, so is ϕ^s : M → M, and hence we find y = ϕ(x). As y has been arbitrary this means that ϕ already is surjective.

(vi) Let us first suppose that r = ℓ(M) < ∞, that is M admits a composition series (M_0, M_1, ..., M_r). We will prove that M is noetherian and artinian by induction on r. If r = 0 then M = 0, so there is nothing to prove. And if r = 1 then M is simple and hence 0 ⊂ M is the only chain of submodules of M. In particular M is noetherian and artinian. Let now r be arbitrary and suppose the claim has been proved for any module of length less than r. As M_1 is simple it is noetherian and artinian, as we have argued for r = 1. Also by (??.(ii)) we have


ℓ(M/M_1) = ℓ(M) − ℓ(M_1) < ℓ(M) = r, so by induction hypothesis M/M_1 is noetherian and artinian, too. But (3.71.(i)) now implies that M is noetherian and artinian, as well.

Conversely suppose M is both artinian and noetherian. Then we have to prove that M is of finite length. To do this define the set K of all those submodules of M that have a composition series

K := { K ≤m M | ℓ(K) < ∞ }

As 0 ∈ K we see that K is non-empty and hence (as M is noetherian) has a maximal element K* ∈ K. By construction K* has a composition series, say (K_0, K_1, ..., K_r). Now suppose we had K* ≠ M; then we take to the set L of all submodules of M that strictly contain K*

L := { L ≤m M | K* ⊂ L }

As K* ≠ M we have M ∈ L such that L is non-empty. Hence (as M also is artinian) there is a minimal element L* ∈ L. Now suppose there is some submodule P ≤m M with K* ⊆ P ⊆ L*. If P ≠ K* then P = L*, as L* is minimal with this property. That is L*/K* is simple and hence we find a composition series of L* by (K_0, K_1, ..., K_r, L*). But remember, K* has been maximal among those submodules of M that admit a composition series, so it cannot be that L* ∈ K with K* ⊂ L*. Hence the assumption K* ≠ M has to be false and in particular M = K* itself has a composition series.

(vii) First of all (e) =⇒ (d), as for any i ∈ 1...n we can take N_i := M_1 ⊕ ··· ⊕ M_i ⊕ 0 ⊕ ··· ⊕ 0 ≤m M. To be precise also let N_0 := 0 ≤m M; then for any i ∈ 1...n we clearly have the isomorphy M_i ≅ N_i/N_{i−1} : x_i ↦ x_i + N_{i−1} and in particular N_i/N_{i−1} is simple, by assumption (e). Therefore (N_0, N_1, ..., N_n) is a composition series of M such that ℓ(M) = n < ∞ is finite.

The implications (d) =⇒ (a) and (d) =⇒ (b) have already been shown in (vi). Also (b) =⇒ (c) is clear, as any submodule of a noetherian module is finitely generated, in particular M itself.

We will now prove (c) =⇒ (e): as M is finitely generated we may pick up some x_k ∈ M such that M = lh(x_1, ..., x_n). By assumption M is semi-simple, that is there are some simple submodules M_i ≤m M such that

M = ⊕_{i∈I} M_i

That is any x_k can be written as a finite sum of some elements x_{k,i} ∈ M_i and hence there is some finite subset Ω(k) ⊆ I such that

x_k = Σ_{i∈Ω(k)} x_{k,i}

As n ∈ N is finite and any Ω(k) ⊆ I is finite, so is the union Ω :=Ω(1)∪ · · · ∪Ω(n) of these sets. And as the xk generate M we see that


M in fact is a finite direct sum, as claimed

M = ⊕_{i∈Ω} M_i

Hence we have established the chain (e) =⇒ (d) =⇒ (b) =⇒ (c) =⇒ (e). And as we already know (d) =⇒ (a) as well, it suffices to prove (a) =⇒ (e): as M is semi-simple we have M = ⊕_i M_i as above. Thereby we may omit all those i ∈ I with M_i = 0. Now suppose I was infinite; then there was some injection α : ℕ → I. Then we take I(k) := I \ { α(j) | j ≤ k } for any k ∈ ℕ and let

N_k := ⊕_{i∈I(k)} M_i

Then it is clear that the N_k form a strictly descending chain of submodules of M, that is N_0 ⊃ N_1 ⊃ N_2 ⊃ ..., in contradiction to M being artinian. Hence I could not have been infinite, such that (e) holds true.

If now M even is a vector-space, that is R is a field, then clearly M is finitely generated if and only if M is finite dimensional. Hence (f) is equivalent to (c) and therefore to any other of the statements as well.

(viii) Let us denote by P the set of all ideal-induced submodules aM of M such that M/aM is not noetherian:

P := {aM | a ⊴i R, M/aM is not noetherian}

Suppose M was not noetherian; then 0 ∈ P and hence P is not empty. By assumption and (2.28) P then contains a maximal element a∗M. Let us now denote

M′ := M/a∗M and A := R/ann(M′)

Then A is a commutative ring again and we can regard M′ as an A-module. Note that by construction M′ is not noetherian. Now, given any ideal a ⊴i A in A, by the correspondence theorem a is of the form a = b/ann(M′) for some b ⊴i R with ann(M′) ⊆ b. We will now prove

a∗ ⊆ b

To do this regard any a ∈ a∗; then for any x ∈ M we get ax ∈ a∗M. That is a(x + a∗M) = ax + a∗M = 0 + a∗M and hence a ∈ ann(M′) ⊆ b. As a has been arbitrary this proves a∗ ⊆ ann(M′) ⊆ b. Now, by construction it is clear that aM′ = bM/a∗M. And thereby the third isomorphism theorem yields

M′/aM′ = (M/a∗M)/(bM/a∗M) ≅m M/bM

But as a∗ ⊆ b and a∗M has been maximal, this implies either a∗ = b

394 17 Proofs - Rings and Modules

(and hence a = 0) or M′/aM′ is noetherian. Now let

Q := {Q ≤m M′ | annA(M′/Q) = 0}

Recall A = R/annR(M ′) and that the scalar multiplication of M ′ (withA) is defined by (a+ ann(M ′))(x+ a∗M) = ax+ a∗M . Therefore wehave annA(M ′) = annR(M ′)/annR(M ′) = 0 ⊆ A such that 0 ∈ Q. Inparticular Q is non-empty.

If now (Qi) ⊆ Q is a chain in Q then we take Q := ⋃i Qi to be the union of the Qi. As (Qi) is a chain, Q is a submodule of M′ again. Consider any a ∈ annA(M′/Q), that is a(M′/Q) = 0 or in other words aM′ ⊆ Q. As M is finitely generated, so is M′, say M′ = lh(x1, . . . , xn); then we have axk ∈ Q for any k ∈ 1 . . . n and hence axk ∈ Qi(k) for some i(k) ∈ I. As the Qi form a chain we may take Qi to be the maximum of Qi(1), . . . , Qi(n). Then for any k ∈ 1 . . . n we have axk ∈ Qi. But as the xk generate M′ this implies aM′ ⊆ Qi and therefore a ∈ ann(M′/Qi) = 0, such that a = 0. That is annA(M′/Q) = 0 and hence Q ∈ Q again. Hence any chain (Qi) of Q has an upper bound Q, and the lemma of Zorn yields a maximal element Q∗ of Q. Now take

M′′ := M′/Q∗

If a ⊴i A is any ideal with a ≠ 0 then by (3.30.(v)) we have aM′′ = (aM′ + Q∗)/Q∗. Therefore the third isomorphism theorem again yields

M′′/aM′′ = (M′/Q∗)/((aM′ + Q∗)/Q∗) ≅m M′/(aM′ + Q∗)

As we have shown before, we have a∗ ⊆ b for a = b/ann(M′). But as a ≠ 0 and a∗M has been maximal, we find that M′/aM′ is noetherian. But we also have an epimorphism

M′/aM′ ↠ M′/(aM′ + Q∗)

This implies that M′/(aM′ + Q∗) is noetherian. By the above isomorphism, this makes M′′/aM′′ noetherian. So at last we have arrived at an A-module N := M′′ that satisfies the following two properties (the second one by the maximality of Q∗ in Q, using the correspondence of submodules of N = M′/Q∗ with submodules of M′ containing Q∗):

∀ a ⊴i A : a ≠ 0 =⇒ N/aN is noetherian
∀ Q ≤m N : Q ≠ 0 =⇒ ann(N/Q) ≠ 0

We will now prove that N = M ′′ is noetherian by showing, that anysubmodule of N is finitely generated: So suppose Q ≤m N is anysubmodule of N . If Q = 0 then there is nothing to show, else there issome a ∈ A with a 6= 0 such that a(N/Q) = 0, that is aN ⊆ Q. But asaA 6= 0 we know thatN/aN is noetherian and henceQ/aN ≤m N/aNis finitely generated. And as M is finitely generated, so are N = M ′′

and aN . But as both Q/aN and aN are finitely generated (3.59) tellsus that Q is finitely generated.


By now we have established that N = M′′ = M′/Q∗ is noetherian. Therefore (i) implies that A/ann(N) is a noetherian ring. Yet by construction we have ann(N) = ann(M′/Q∗) = 0, such that A/ann(N) = A. That is, A is a noetherian ring. Yet M′ is a finitely generated A-module, such that (3.71.(vi)) implies that M′ = M/a∗M is noetherian. But we have constructed M′ in such a way as to be not noetherian. This contradiction can only be resolved if M has been noetherian from the start.

□

Proof of (3.48):

(i) Let B := adj(t𝟙n − A) ∈ matn(R[t]) be the adjoint matrix of t𝟙n − A; then it has been shown (as R[t] is a commutative ring) that

(t𝟙n − A)B = det(t𝟙n − A)𝟙n = c(t)𝟙n

That is, we have n² equations in the polynomial ring R[t], encoded in a single equation for n × n matrices over R[t]. And by construction the degree of all these polynomial equations is n as well. Let us now write

B = adj(t𝟙n − A) = ∑_{k=0}^{n−1} t^k Bk

for some Bk ∈ matn(R); then the above equation takes the following form

c(t)𝟙n = (t𝟙n − A) ∑_{k=0}^{n−1} t^k Bk = ∑_{k=0}^{n−1} t^{k+1} Bk − ∑_{k=0}^{n−1} t^k A Bk
       = t^n B_{n−1} + ∑_{k=1}^{n−1} t^k (B_{k−1} − A Bk) − A B0

Comparing coefficients yields c[n]𝟙n = B_{n−1} and hence B_{n−1} = 𝟙n, as c(t) is monic with c[n] = 1. Also A B0 = −c[0]𝟙n = −(−1)^n det(A)𝟙n; this is no wonder, as B0 = adj(−A). And for any k ∈ 1 . . . (n − 1) we get

c[k]𝟙n = B_{k−1} − A Bk

Let us now multiply all these equations with A^k from the left; that is B_{n−1}A^n = A^n, and for any k ∈ 1 . . . n − 1 we get

c[k]A^k = A^k B_{k−1} − A^{k+1} Bk

Then in c(A) = ∑_k c[k]A^k every coefficient c[k]A^k can be replaced separately, to find a telescopic sum that vanishes identically:

c(A) = ∑_{k=0}^{n} c[k]A^k = A^n + ∑_{k=1}^{n−1} c[k]A^k + c[0]𝟙n
     = A^n + ∑_{k=1}^{n−1} (A^k B_{k−1} − A^{k+1} Bk) + c[0]𝟙n
     = A^n + ∑_{k=1}^{n−1} A^k B_{k−1} − ∑_{k=2}^{n} A^k B_{k−1} + c[0]𝟙n
     = A^n + A B0 − A^n B_{n−1} + c[0]𝟙n
     = A^n − c[0]𝟙n − A^n + c[0]𝟙n = 0
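The identity c(A) = 0 can also be verified by direct computation in a concrete case. The following sketch (plain Python; the 2 × 2 matrix is an illustrative choice, not part of the text) checks that A² − tr(A)·A + det(A)·𝟙₂ vanishes, which is the Cayley-Hamilton identity for n = 2:

```python
# Verify the Cayley-Hamilton identity c(A) = 0 for a 2x2 integer matrix.
# For A = [[a, b], [c, d]] the characteristic polynomial is
#   c(t) = t^2 - (a + d) t + (ad - bc).

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly_at(A):
    """Return c(A) = A^2 - tr(A)*A + det(A)*I for a 2x2 matrix A."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = mat_mul(A, A)
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]

A = [[2, 3], [5, 7]]
print(char_poly_at(A))  # the zero matrix [[0, 0], [0, 0]]
```

The same check works over any commutative coefficient ring; here integer arithmetic keeps the computation exact.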

(ii) As M is finitely generated we find some xk ∈ M such that M = lh(x1, . . . , xn), where n = rank(M). Now, as ϕ(xj) ∈ M again, for any j ∈ 1 . . . n we also find some ai,j ∈ R such that

ϕ(xj) = ∑_{i=1}^{n} ai,j xi

Let A := (ai,j) ∈ matn(R) be the n × n matrix of those coefficients. Then it is clear that for any x = ∑_j bj xj ∈ M we get

ϕ(x) = ∑_{j=1}^{n} bj ϕ(xj) = ∑_{j=1}^{n} bj ∑_{i=1}^{n} ai,j xi = ∑_{i=1}^{n} (∑_{j=1}^{n} ai,j bj) xi = ∑_{i=1}^{n} [Ab]_i xi

where we denoted by b = (b1, . . . , bn) ∈ R^n the coefficient vector of x.

where we denoted b = (b1, . . . , bn) ∈ Rn the coefficient-vector of x.And likewise ϕk(x) =

∑i[A

kb]i xi. Now let f(t) := det(t 11 n − A) bethe characteristic polynomial of A again. Then we have shown in (i)that f(A) = 0. And hence for any x =

∑j bj ∈M we find

f(ϕ)(x) =n∑k=0

f [k]ϕk(x) =n∑k=0

f [k]n∑i=1

[Akb]i xi

=n∑i=1

(n∑k=0

f [k][Akb]i

)xi =

n∑i=1

[n∑k=0

f [k]Akb

]i

xi

=

n∑i=1

[f(A)b]i xi =

n∑i=1

0xi = 0

□

Proof of (3.32):

(i) Let us denote J := {i ∈ I | mi ∉ aM}; then we have to prove that {mj + aM | j ∈ J} is an (R/a)-basis of M/aM. So we first have to check that this set (R/a)-linearly generates M/aM. Thus consider any x + aM ∈ M/aM, that is x ∈ M. As the mi generate M there are some ai ∈ R (where i ∈ I and only finitely many ai are non-zero) such that x = ∑_i ai mi. And thereby we already get

x + aM = ∑_{i∈I} ai mi + aM = ∑_{i∈I} (ai + a)(mi + aM) = ∑_{j∈J} (aj + a)(mj + aM)

where the latter equality holds, as any i ∈ I \ J yields mi + aM = 0 + aM.

where the latter equality holds, as any i ∈ I\J yields xi+aM = 0+aM .It remains to prove the (R/a)-linear independence of this set. Thussuppose we are given any bj + a ∈ R/a (of which only finitely manyare non-zero) such that

0 + aM =∑j∈J

(bj + a)(mj + aM) =∑j∈J

bjmj + aM

In other terms that is∑

j bjmj ∈ aM . But as mi | i ∈ I generatesM we may use (3.30.(ii)) to find some ai ∈ a, of which only finitelymany are non-zero, such that∑

j∈Jbjmj =

∑i∈I

aimi

As the mi form an R-basis of M we may compare coefficients in thesetwo representations, which yields ai = 0 for any i ∈ I \ J and bj =aj ∈ a for any j ∈ J . Hence we have bj + a = 0 for any j ∈ J and thisis the (R/a)-linear independence of mj + aM | j ∈ J .

(i) It remains to prove that the mi + aM are pairwise distinct, too. First, if a = R, then aM = M and hence mi ∈ aM for any i ∈ I. In this case the basis given is empty and there is nothing to prove. So let us now assume a ≠ R. If mi + aM = mj + aM then we have mi − mj ∈ aM. Yet as {mi | i ∈ I} generates M we may use (3.30.(ii)) again to find some ak ∈ a, of which only finitely many are non-zero, such that

mi − mj = ∑_{k∈I} ak mk

If we had i ≠ j then this would yield an equation of R-linear dependence, simply by moving mi − mj to the left hand side and decomposing

(ai − 1)mi + (aj + 1)mj + ∑_{k∈I, k≠i,j} ak mk = 0

As the mi are R-linearly independent this implies all coefficients in this equation to be 0, in particular 1 = ai ∈ a. That is a = R, which cannot occur. Hence we necessarily have i = j.

(ii) (a) =⇒ (b): let us choose a generating set {m1, . . . , mn} of M with a minimal number n of elements. Suppose n ≥ 1; then, as m1 ∈ M = aM, we may invoke (3.30.(ii)) to find some a1, . . . , an ∈ a such that m1 = a1m1 + · · · + anmn. Let us now write this equation in the form

(1 − a1)m1 = a2m2 + · · · + anmn

As a1 ∈ a ⊆ jacR we know by (2.6) that 1 − a1 ∈ R∗ is invertible. And hence m1 is contained in the linear hull of m2, . . . , mn by

m1 = ∑_{i=2}^{n} (1 − a1)^{−1} ai mi

That is, even {m2, . . . , mn} generates M, in contradiction to the minimality of the number n of generators. So our assumption n ≥ 1 has been false, that is n = 0 and hence M = 0.

¬(a) =⇒ ¬(b): if a ⊈ jacR then - by definition of the Jacobson radical - there is a maximal ideal m ⊴i R of R such that a ⊈ m. And this again means that there is some u ∈ a with u ∉ m. Now, as M := R/m is a field and u + m ≠ 0 ∈ M, there is some v + m ∈ M such that uv + m = (u + m)(v + m) = 1 + m. Clearly M also is finitely generated as an R-module, by 1 + m. Also

aM = {ab + m | a ∈ a, b ∈ R} = {c + m | c ∈ R} = M

Hereby the central equality is due to the following reasoning: given c ∈ R we may take a := u and b := vc, so that ab + m = uvc + m = c + m; and the converse inclusion is clear. Thus we have found a finitely generated R-module M with aM = M but M ≠ 0.

(ii) Alternative version: the more advanced lemma of Dedekind allows us to prove the implication (a) =⇒ (b) in a straightforward way: as M is finitely generated we find some a ∈ a such that (1 − a)M = 0, due to (vi). Yet as a ∈ a ⊆ jacR, proposition (2.6) yields that 1 − a ∈ R∗ is invertible. Hence M = (1 − a)M = 0 is proved already.

(iii) By (3.30.(v)) we know that a(M/P) equals (aM + P)/P and hence we have the following identities (by the assumptions on P):

a(M/P) = (aM + P)/P = M/P

Yet M/P is finitely generated and a ⊆ jacR, so by the lemma of Nakayama (ii) we get M/P = 0, and this is nothing but M = P.

(iv) If ϕ is surjective, then ϕ/a clearly is surjective as well - see (3.30.(vi)) for details. So conversely assume ϕ/a is surjective, that is for any y ∈ N there is some x ∈ M with

y + aN = (ϕ/a)(x + aM) = ϕ(x) + aN

That is, v := y − ϕ(x) ∈ aN, such that y = ϕ(x) + v. As y has been arbitrary this means N = imϕ + aN. Yet as N/imϕ is finitely generated and a ⊆ jacR this implies imϕ = N by (iii), which is nothing but the surjectivity of ϕ.


(v) (♦) If ϕ is bijective, then ϕ/a is bijective for any ideal a ⊴i R, as we have seen in (3.30.(vii)). Conversely: as M is finitely generated, the zero-module 0 ≤m M is cofinite and hence also imϕ ≤m M is cofinite. So by (iv) we already know that ϕ is surjective, as ϕ/a is surjective (even bijective). But a surjective endomorphism of a finitely generated R-module already is bijective by (3.48).

(vi) (♦) Consider the identity map 𝟙 : M → M : x ↦ x on M. Clearly this is an R-linear map satisfying im(𝟙) ⊆ M = aM. And as M is finitely generated, the Cayley-Hamilton theorem (3.48) yields some polynomial f = t^n + a1t^{n−1} + · · · + an ∈ R[t] such that f(𝟙) = 0 and ak ∈ a^k for any k ∈ 1 . . . n. Now compute

f(𝟙) = 𝟙^n + a1𝟙^{n−1} + · · · + a_{n−1}𝟙 + an𝟙 = (1 + a1 + · · · + an)𝟙 = (1 − a)𝟙

where we let a := −(a1 + · · · + an) ∈ a. Now 0 = im(0) = im(f(𝟙)) = im((1 − a)𝟙) = (1 − a)M. That is, the a ∈ a we constructed using Cayley-Hamilton truly satisfies (1 − a)M = 0.

□

Proof of (3.52):

(ii) In case of a skew-field S things are simple now: according to the dimension formula (3.41.(iii)) we know that

dim(V) = dim(kn(ϕ)) + dim(im(ϕ))

Thus if ϕ is injective then we have kn(ϕ) = 0 and hence dim(kn(ϕ)) = 0, which leaves dim(V) = dim(im(ϕ)). Thus (as V is finite dimensional) by (3.40.(ix)) we find that im(ϕ) = V, such that ϕ is surjective.

Conversely if ϕ is surjective, then im(ϕ) = V and hence dim(im(ϕ)) = dim(V), which leaves dim(kn(ϕ)) = 0. Therefore kn(ϕ) = 0 [if there was some 0 ≠ x ∈ kn(ϕ), then {x} would be a linearly independent set in kn(ϕ) and hence dim(kn(ϕ)) ≥ 1]. But of course this means that ϕ is injective.

So in any case we have seen: surjective implies bijective, and injective implies bijective. And the converse implications are trivial.

(i) Dedekind proof: we will first present a rather short proof using the lemma of Dedekind: let us regard M as an R[t]-module by virtue of (3.47). Then by construction we get tx = ϕ(x) for any x ∈ M; hence, as ϕ is surjective, for any y ∈ M there is some x ∈ M with tx = y. Hence we get tM = M, in particular aM = M for a := tR[t] ⊴i R[t]. Now by the lemma of Dedekind (3.32.(vi)) there is some g ∈ a such that (1 − g)M = 0. As g ∈ a there is some f ∈ R[t] with g(t) = tf(t). Let us define ψ := f(ϕ) ∈ end(M); then f(t)x = ψ(x) for any x ∈ M and hence (1 − g)M = 0 implies

0 = (1 − tf(t))x = x − tψ(x) = x − ϕ(ψ(x))
0 = (1 − f(t)t)x = x − ψ(ϕ(x))

That is, ϕ(ψ(x)) = x and ψ(ϕ(x)) = x for any x ∈ M. That is, ψ is just the inverse map of ϕ and in particular ϕ is bijective.

(i) Noether proof: as the proof above requires the somewhat abstract Dedekind lemma, we would like to supply another proof that requires only the theory of noetherian modules. As M is finitely generated there are some xi ∈ M such that M = lh(x1, . . . , xn). Then, as ϕ(xj) ∈ M, we can pick up some pi,j ∈ R such that

ϕ(xj) = ∑_{i=1}^{n} pi,j xi

Step (A): we will now prove that M = lh(ϕ(x1), . . . , ϕ(xn)). That is, we choose some y ∈ M arbitrarily. As ϕ is surjective there is some x ∈ M such that y = ϕ(x). And hence there are some aj ∈ R such that x = ∑_j aj xj. Now compute

y = ϕ(x) = ∑_{j=1}^{n} aj ϕ(xj) ∈ lh(ϕ(x1), . . . , ϕ(xn))

Step (B): as the ϕ(xj) generate M we can conversely choose some qj,i ∈ R such that the xi ∈ M can be written as

xi = ∑_{j=1}^{n} qj,i ϕ(xj)

Remember, all we have to prove is the injectivity of ϕ. And to do this it suffices to prove kn(ϕ) = 0. Hence regard some k ∈ M with ϕ(k) = 0. All we have to prove is k = 0; then the injectivity is established.

Step (C): write k as k = ∑_i ki xi for some ki ∈ R and let A denote the Z-algebra generated by all the pi,j, qi,j and ki:

A := Z[pi,j, qi,j, ki | i, j ∈ 1 . . . n]

Then A is finitely generated and Z is noetherian (even a PID), such that A is a noetherian subring of R, by the Hilbert basis theorem (2.33). Also let N be the A-submodule of M that is generated by the xi:

N := lhA(x1, . . . , xn)

As N is a finitely generated module over the noetherian ring A, N is a noetherian A-module itself, according to (??).

Step (D): we will now prove that ϕ restricts to a map ϕ : N → N. That is, we will prove that ϕ(x) ∈ N for any x ∈ N. But this is easy to see: if x ∈ N then x = ∑_j aj xj for some aj ∈ A. Hence

ϕ(x) = ∑_{j=1}^{n} aj ϕ(xj) = ∑_{j=1}^{n} aj ∑_{i=1}^{n} pi,j xi = ∑_{i=1}^{n} (∑_{j=1}^{n} pi,j aj) xi

and this is contained in N again, as pi,j ∈ A as well. Also we will prove that ϕ : N → N is surjective, that is for any y ∈ N there is some x ∈ N such that y = ϕ(x). If y = ∑_i bi xi then let x := ∑_j (∑_i qj,i bi) xj ∈ N; then

ϕ(x) = ∑_{i=1}^{n} bi ∑_{j=1}^{n} qj,i ϕ(xj) = ∑_{i=1}^{n} bi xi = y

Step (E): as ϕ : N → N is surjective, so is ϕ^i : N → N for any i ∈ N. Now let Ki := kn(ϕ^i) ≤m N. Then it is clear that the Ki form an ascending chain of submodules of N:

K0 ⊆ K1 ⊆ K2 ⊆ . . .

But as N is noetherian this chain has to stabilize, that is there is some s ∈ N such that K_{s+1} = K_s. As ϕ^s is surjective and k ∈ N, there is some z ∈ N such that k = ϕ^s(z). But now 0 = ϕ(k) = ϕ(ϕ^s(z)) = ϕ^{s+1}(z) implies z ∈ K_{s+1} = K_s. That is k = ϕ^s(z) = 0 already. Hence we have finally arrived at k = 0, which is the injectivity of ϕ : M → M.
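The stabilizing kernel chain of step (E) can be observed on a small concrete example. The sketch below (plain Python; the finite module Z/24 and the endomorphism x ↦ 6x are illustrative choices, not taken from the text - Z/24 is trivially noetherian since it is finite) computes Ki = kn(ϕ^i) and watches the chain become stationary:

```python
# Compute the ascending kernel chain K_i = kn(phi^i) for the
# endomorphism phi(x) = 6x of the finite Z-module Z/24, and
# observe that the chain stabilizes.

N = 24
phi = lambda x: (6 * x) % N

def kernel_of_power(i):
    """Return kn(phi^i) as a sorted list of elements of Z/24."""
    def power(x):
        for _ in range(i):
            x = phi(x)
        return x
    return sorted(x for x in range(N) if power(x) == 0)

chain = [kernel_of_power(i) for i in range(5)]
for i, K in enumerate(chain):
    print(i, len(K))  # sizes 1, 6, 12, 24, 24 - the chain is stationary from i = 3
```

Note that this ϕ is not surjective, so the chain may stabilize at a non-zero kernel; for a surjective ϕ the argument of step (E) forces k = 0.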

□

Proof of (3.33):

(i) Suppose M ≠ 0; as M is finitely generated, the submodule 0 ≤m M is cofinite. Hence by (3.12) there is a maximal submodule Q ≤m M such that 0 ⊆ Q ⊂ M. As Q is maximal, M/Q is a non-zero, simple R-module [M/Q ≠ 0, as Q ≠ M, and by the correspondence theorem (3.14.(ii)): consider a submodule U/Q ≤m M/Q; then U ≤m M satisfies Q ⊆ U ⊆ M and hence Q = U or U = M, due to the maximality of Q]. Therefore

m := annR(M/Q)

is a maximal ideal m ∈ smaxR, according to (3.16.(ix)). By definition of m we have ax + Q = a(x + Q) = 0 + Q for any a ∈ m and x ∈ M. That is ax ∈ Q and hence mM ⊆ Q. Now by assumption we get Q ⊆ M = mM ⊆ Q. But that is Q = M, in contradiction to Q ≠ M, such that the assumption M ≠ 0 must have been false.

(ii) By (3.30.(v)) we know that m(M/P) equals (mM + P)/P and hence we have the following identities (by the assumptions on P):

m(M/P) = (mM + P)/P = M/P

for any maximal ideal m ⊴i R. Yet as P is cofinite, the quotient module


M/P is finitely generated. Hence we may use (i) to find M/P = 0, and this is nothing but M = P.

(iii) If ϕ is surjective, then ϕ/m is surjective for any ideal m ⊴i R, as we have seen in (3.30.(vii)). Conversely if ϕ/m is surjective, then for any y ∈ N there is some x ∈ M such that ϕ(x) + mN = y + mN. That is, there is some z ∈ mN such that y = ϕ(x) + z ∈ im(ϕ) + mN. And as y ∈ N has been arbitrary, this means N = im(ϕ) + mN. As imϕ is cofinite and this is true for any maximal ideal m, we may invoke (ii) to find imϕ = N, but this is nothing but the surjectivity of ϕ.

(iv) If ϕ is bijective, then ϕ/m is bijective for any ideal m ⊴i R, as we have seen in (3.30.(vii)). Conversely: as M is finitely generated, the zero-module 0 ≤m M is cofinite and hence also imϕ ≤m M is cofinite. So by (iii) we already know that ϕ is surjective, as the ϕ/m are surjective (even bijective). But a surjective endomorphism of a finitely generated R-module is bijective by (3.48).

□

Proof of (3.56):

(i) Suppose I and J have the same cardinality; by definition this means that there is a bijection σ : I → J. And by construction M := R⊕I has the canonical basis ek = (δi,k) ∈ M, where k ∈ I. That is, the k-th component of ek is 1, all others (i ≠ k) are 0. Likewise we denote the canonical basis of N := R⊕J by fl = (δj,l) ∈ N, where l ∈ J. This clearly gives rise to a well-defined homomorphism of R-modules:

Φ : M → N : ∑_{i∈I} ai ei ↦ ∑_{i∈I} ai f_{σ(i)}

As σ is surjective and {fj | j ∈ J} generates N, we see that Φ is surjective. Suppose Φ((ai)) = 0; as σ is injective and {fj | j ∈ J} is R-linearly independent, this means that ai = 0 for any i ∈ I. That is kn(Φ) = 0 and hence Φ also is injective. Altogether Φ is an isomorphism.

It remains to prove the converse implication, that is we suppose that M := R⊕I and N := R⊕J are isomorphic. As R is a non-zero, commutative ring we may use (2.4.(ii)) to pick up a maximal ideal m ⊴i R of R (containing 0 ⊂ R). Then, as m is maximal, F := R/m is a field.

As we have seen in (3.31) the quotient M/mM is isomorphic to F⊕I, by virtue of the following isomorphism:

F⊕I ≅ R⊕I/mR⊕I : (bi + m) ↦ (bi) + mR⊕I

In complete analogy we find an isomorphism of F-vector-spaces between F⊕J and the quotient N/mN. Now, as M ≅m N are isomorphic (by assumption), so are the quotient modules M/mM ≅m N/mN, by virtue of (3.30.(vii)). And therefore we find a chain of isomorphies of F-vector-spaces connecting

F⊕I ≅m R⊕I/mR⊕I ≅m R⊕J/mR⊕J ≅m F⊕J

Clearly isomorphic modules have the same dimension (the isomorphism transfers bases). And as F⊕I has dimension |I| (using the canonical basis) altogether we find

|I| = dim(F⊕I) = dim(F⊕J) = |J|

(ii) Let Φ : R/a → R/b be the isomorphism of R-modules that we assumed to exist. Further let v + b := Φ(1 + a) and u + a := Φ^{−1}(1 + b). Then - as Φ is R-linear - we get

Φ(uv + a) = vΦ(u + a) = v(1 + b) = v + b = Φ(1 + a)

As Φ is invertible this implies uv + a = 1 + a and hence u + a = (v + a)^{−1} ∈ (R/a)∗ is invertible. Now consider any a ∈ R; then a ∈ a is equivalent to a + a = 0 + a. And as u + a is a unit this is equivalent to ua + a = 0 + a. Now compute

0 + b = Φ(0 + a) = Φ(ua + a) = aΦ(u + a) = a(1 + b) = a + b

That is, a + a = 0 + a is equivalent to a + b = 0 + b. Therefore a ∈ a is equivalent to a ∈ b, and this is just the equality a = b of subsets of R.

□

Proof of (3.58):

(i) It has already been shown in section 2.6 that Z is a PID. It only remains to verify that P is infinite. Hence we suppose that P is finite, say P = {p1, . . . , pn}. Then we let b := p1 · · · pn + 1 ∈ Z and choose an arbitrary prime divisor p ≥ 0 of b, that is b = ap for some a ∈ Z. As P encompasses all (positive) prime elements of Z there is some i ∈ 1 . . . n with p = pi. Without loss of generality we may take p = p1. Now

p1 · · · pn + 1 = b = ap = ap1

Hence 1 = p1(a − p2 · · · pn), which means that p1 is a unit of Z - an obvious contradiction to being a prime element. Hence P is not finite.
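The argument is easily traced numerically: the product of any finite list of primes, plus one, leaves remainder 1 under each listed prime, so every prime divisor of it lies outside the list. A small sketch (plain Python; the particular list of primes is an arbitrary illustrative choice):

```python
# Euclid's argument: for b = p_1 * ... * p_n + 1, each p_i leaves
# remainder 1 when dividing b, so any prime factor of b is new.

from math import prod

def smallest_prime_factor(m):
    """Trial division: return the smallest prime divisor of m >= 2."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

primes = [2, 3, 5, 7, 11, 13]
b = prod(primes) + 1          # 30031 = 59 * 509
p = smallest_prime_factor(b)  # 59, a prime not in the list
print(b, p, all(b % q == 1 for q in primes))
```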

(ii) Suppose Q = lh(x1, . . . , xn) was finitely generated, where xi = ai/bi. Then we choose a prime number p ∈ P such that for any i ∈ 1 . . . n we have p ∤ bi. Such a prime exists, as by (i) there are infinitely many primes and all the bi are only divisible by finitely many primes. Now suppose 1/p = k1x1 + · · · + knxn for some ki ∈ Z. Let b := b1 · · · bn and b̂i := b/bi ∈ Z; then

1/p = ∑_{i=1}^{n} ki ai/bi = (1/b) ∑_{i=1}^{n} ki ai b̂i  =⇒  b = p ∑_{i=1}^{n} ki ai b̂i

In particular p would divide b and hence some of the bi (as p is prime), in contradiction to the choice of p. Hence there can be no finite generating set of Q as a Z-module.


It remains to show that the set P truly generates Q. To do this consider any x = a/b ∈ Q and write down a primary decomposition

b = s p1^{k(1)} · · · pn^{k(n)}

where s = 1 or s = −1 and pi ∈ P. We will now use induction on the number n of distinct prime factors of b. If n = 0 then we readily have x = a/s = as ∈ Z. And if n = 1 then

x = as · (1/p1^{k(1)}) ∈ Z · (1/p1^{k(1)}) ⊆ lh(P)

For the induction step n − 1 → n let us abbreviate q := p1^{k(1)} · · · p_{n−1}^{k(n−1)} and p := pn^{k(n)}. Then x = a/(sqp) = (as)/(qp). As p and q are relatively prime, gcd(p, q) = ±1, and Z is a PID, there are some u and v ∈ Z such that uq + vp = as. Hence we find (note that u/p ∈ lh(P) as in the case n = 1, and v/q ∈ lh(P) by the induction hypothesis)

x = as/(qp) = (uq + vp)/(qp) = u/p + v/q ∈ lh(P) + lh(P) = lh(P)

□
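The induction step above rests on the Bézout identity uq + vp = as. A minimal computational sketch (plain Python; the sample fraction 5/72 with p = 8, q = 9 is an illustrative choice) performs exactly this splitting via the extended Euclidean algorithm:

```python
# Split a fraction a/(p*q) with coprime p, q as u/p + v/q,
# using the extended Euclidean algorithm to solve u*q + v*p = a.

from fractions import Fraction

def extended_gcd(x, y):
    """Return (g, s, t) with s*x + t*y = g = gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, s, t = extended_gcd(y, x % y)
    return g, t, s - (x // y) * t

def split(a, p, q):
    """Return (u, v) with a/(p*q) == u/p + v/q, assuming gcd(p, q) = 1."""
    g, s, t = extended_gcd(q, p)   # s*q + t*p = 1
    return a * s, a * t            # then (a*s)*q + (a*t)*p = a

u, v = split(5, 8, 9)              # 5/72 = u/8 + v/9
assert Fraction(u, 8) + Fraction(v, 9) == Fraction(5, 72)
print(u, v)
```

Iterating this splitting over the prime powers of the denominator is precisely the induction of the proof.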

Proof of (3.59):

Suppose X is a generating set of M with |X| = rank(M). As ϕ : M ↠ N is surjective, for any y ∈ N there is some x ∈ M such that y = ϕ(x). Also, as X generates M, we can choose some ai ∈ R and xi ∈ X such that x = a1x1 + · · · + anxn. And clearly this yields

y = ϕ(x) = ∑_{i=1}^{n} ai ϕ(xi) ∈ lh(ϕ(X))

As y ∈ N has been chosen arbitrarily we have just seen that N = lh(ϕ(X)). And from this we find the first inequality

rank(N) ≤ |ϕ(X)| ≤ |X| = rank(M)

For the second inequality choose some generating sets X ⊆ P of P and V ⊆ M/P of M/P such that rank(P) = |X| and rank(M/P) = |V|. By construction of M/P any v ∈ V is of the form u + P for some u ∈ M. Thus for any v ∈ V we may pick up some u ∈ M such that v = u + P. We now take those u to build up a set U ⊆ M, that is we have V = {u + P | u ∈ U} and |U| = |V|. We will now show that X ∪ U generates M. To do this consider any x ∈ M: as x + P ∈ M/P = lh(V) there are some ai ∈ R and ui ∈ U such that

x + P = ∑_{i=1}^{m} ai(ui + P) = (∑_{i=1}^{m} ai ui) + P

That is, x − (a1u1 + · · · + amum) ∈ P. From this again we find some bj ∈ R and pj ∈ X such that x − (a1u1 + · · · + amum) = b1p1 + · · · + bnpn, since X generates P. Altogether this yields

x = ∑_{i=1}^{m} ai ui + ∑_{j=1}^{n} bj pj ∈ lh(X ∪ U)

Again, as x ∈ M has been chosen arbitrarily, this proves M = lh(X ∪ U), that is X ∪ U generates M, and from this we get

rank(M) ≤ |X ∪ U| ≤ |X| + |U| = |X| + |V| = rank(P) + rank(M/P)

□

Proof of (3.62):

(i) If n = 0 then X := {x1, . . . , xn} degenerates to X = ∅, and as X generates M this implies M = 0. Thus the only choice of an element y1 ∈ M is y1 = 0, and this obviously is R-linearly dependent. Thus we may assume 1 ≤ n. As M is the R-linear hull of X, we can choose some ai,j ∈ R (where i ∈ 1 . . . n and j ∈ 1 . . . n + 1) such that

yj = ∑_{i=1}^{n} ai,j xi

Using these elements we first define the Z-subalgebra S of R generated by the ai,j. That is, we let

S := Z[ai,j | i ∈ 1 . . . n, j ∈ 1 . . . n + 1]

Note that Z is a noetherian ring (even a PID, by (2.59)) and S is a finitely generated Z-algebra (by construction). Hence by the Hilbert basis theorem (2.33) S is a noetherian ring again. We now take the S-module generated by the xi:

MS := lhS{x1, . . . , xn}

Now, as MS is a finitely generated S-module and S is a noetherian ring, MS is a noetherian S-module, according to (??). And clearly

yj = ∑_{i=1}^{n} ai,j xi ∈ MS

We will now recursively define the elements yj[m] ∈ MS for any index j ∈ 1 . . . n + 1 and any m ∈ N. We start by taking yj[0] := yj; then we continue

yj[m + 1] := ∑_{i=1}^{n} ai,j yi[m]

To streamline our notation let us abbreviate zm := y_{n+1}[m]. Now suppose that Y := {y1, . . . , yn, y_{n+1}} was R-linearly independent. As S ≤r R is a subring of R, in particular Y would then be S-linearly independent. In an intermediate claim we will prove that in this case the elements

y1[m], . . . , yn[m], zm = y_{n+1}[m], z0, . . . , z_{m−1}

are S-linearly independent for any m ∈ N. The case m = 0 is trivial: as yj[0] = yj, our assumption on the yj grants their S-linear independence. Let us use induction on m; then for the induction step we have to regard an S-linear relation of the form

∑_{j=1}^{n+1} aj yj[m+1] + ∑_{l=0}^{m} bl zl = 0

Let us insert the recursive definition of the yj[m+1] into this relation; then we find a relation of the form

0 = ∑_{j=1}^{n+1} aj yj[m+1] + ∑_{l=0}^{m} bl zl
  = ∑_{j=1}^{n+1} aj ∑_{i=1}^{n} ai,j yi[m] + ∑_{l=0}^{m} bl zl
  = ∑_{i=1}^{n} (∑_{j=1}^{n+1} aj ai,j) yi[m] + ∑_{l=0}^{m−1} bl zl + bm y_{n+1}[m]

By the induction hypothesis these elements are S-linearly independent, such that we find bl = 0 for any l ∈ 0 . . . m and, for any i ∈ 1 . . . n, also

∑_{j=1}^{n+1} aj ai,j = 0

But this allows us to give a linear relation between the yj; just use the definition of the ai,j and compute (in the same manner as before)

∑_{j=1}^{n+1} aj yj = ∑_{i=1}^{n} (∑_{j=1}^{n+1} aj ai,j) xi = 0

But by assumption the yj are S-linearly independent, such that we necessarily have aj = 0 for any j ∈ 1 . . . n+1 as well. Altogether we have proved the intermediate claim of linear independence. In particular the elements z0, z1, z2 and so on are S-linearly independent. Thus, if we define

But by assumption the yj are S-linearly independent, such that wenecessarily have aj = 0 for any j ∈ 1 . . . n + 1 as well. Altogetherwe have proved the intermediate claim of linear independance. Inparticular the elements z0, z1, z2 and so on are S-linearly independent.Thus - if we define

Mm := lhS{zl | l ∈ 0 . . . m}

we get an ascending chain M0 ⊆ M1 ⊆ M2 ⊆ . . . of S-submodules of MS. Yet this chain is strictly increasing: suppose we had z_{m+1} ∈ Mm for some m ∈ N; then there would be some s0, . . . , sm ∈ S such that

z_{m+1} = ∑_{l=0}^{m} sl zl

which contradicts the S-linear independence of the zl, and hence we have M0 ⊂ M1 ⊂ M2 ⊂ . . . . But a strictly increasing chain within the noetherian module MS is impossible. The only way to resolve this contradiction is to drop the assumption of R-linear independence of the yj. That is, Y is R-linearly dependent, as we had claimed.
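Claim (i) can be probed computationally: in the free module Z² (generated by two elements) any three vectors admit a non-trivial integral relation. The sketch below (plain Python; the vectors and the coefficient bound are illustrative choices, not part of the text) finds one by brute force:

```python
# In M = Z^2 (generated by 2 elements) any 3 vectors y1, y2, y3
# are Z-linearly dependent; search small integer coefficients
# (a1, a2, a3), not all zero, with a1*y1 + a2*y2 + a3*y3 = 0.

from itertools import product

def find_relation(vectors, bound=10):
    """Brute-force a non-trivial integer relation among 2D vectors."""
    for coeffs in product(range(-bound, bound + 1), repeat=len(vectors)):
        if any(coeffs) and all(
            sum(c * v[k] for c, v in zip(coeffs, vectors)) == 0
            for k in range(2)
        ):
            return coeffs
    return None

ys = [(1, 0), (0, 1), (2, 3)]
rel = find_relation(ys)
print(rel)  # some non-trivial relation, e.g. proportional to (-2, -3, 1)
```

Of course the brute-force search is only a finite probe; the theorem guarantees such a relation exists over any commutative ring.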

(ii) Since the basis B by definition generates M as an R-module and the rank of M is the minimal cardinality of a generating set, trivially

rank(M) ≤ |B|

It remains to prove the equality. Let us choose some generating set G ⊆ M of minimal cardinality, that is |G| = rank(M).

In a first step we assume that n := |G| ∈ N is finite. Suppose we had n < |B|; then we could choose some {b1, . . . , bn, b_{n+1}} ⊆ B. But as G generates M this would mean that {b1, . . . , bn, b_{n+1}} is R-linearly dependent, according to (i). But this contradicts the R-linear independence of B, and hence we have

rank(M) = |G| = n = |B|

In the second step we suppose that |G| is infinite. Then necessarily B is infinite as well - else we would have |B| < |G| = rank(M). As G generates M, for any b ∈ B ⊆ M we can choose a finite subset Fb ⊆ G such that b ∈ 〈Fb〉m. Now define

F := ⋃_{b∈B} Fb ⊆ G

Then, as B generates M and any b ∈ B is generated by Fb ⊆ F, it is clear that F also is a generating set of the entire module M:

〈F〉m = ∑_{b∈B} 〈Fb〉m ⊇ ∑_{b∈B} Rb = lhR(B) = M

Since any Fb has been finite, but B is infinite, the cardinality of F is precisely the cardinality of B. But as F ⊆ G this already yields the converse inequality

|B| = |F| ≤ |G|

(iii) Choose some generating set X ⊆ M of M with |X| = rank(M). Further, as F is free, we can choose some basis B ⊆ F of F. Let us first suppose X is a finite set, say X = {x1, . . . , xn}. The basis B is an R-linearly independent subset of M, and hence B cannot contain more than n elements, as we have seen in (i). Hence by (ii) we have

rank(F) = |B| ≤ n = |X| = rank(M)


So let us now consider the case that X is infinite - this will require some slight tricks of cardinal arithmetic: as X generates M = lh(X), for any y ∈ B there is a finite set ω(y) ⊆ X such that y ∈ lh(ω(y)). Hence there is a map from B to the set Ω(X) := {Z ⊆ X | #Z < ∞} of finite subsets of X, given by

ω : B → Ω(X) : y ↦ ω(y)

But as X is infinite we have |X| = |Ω(X)|. Now consider any finite subset Z ∈ Ω(X), say Z = {z1, . . . , zn} ⊆ X. Then the fibre of Z is given to be ω^{−1}(Z) = {y ∈ B | ω(y) = Z}. Thereby ω(y) = Z yields y ∈ lh(ω(y)) = lh(Z). Note that since ω^{−1}(Z) ⊆ B is contained in the basis B of F, it is R-linearly independent. And as lh(Z) is generated by at most n elements, (i) implies that ω^{−1}(Z) cannot contain more than n elements: #ω^{−1}(Z) ≤ #Z. Thus any fibre of ω is finite. This now means that the cardinality of B cannot exceed the cardinality of Ω(X). And altogether that is

rank(F) = |B| ≤ |Ω(X)| = |X| = rank(M)

2

Proof of (3.60):
If Φ : M ∼=m N is an isomorphism of R-modules, then in particular Φ : M ↠ N is an epimorphism. Hence by (3.59) we find that rank(N) ≤ rank(M). But as the inverse Φ−1 : N ↠ M is surjective as well, we also have rank(M) ≤ rank(N). Altogether we have found

rank(M) = rank(N)

Conversely suppose that rank(M) = rank(N). As both M and N are free we may choose bases A ⊆ M of M and B ⊆ N of N. As R is commutative and non-zero we may invoke (3.62.(ii)) to see

|A| = rank(M) = rank(N) = |B|

That is, there is some bijection σ : A ←→ B ⊆ N. As A is a basis of M we may now expand this map linearly. That is, we use (3.45.(ii)) to define an R-module homomorphism

Φ : M → N : ∑x∈A axx 7→ ∑x∈A axσ(x) = ∑y∈B aσ−1(y)y

Likewise, as B is a basis as well, we can linearly expand σ−1 : B ←→ A ⊆ M to an R-module homomorphism Ψ : N → M. Then it is immediately clear that ΨΦ = 11M and ΦΨ = 11N. That is, Φ : M → N is an isomorphism with inverse Φ−1 = Ψ. In particular we have Φ : M ∼=m N as we had claimed.

2
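To make the linear-extension argument tangible: for R = Z and M = N = Z2 a bijection between two bases extends to mutually inverse homomorphisms. The following minimal sketch (hypothetical matrices, not from the text) takes A to be the standard basis and B = {(1, 1), (0, 1)}, and checks ΨΦ = 11M and ΦΨ = 11N:

```python
def matmul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Phi extends σ : A -> B linearly (columns = images of the standard basis),
# Psi extends σ⁻¹ : B -> A; both are Z-module homomorphisms of Z².
Phi = [[1, 0], [1, 1]]
Psi = [[1, 0], [-1, 1]]

assert matmul(Psi, Phi) == [[1, 0], [0, 1]]   # ΨΦ = 11
assert matmul(Phi, Psi) == [[1, 0], [0, 1]]   # ΦΨ = 11
```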

Proof of (3.65):


(i) Of course the proof will be an application of the lemma of Zorn. So let us regard the collection of all R-linearly independent subsets L′ ⊆ M containing L, formally that is

L := { L′ ⊆ M | L′ is R-linearly independent, L ⊆ L′ }

It is clear that L ∈ L and hence L 6= ∅ is non-empty. Also L is partially ordered under the inclusion ⊆. Now consider some chain (Li) ⊆ L (where i ∈ I) in L and let

L′ := ⋃i∈I Li ⊆ M

In order to establish L′ ∈ L we have to show that L′ is R-linearly independent again. Thus consider any xk ∈ L′ and any ak ∈ R (where k ∈ 1 . . . n) such that

a1x1 + · · ·+ anxn = 0

By construction of L′, for any k ∈ 1 . . . n there is some i(k) ∈ I such that xk ∈ Li(k). Since the Li form a chain we can assume (by renumbering the xk) that

Li(1) ⊆ Li(2) ⊆ . . . ⊆ Li(n)

In particular xk ∈ Li(n) for any k ∈ 1 . . . n. But as Li(n) is linearly independent (it belongs to L) the above equality implies a1 = · · · = an = 0. Hence L′ is linearly independent as well, such that L′ ∈ L. Hence every chain in L has an upper bound, so that we may invoke the lemma of Zorn to find a maximal element of L.

(ii) If x ∈ L, then x ∈ lh(L) is clear, such that we may take a = 1. Also if there is some a 6= 0 with ax = 0, then there is nothing to prove. So from now on we assume x 6∈ L and ax = 0 =⇒ a = 0 for any a ∈ R. Then, as L is a maximal linearly independent set, L ∪ {x} is no longer linearly independent. That is there are some y1, . . . , yn ∈ L and some a, b1, . . . , bn ∈ R that are not identically zero, such that

ax + b1y1 + · · ·+ bnyn = 0

Thereby ax = 0 cannot be, as else {y1, . . . , yn} ⊆ L would be linearly dependent. And by assumption this already implies a 6= 0. Now we see that ax = −(b1y1 + · · ·+ bnyn) ∈ lh(L) holds true as well.

(iii) If L is a linearly independent set of M, then clearly F := lh(L) is generated by L (by construction) and L is linearly independent (by assumption). That is L is a basis of F and F is free. If now R 6= 0 is a non-zero commutative ring, then lemma (3.62.(iii)) yields

|L| = rank(F ) ≤ rank(M)

(iv) We first prove that M/F, where F = lh(L), is a torsion module. Hence we consider any z = x + F ∈ M/F. As L is maximally linearly independent,


(ii) yields some a ∈ R with a 6= 0 such that ax ∈ F . Hence

az = ax+ F = 0 + F = 0 ∈ M/F

And if F ≤m M is a free submodule of M with basis B, then B ⊆ M clearly is linearly independent. It remains to prove the maximality of B. Thus suppose B ⊆ X for some linearly independent X ⊆ M. If there is some x ∈ X with x 6∈ B, then (as M/F is a torsion module) we can choose some a ∈ R with a 6= 0 such that a(x + F ) = 0 + F. As above, that is ax ∈ F = lh(B). That is there are some b1, . . . , bn ∈ R and y1, . . . , yn ∈ B such that

ax = b1y1 + · · ·+ bnyn

Thus b1y1 + · · ·+ bnyn − ax = 0 is a linear relation with a 6= 0 within the set X. That is X is not linearly independent, and hence B ⊆ M is maximally linearly independent.

(v) We will first prove that |L| = |L′|. To do this we regard the map α : L → L′ : x 7→ axx. By definition of L′ it is clear that α is surjective. Thus it remains to show the injectivity: suppose there were some x, y ∈ L such that x 6= y and axx = ayy. Then

axx − ayy = 0

As {x, y} ⊆ L is linearly independent, this would imply ax = ay = 0 in contradiction to the choice of ax and ay. Hence α also is injective, that is we have established the bijection α : L ←→ L′.

It remains to prove the linear independence of L′. Suppose we are given some x(k) ∈ L′ and b(k) ∈ R (where k ∈ 1 . . . n) such that

0 = b(1)a(1)x(1) + · · ·+ b(n)a(n)x(n)

Hereby we denoted a(k) := ax(k) ∈ R. As {x(1), . . . , x(n)} ⊆ L is linearly independent this implies b(k)a(k) = 0 for any k ∈ 1 . . . n. But as R is an integral domain and a(k) 6= 0 this already is b(k) = 0 for any k ∈ 1 . . . n. That is L′ also is linearly independent.

(vi) As L is a maximal linearly independent set, we have seen in (ii) that for any x ∈ K there is some ax ∈ R with ax 6= 0 and axx ∈ Q := lh(L). Let us now denote K′ := { axx | x ∈ K } and P := lh(K′). By (v) we know that K′ is linearly independent again, and hence P is a free submodule of M with basis K′. In fact, as K′ ⊆ Q we even have P ⊆ Q. Now by (3.62) again we find

|K| = |K′| = rank(P ) ≤ rank(Q) = |L|

Reversing the roles of K and L in the above argument we also see that |L| ≤ |K| and hence |K| = |L|.

(vii) According to (i) we may pick maximal linearly independent sets K ⊆ P and { y + P | y ∈ L } ⊆ M/P for some sufficient L ⊆ M. First of all it is clear that K ∩ L = ∅, because if we had x ∈ K ∩ L


then x ∈ K ⊆ P and hence x + P = 0 + P, such that { x + P } would be linearly dependent, in contrast to x ∈ L. Hence we have |K ∪ L| = |K| + |L|. It remains to prove that K ∪ L is a maximal linearly independent set in M, as we then have

free(M) = |K ∪ L| = |K|+ |L| = free(P ) + free(M/P )

For the linear independence of K ∪ L suppose we have some xi ∈ K (where i ∈ 1 . . .m) and yj ∈ L (where j ∈ 1 . . . n) such that

z := a1x1 + · · ·+ amxm + b1y1 + · · ·+ bnyn = 0

for some ai and bj ∈ R. As xi ∈ K ⊆ P we then find that 0 + P = z + P = ∑j bjyj + P = ∑j bj(yj + P ). But as yj ∈ L we know that the yj + P are linearly independent, and hence necessarily b1 = · · · = bn = 0. That is 0 = z = ∑i aixi, but as the xi are linearly independent as well, this implies a1 = · · · = am = 0 once more. Hence there only are trivial linear relations in K ∪ L, that is K ∪ L is linearly independent.

It remains to prove the maximality of K ∪ L. Hence suppose we are given any z ∈ M with z 6∈ K ∪ L. As z + P ∈ M/P and { y + P | y ∈ L } is a maximal linearly independent set, (ii) implies that there is some c ∈ R with c 6= 0 such that cz + P ∈ lh({ y + P | y ∈ L }). That is there are some yj ∈ L and bj ∈ R (where j ∈ 1 . . . n again) such that cz + P = ∑j bj(yj + P ) = ∑j bjyj + P. That is cz − ∑j bjyj ∈ P. And as K ⊆ P is a maximal linearly independent set as well, (ii) once more implies that there is some d ∈ R with d 6= 0 such that d(cz − ∑j bjyj) ∈ lh(K). Altogether that is

d ( cz − (b1y1 + · · ·+ bnyn) ) = a1x1 + · · ·+ amxm

for some xi ∈ K and ai ∈ R (where i ∈ 1 . . .m again). Rearranging this we find dcz − (∑i aixi + ∑j dbjyj) = 0. Thereby dc 6= 0, as c 6= 0 and d 6= 0 and R is an integral domain. That is we have found a non-trivial linear relation in K ∪ L ∪ {z}, which means that this set is linearly dependent. So at last K ∪ L is maximally linearly independent.

(viii)

2


Chapter 18

Proofs - Commutative Algebra

Proof of (2.28):
The implication (a) =⇒ (b) is quite easy to see: given an ascending chain (xk) ⊆ X (where k ∈ N), that is x0 ≤ x1 ≤ x2 ≤ . . . , let us take A := { xk | k ∈ N }. By assumption A has a maximal element a∗ ∈ A. Pick some s ∈ N such that a∗ = xs. If now i ∈ N, then (as (xk) is an ascending chain) xs ≤ xs+i, and due to the maximality of xs this already is xs+i = xs.

The implication (b) =⇒ (a) will be proved as ¬(a) =⇒ ¬(b), which is only slightly more difficult: as (a) is untrue there is some non-empty subset A ⊆ X without a maximal element. Negating the statement that A contains a maximal element yields:

∀ a ∈ A ∃ b ∈ A : a < b

That is (by the axiom of choice) we may pick a map ν : A → A such that ν(a) := b for some b ∈ A with a < b. And as A is non-empty we may start with some x0 ∈ A. Now recursively define xk+1 := ν(xk) = νk+1(x0). Then by construction we have found an infinitely ascending chain of elements x0 < x1 < x2 < . . . , which is the negation of (b).

2
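To see the equivalence at work in a familiar poset, consider the ideals nZ of Z ordered by inclusion: aZ ⊆ bZ means b | a, so an ascending chain of ideals is a chain of successive divisors and must terminate. A small sketch (the helper divisor_chain is hypothetical, not from the text):

```python
def divisor_chain(n):
    # build a strictly ascending chain nZ = a0·Z ⊂ a1·Z ⊂ ... ⊂ 1·Z = Z
    # of ideals of Z, represented by their positive generators
    chain = [n]
    a = n
    while a > 1:
        # dividing out the smallest prime factor gives the next ideal up
        a = next(a // p for p in range(2, a + 1) if a % p == 0)
        chain.append(a)
    return chain

assert divisor_chain(12) == [12, 6, 3, 1]   # 12Z ⊂ 6Z ⊂ 3Z ⊂ Z
```

Any such chain has length bounded by the number of prime factors of n, so the maximal element Z is always reached - the passage from (ACC) to maximal elements in miniature.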

Proof of (2.27):

• (a) =⇒ (b): we have the assumption of (ACC) and need to show that there is some maximal element a∗ ∈ A∗. That is there has to be some a∗ ∈ A such that (a∗ ⊆ a) =⇒ (a∗ = a) for any a ∈ A. Yet this statement is logically equivalent to saying

∃ a∗ ∈ A ∀ a ∈ A : ¬(a∗ ⊆ a) ∨ (a∗ = a)

Suppose this was not the case - that is the negation of this statement was true. This is ∀ a∗ ∈ A ∃ a ∈ A such that (a∗ ⊆ a) ∧ (a∗ 6= a). And clearly this means nothing but

∀ a∗ ∈ A ∃ a ∈ A : a∗ ⊂ a


Thus we could start by picking any one a1 ∈ A as a∗. And by the above property there would be some a2 ∈ A such that a1 ⊂ a2. Continuing this way (using induction with a∗ := ak and ak+1 := a) we could construct an infinitely ascending chain of ideals a1 ⊂ a2 ⊂ . . . ⊂ ak ⊂ ak+1 ⊂ . . . in contradiction to (ACC). Hence the assumption has to be false, and this means that (b) holds true.

• (b) =⇒ (c): Consider any ideal b i R. Then we denote the set of all ideals generated by a finite subset B ⊆ b by A, that is

A := { 〈B 〉i | B ⊆ b, #B < ∞ }

As 0 ∈ b we have 0 ∈ A; in particular A 6= ∅ is non-empty. Hence - by assumption (b) - there is a maximal element a∗ ∈ A∗ of A. And as a∗ ∈ A there is some finite set B = {a1, . . . , ak} ⊆ b generating a∗. In particular a∗ ⊆ b. Now suppose b 6= a∗, that is there is some b ∈ b such that b 6∈ a∗. Then we would also have

a∗ ⊂ 〈a1, . . . , ak, b 〉i ∈ A

in contradiction to the maximality of a∗. Thus a∗ equals b, and in particular we conclude that b = a∗ = 〈a1, . . . , ak 〉i is finitely generated.

• (c) =⇒ (a): consider an ascending chain a0 ⊆ a1 ⊆ . . . ⊆ ak ⊆ . . . of ideals in R. Then we define the set

b := ⋃k∈N ak

As the ak have been a chain, we find that b i R is an ideal due to (2.4.(i)). And hence by assumption (c) there are finitely many elements a1, . . . , an ∈ R generating b = 〈a1, . . . , an 〉i. So by construction of b, for any i ∈ 1 . . . n (as ai ∈ b) there is some s(i) ∈ N such that ai ∈ as(i). Now let s := max{ s(1), . . . , s(n) }. Then for any i ∈ 1 . . . n and any t ≥ s we get ai ∈ as(i) ⊆ as ⊆ at. In particular

b = 〈a1, . . . , an 〉i ⊆ at ⊆ b

Of course this means b = at for any t ≥ s. And this is just another way of saying that the chain of the ak stabilized at level s.

By now we have proved the equivalence of (a), (b) and (c) in the definition of noetherian rings. And trivially we also have (c) =⇒ (d). These are the important equivalences; it merely is a neat fact that (d) also implies (c). Though we will not use it, we wish to present a proof (note that this is substantially more complicated than the important implications above).

• We will now prove (d) =⇒ (c) in several steps: we want to prove that any ideal of R is finitely generated. Thus let us take the set of all those ideals of R not being finitely generated:

Z := { a i R | a is not finitely generated }

The idea of the proof will then be the following: suppose Z 6= ∅, then


by an application of Zorn's lemma, there is some maximal element p ∈ Z∗. In a second step we will then prove that any p ∈ Z∗ is prime. So by assumption (d) p is finitely generated. But because of p ∈ Z it also is not finitely generated - a contradiction. This can only mean Z = ∅, and hence any ideal of R is finitely generated. Thus it remains to prove:

• Z 6= ∅ =⇒ Z∗ 6= ∅: thus consider a chain (ai) (where i ∈ I) in Z. Then we denote the union of this chain by

b := ⋃i∈I ai

By (2.4.(i)) b i R is an ideal of R again. Now suppose b 6∈ Z; then b would be finitely generated, say b = 〈b1, . . . , bn 〉i. By construction of b, for any k ∈ 1 . . . n (as bk ∈ b) there is some i(k) ∈ I such that bk ∈ ai(k). And as the ai form a chain we may choose i ∈ I such that ai = max{ ai(1), . . . , ai(n) }. Thereby b = 〈b1, . . . , bn 〉i ⊆ ai ⊆ b. That is ai = 〈b1, . . . , bn 〉i would be finitely generated, in contradiction to ai ∈ Z. Thus we have b ∈ Z again, and hence b is an upper bound of (ai). Now - as any chain has an upper bound, by the lemma of Zorn - there is some maximal element p ∈ Z∗.

• p ∈ Z∗ =⇒ p prime: First note that p 6= R, as R = 〈1R 〉i is finitely generated. Now suppose p was not prime; then there would be some f, g ∈ R such that f 6∈ p, g 6∈ p but fg ∈ p. Now let a := p + fR; as f 6∈ p we find p ⊂ a. And as p is a maximal element of Z this means that a is finitely generated, say a = 〈a1, . . . , ak 〉i. As the ai ∈ a = p + fR there are some pi ∈ p and bi ∈ R such that ai = pi + fbi. Therefore

a = 〈p1, . . . , pk, f 〉i = p1R+ · · ·+ pkR+ fR

The inclusion ” ⊇ ” is clear, as pi ∈ p ⊆ a and f ∈ a. Conversely consider x ∈ a; that is there are some xi ∈ R such that x = ∑i xiai = ∑i xipi + f ∑i xibi ∈ p1R + · · · + pkR + fR. Next we note that

p = f(p : f) + p1R+ · · ·+ pkR

where f(p : f) = (fR)(p : f) = { fb | b ∈ p : f } i R. The inclusion ” ⊇ ” is easy: pi ∈ p is true by definition. Thus consider any element x ∈ f(p : f). That is x = fb for some b ∈ p : f = { b ∈ R | fb ∈ p }. In particular x = bf ∈ p, too. For the converse inclusion we are given any q ∈ p ⊆ a. That is there are some xi ∈ R and y ∈ R such that q = x1p1 + · · · + xkpk + yf. As q and all the pi are contained in p this implies yf ∈ p and thereby y ∈ p : f. Thus yf ∈ f(p : f) and hence q ∈ p1R + · · · + pkR + f(p : f). Finally we prove

p : a 6∈ Z

Clearly we get p ⊆ p : a and g ∈ p : a (because of ga = gp + gfR ⊆ p). But g 6∈ p and hence p ⊂ p : a. But by the maximality of p in Z this means p : a 6∈ Z. Thus p : a is finitely generated and hence a(p : a) is finitely generated, too. But as the sum of finitely generated ideals is finitely


generated by (1.54), this means that p = a(p : a) + p1R + · · · + pkR is finitely generated, too, in contradiction to p ∈ Z. Thus the assumption of p not being prime is false.

2

Proof of (2.31):

(i) We will only prove the noetherian case - the artinian case is an analogous argument involving descending (instead of ascending) chains of ideals. Thus consider an ascending chain of ideals in R/a

u0 ⊆ u1 ⊆ . . . ⊆ uk ⊆ . . . i R/a

By the correspondence theorem (1.61) the ideals uk are of the form uk = bk/a for sufficient ideals bk := %−1(uk) i R. Now suppose b ∈ bk; then b + a ∈ uk ⊆ uk+1 = bk+1/a and hence b ∈ bk+1 again. Thus we have found an ascending chain of ideals in R

b0 ⊆ b1 ⊆ . . . ⊆ bk ⊆ . . . i R

As R is noetherian this chain has to be eventually constant, that is there is some s ∈ N such that for any i ∈ N we get bs+i = bs. But this clearly implies us+i = us as well, and hence R/a is noetherian again.

(ii) If ϕ : R ↠ S is a surjective homomorphism, then by the first isomorphism theorem (1.77) we get R/kn(ϕ) ∼=r S. But by assumption R is ? and hence R/kn(ϕ) is ?, too, due to (i). And because of this isomorphy we find that S thereby is ?, as well (this is clear by transferring a chain of ideals from S to R/kn(ϕ) and returning to S).

(iii) First suppose that R ⊕ S is ?. Trivially we have the following surjective ring homomorphisms R ⊕ S ↠ R : (a, b) 7→ a and R ⊕ S ↠ S : (a, b) 7→ b. Thus both R and S are ? due to (ii). Conversely suppose both R and S are noetherian. We will prove that this implies R ⊕ S to be noetherian again (the proof in the artinian case is completely analogous). Thus consider an ascending chain of ideals of R ⊕ S

u0 ⊆ u1 ⊆ . . . ⊆ uk ⊆ . . . i R⊕ S

Clearly R ⊕ 0 and 0 ⊕ S i R ⊕ S are ideals of R ⊕ S, too. Thus we obtain ideals ak := uk ∩ (R ⊕ 0) and bk := uk ∩ (0 ⊕ S) i R ⊕ S by intersection. And for these we get

uk = ak + bk

The inclusion ” ⊇ ” is clear, as ak, bk ⊆ uk. For the converse inclusion ” ⊆ ” we are given some (a, b) ∈ uk. As uk is an ideal we find (a, 0) = (1, 0)(a, b) and (0, b) = (0, 1)(a, b) ∈ uk. And as also (a, 0) ∈ R ⊕ 0 and (0, b) ∈ 0 ⊕ S this yields (a, b) = (a, 0) + (0, b) ∈ ak + bk. But as ak is an ideal of R ⊕ S contained in R ⊕ 0 it in particular is an ideal ak i R ⊕ 0. But clearly R and R ⊕ 0 are isomorphic under R ∼=r R ⊕ 0 : a 7→ (a, 0). Thus ak corresponds to the following


ideal ak := { a ∈ R | (a, 0) ∈ ak } i R. And thus we have found an ascending chain of ideals in R

a0 ⊆ a1 ⊆ . . . ⊆ ak ⊆ . . . i R

Yet as R was assumed to be noetherian there is some p ∈ N such that for any i ∈ N we get ap+i = ap. And returning to R ⊕ 0 (via ak = { (a, 0) | a ∈ ak }) we find that ap+i = ap as well. With the same argument for the bk we find some q ∈ N such that for any i ∈ N we get bq+i = bq. Now let s := max{ p, q }; then it is clear that for any i ∈ N we get

us+i = as+i + bs+i = as + bs = us

that is the chain of the uk has been eventually constant. And this means nothing but R ⊕ S being noetherian again.

2
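The decomposition uk = ak + bk used in (iii) can be checked exhaustively in a toy product ring. The sketch below (the ring Z/2 ⊕ Z/3 and all helper names are chosen for illustration, not taken from the text) enumerates all ideals of Z/2 ⊕ Z/3 and verifies u = (u ∩ (R ⊕ 0)) + (u ∩ (0 ⊕ S)) for each:

```python
from itertools import chain, combinations, product

R = list(product(range(2), range(3)))     # elements of the ring Z/2 ⊕ Z/3

def add(x, y): return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 3)
def mul(x, y): return ((x[0] * y[0]) % 2, (x[1] * y[1]) % 3)

def is_ideal(u):
    # additive subgroup (0 ∈ u, closed under +) absorbing multiplication by R
    return (0, 0) in u and all(add(x, y) in u for x in u for y in u) \
                       and all(mul(r, x) in u for r in R for x in u)

subsets = chain.from_iterable(combinations(R, k) for k in range(len(R) + 1))
ideals = [set(u) for u in subsets if is_ideal(set(u))]
assert len(ideals) == 4               # products of the ideals of Z/2 and Z/3

for u in ideals:
    a = {x for x in u if x[1] == 0}   # u ∩ (R ⊕ 0)
    b = {x for x in u if x[0] == 0}   # u ∩ (0 ⊕ S)
    assert u == {add(x, y) for x in a for y in b}   # u = a + b
```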

Proof of (2.33):
This proof requires some working knowledge of polynomials, as presented in sections 7.3 and 7.4. Yet as the techniques used in this proof are fairly easy to see, we chose to place the proof here already. In case you encounter problems with the arguments herein, refer to these sections first.

(1) In a first step let us consider the case where S = R[t] is the polynomial ring over R. Thus consider an ideal u i S; then we want to verify that u is finitely generated. The case u = 0 is clear, thus we assume u 6= 0. Then for any k ∈ N we denote

ak := { lc (f) | f ∈ u, deg(f) = k } ∪ { 0 }

where lc (f) := f [deg(f)] denotes the leading coefficient of f. We will first prove that ak i R is an ideal of R: 0 ∈ ak is clear by construction. Thus suppose a, b ∈ ak, say a = lc (f) and b = lc (g). As both f and g are of degree k we get (f + g)[k] = f [k] + g[k] = lc (f) + lc (g) = a + b. Thus if a + b = 0 then a + b ∈ ak is clear. And else, if a + b 6= 0, then f + g is of degree k, too, and lc (f + g) = (f + g)[k] = a + b. And as f + g ∈ u again we find a + b ∈ ak in both cases. Now let r ∈ R be any element; if ra = 0 then ra ∈ ak is trivial again. And else rf : k 7→ rf [k] is of degree k again and satisfies lc (rf) = (rf)[k] = r(f [k]) = r lc (f) = ra. And as also rf ∈ u we again found ra ∈ ak in both cases. Altogether ak i R is an ideal. Next we will also prove the containment ak ⊆ ak+1. That is we consider some a = lc (f) ∈ ak again. As f ∈ u we also have tf : k 7→ f [k − 1] ∈ u. Obviously deg(tf) = deg(f) + 1 = k + 1 and lc (tf) = (tf)[k + 1] = f [k] = a. Thus we have found a = lc (tf) ∈ ak+1. Altogether we have found the following ascending chain of ideals of R

a0 ⊆ a1 ⊆ . . . ⊆ ak ⊆ . . . i R
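The elementary coefficient operations used here - taking the leading coefficient, and multiplication by t shifting coefficients up - can be sketched as follows (polynomials as Python coefficient lists with f[k] the coefficient of t^k; the helper names are hypothetical):

```python
def deg(f):
    # degree of a non-zero polynomial, given as a coefficient list
    return max(k for k, c in enumerate(f) if c != 0)

def lc(f):
    # leading coefficient lc(f) = f[deg(f)]
    return f[deg(f)]

def shift(f):
    # multiplication by t: (tf)[k] = f[k-1]
    return [0] + f

f = [1, 0, 3]                          # f = 1 + 3t²
assert deg(f) == 2 and lc(f) == 3
tf = shift(f)                          # tf = t + 3t³
assert deg(tf) == deg(f) + 1
assert lc(tf) == lc(f)                 # which is why ak ⊆ ak+1
```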

• As R is noetherian this chain has to be eventually constant, that is there is some s ∈ N such that for any i ∈ N we get as+i = as. And


furthermore any ak is finitely generated (as R is noetherian). As we may choose generators of ak, we do so:

ak = 〈ak,1, . . . , ak,n(k) 〉i

and pick polynomials fk,i ∈ u of degree deg(fk,i) = k such that lc (fk,i) = ak,i. Then we define the following ideal of S

w := 〈 fk,i | k ∈ 0 . . . s, i ∈ 1 . . . n(k) 〉i

• For the first step it remains to prove that u = w. Then u is finitely generated, as we have given a list of generators explicitly. The inclusion w ⊆ u is clear, as any fk,i ∈ u. For the converse inclusion we start with some f ∈ u and need to show f ∈ w. This will be done by induction on the degree k := deg(f) of f. The case f = 0 is clear. If deg(f) = k = 0 then f ∈ R is constant, that is f = lc (f) ∈ a0 = 〈a0,1, . . . , a0,n(0) 〉i ⊆ w. Thus for the induction step we suppose k ≥ 1 and let a := lc (f) = f [k]. If k ≤ s then a ∈ ak = 〈ak,1, . . . , ak,n(k) 〉i. That is there are some bi ∈ R such that a = ∑i ak,ibi. Now let us define the polynomial

g := b1fk,1 + · · ·+ bn(k)fk,n(k) ∈ w

Then it is clear that lc (g) = ∑i bi lc (fk,i) = a = lc (f) and hence deg(f − g) < k. Thus by the induction hypothesis we get d := f − g ∈ w again, and hence f = d + g ∈ w, too. It remains to check the case k > s. Yet this can be dealt with using a similar argument. As k ≥ s we have a ∈ ak = as. That is a = ∑i as,ibi for sufficient bi ∈ R. This time

g := b1tk−sfs,1 + · · ·+ bn(s)tk−sfs,n(s) ∈ w

Then lc (g) = ∑i bi lc (fs,i) = a = lc (f) again, and as before this is deg(f − g) < k. By the induction hypothesis again d := f − g ∈ w and hence f = d + g ∈ w. Thus we have finished the case S = R[t].

(2) As a second case let us regard S = R[t1, . . . , tn], the polynomial ring in finitely many variables. Again we want to prove that S is noetherian, and this time we use induction on the number n of variables. The case n = 1 has just been treated in (1). Thus consider R[t1, . . . , tn, tn+1]. Then we use the isomorphy

R[t1, . . . , tn, tn+1] ∼=r R[t1, . . . , tn][tn+1] : ∑α f [α]tα 7→ ∑i≥0 ( ∑αn+1=i f [α] t1α1 . . . tnαn ) tn+1i

By the induction hypothesis R[t1, . . . , tn] is noetherian already. And by (1) this implies R[t1, . . . , tn][tn+1] to be noetherian, too. And using the


above isomorphy we finally find that R[t1, . . . , tn, tn+1] is noetherian.

(3) We now easily derive the general claim: let S be finitely generated over R. That is there are finitely many e1, . . . , en ∈ S such that S = R[e1, . . . , en]. Then we trivially have a surjective homomorphism R[t1, . . . , tn] ↠ S : f 7→ f(e1, . . . , en) from the polynomial ring onto S. And by (2.31.(ii)) this implies that S is noetherian, too.

2

Proof of (2.38):
Let us denote the prime ring of R by P := 〈∅ 〉r. This is the image of the uniquely determined ring homomorphism from Z to R (induced by 1 7→ 1). That is P = im(Z → R), and in particular P is noetherian by (2.31.(ii)). Let us now denote the set of finite, non-empty subsets of R by I

I := { i ⊆ R | i 6= ∅, #i < ∞ }

And for any i ∈ I let us denote Ri := P [i], the P-subalgebra of R generated by i. Then by Hilbert's basis theorem Ri is noetherian as well. Given any a ∈ R we clearly have i := {a} ∈ I and a ∈ Ri. In particular the Ri cover R. And given any two i, j ∈ I we have k := i ∪ j ∈ I, too. We now claim that Ri ∪ Rj ⊆ Rk: thus suppose i = {i1, . . . , in} and let a = f(i1, . . . , in) for some polynomial f ∈ P [t1, . . . , tn]. Then it is clear that a = g(i1, . . . , in, j1, . . . , jm) ∈ P [i ∪ j] = Rk if we only let g(t1, . . . , tn, tn+1, . . . , tn+m) := f(t1, . . . , tn) ∈ P [t1, . . . , tn+m]. Thus we have seen Ri ⊆ Rk, and Rj ⊆ Rk can be proved in complete analogy.

2

Proof of (2.36):

(i) This has been proved in (2.27) as the implication (a) =⇒ (b) already. The reader is asked to refer to page 413 for the proof.

(iii) Just let P := { b ∈ specR | p ⊆ b ⊂ q }. Then P 6= ∅ is non-empty, as p ∈ P. But as we have seen in (i) this already implies that there is some maximal element b ∈ P∗.

(iv) For any ideal R 6= a i R let us denote V(a) := { p ∈ specR | a ⊆ p } and n(a) := #V(a)∗ ∈ N ∪ {∞}. That is n(a) is the number of minimal prime ideals lying over a. By (2.14.(iii)) we know 1 ≤ n(a). Now let

A := { a i R | n(a) = ∞ }

be the set of all ideals having infinitely many minimal primes lying over them. We want to show that A = ∅. Hence assume A 6= ∅; then - by (i), as R is noetherian - we may choose some maximal element u ∈ A∗ of A. Clearly u cannot be prime: if it was, then u would be contained in precisely one minimal prime - namely u itself, that is V(u)∗ = {u}. In particular we had n(u) = 1 < ∞, which contradicts u ∈ A. Now, as u


is not prime, there are some a, b ∈ R such that ab ∈ u but a, b 6∈ u. Then we define the respective ideals

a := u + aR and b := u + bR

As a 6∈ u we have u ⊂ a and hence a 6∈ A due to the maximality of u. Likewise b 6∈ A, and this means that n(a), n(b) < ∞ are finite. But we will now prove

V(u)∗ ⊆ V(a)∗ ∪V(b)∗

To see this let p ∈ V(u)∗ be a minimal prime ideal over u. If a ∈ p then a ⊆ p. Else suppose a 6∈ p; then ab ∈ u ⊆ p implies b ∈ p, as p is prime. And thereby we get b ⊆ p. In any case p lies over one of the ideals a or b, say a ⊆ p (w.l.o.g.). If now q ∈ V(a) then in particular we get q ∈ V(u). Thus if q ⊆ p then the minimality of p over u implies q = p. That is p is a minimal element of V(a), and hence the inclusion. As both V(a)∗ and V(b)∗ are finite, V(u)∗ would have to be finite, too. But this contradicts u ∈ A again, so the assumption A 6= ∅ has to be abandoned. That is any ideal a of R only has finitely many minimal prime ideals lying over it.

(ii) Consider a set of prime ideals P 6= ∅ of the noetherian ring R. It has already been shown in (2.27) that P contains maximal elements, that is P∗ 6= ∅. We will now prove that P contains a minimal element as well. Suppose this was untrue; then we start with any p = p0 ∈ P. As p0 is not minimal in P, there is some p1 ∈ P such that p0 ⊃ p1. Again p1 is not minimal in P, such that we may choose some p2 ∈ P such that p1 ⊃ p2. Continuing that way we find an infinite, strictly descending chain of prime ideals

p = p0 ⊃ p1 ⊃ p2 ⊃ . . . ⊃ pk ⊃ . . .

This means that p is a prime ideal of infinite height, height(p) = ∞. But in a noetherian ring we have Krull's Principal Ideal Theorem (??). And this states that height(p) ≤ rank(p) < ∞. Hereby the rank is finite, as any ideal p in a noetherian ring is finitely generated.

2

Proof of (2.39):
We start with the equivalences for a | b. Hereby (a) =⇒ (c) is clear: b ∈ aR = { ah | h ∈ R } yields b = ah for some h ∈ R. (c) =⇒ (b): consider f ∈ bR, that is f = bg for some g ∈ R. But as b = ah this yields f = ahg ∈ aR. (b) =⇒ (a): as b ∈ bR ⊆ aR we have b ∈ aR.

We also have to check the equivalences for a ≈ b - recall that now R is an integral domain. (a) =⇒ (b): if aR = bR then in particular aR ⊆ bR and hence b | a. Likewise we get a | b. (b) =⇒ (c): Suppose b = αa and a = βb. If b = 0 then a = βb = β · 0 = 0 as well, and hence we may choose α := 1. Thus we continue with b 6= 0. Then b = αa = αβb. And as R is an integral domain and b 6= 0, we may divide by b and thereby obtain 1 = αβ. That is α ∈ R∗ and b = αa as claimed. (c) =⇒ (a): As b = αa we get bR ⊆ aR. And from a = α−1b we get aR ⊆ bR.


2
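The three characterizations of a | b, and of associateness a ≈ b, can be checked exhaustively in the integral domain Z (whose units are ±1). A small sketch, with a hypothetical helper ideal_contains testing membership x ∈ aZ:

```python
def ideal_contains(a, x):
    # x ∈ aZ, i.e. a | x (with 0Z = {0})
    return x == 0 if a == 0 else x % a == 0

for a in range(-6, 7):
    for b in range(-6, 7):
        divides = ideal_contains(a, b)                    # (c): b ∈ aZ
        inclusion = all(ideal_contains(a, b * g) for g in range(-10, 11))
        assert divides == inclusion                       # (b): bZ ⊆ aZ
        associated = divides and ideal_contains(b, a)     # (a): aZ = bZ
        assert associated == (b == a or b == -a)          # b = αa, α ∈ {±1}
```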

Proof of (2.40):

(i) By definition of 1 we have 1 · b = b and hence 1 | b; likewise we have a · 0 = 0 and hence a | 0 by (1.37). Now ah = ab for h := b, and hence a | ab is trivial again.

(i) Next suppose b = 0; then b = 0 · 1 and hence 0 | b. Conversely suppose 0 | b, that is there is some h ∈ R such that b = 0 · h = 0. Then we have just proved b = 0, as well. If a ∈ R∗ then 1 = aa−1 and hence a | 1. Conversely suppose a | 1, that is there is some h ∈ R such that ah = 1. As R is commutative this also is ha = 1, and hence a ∈ R∗. Finally let a | b, that is ah = b for some h ∈ R again. In particular this yields (ac)h = (ah)c = bc and hence ac | bc.

(ii) Now suppose u ∈ R is a non-zero divisor; by (i) we only have to show the implication ” ⇐= ”. Thus let (au)h = bu for some h ∈ R; then (ah − b)u = (au)h − bu = 0. But as u is a non-zero divisor this can only happen if ah − b = 0, and this already is a | b.

(iii) First of all a | a is clear by a · 1 = a. If now a | b and b | c (say ag = b and bh = c) then we get a | c from a(gh) = (ag)h = bh = c. And that a | b and b | a implies a ≈ b is true by definition (2.39).

(iv) As we have seen in (2.39), a ≈ b is equivalent to aR = bR. And from this it is clear that ≈ is reflexive, symmetric and transitive (i.e. an equivalence relation). Thus it only remains to observe that

[a] = { b ∈ R | a ≈ b } = { αa | α ∈ R∗ } = aR∗

2
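Claim (ii) - cancellation of a non-zero divisor u in a | b - can be verified exhaustively in the finite ring Z/6, whose non-zero divisors turn out to be exactly the units 1 and 5. A sketch with hypothetical helper names:

```python
N = 6   # work in the ring Z/6

def divides(a, b):
    # a | b in Z/N: there is some h with a·h = b
    return any(a * h % N == b % N for h in range(N))

# the non-zero divisors of Z/6
nzd = [u for u in range(1, N) if all(u * x % N != 0 for x in range(1, N))]
assert nzd == [1, 5]

for u in nzd:
    for a in range(N):
        for b in range(N):
            # au | bu  ⟺  a | b  whenever u is a non-zero divisor
            assert divides(a * u % N, b * u % N) == divides(a, b)
```

For a zero divisor such as u = 3 the equivalence fails (e.g. 1·3 | 2·3 in Z/6, yet 1 | 2 trivially but 3 | 0 gives degenerate cases), which is why the hypothesis on u is needed.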

Proof of (2.48):

(i) The statement is clear by induction on the number k of elements involved: if k = 1 then p | a1 is just the assumption. Thus consider k + 1 elements and let a := a1 . . . ak and b := ak+1. Then by assumption we have p | ab and (as p is prime) hence p | a or p | b. If p | b then we are done (with i = k + 1). And if p | a then by the induction hypothesis we find some i ∈ 1 . . . k such that p | ai. Altogether p | ai for some i ∈ 1 . . . k + 1.

(ii) If p ∈ R is prime then p 6= 0 by definition. And as also p 6∈ R∗ we have pR 6= R, as well. Now ab ∈ pR translates into p | ab and hence p | a or p | b. But this again is a ∈ pR or b ∈ pR. That is we have proved that pR is prime. Conversely suppose p 6= 0 and that pR is a prime ideal. As pR 6= R we have p 6∈ R∗. And as we also assumed p 6= 0 this means p ∈ R•. If we are now given a, b ∈ R with p | ab, then ab ∈ pR and hence a ∈ pR or b ∈ pR, as pR is prime. Thus we find p | a or p | b again, and altogether p is prime.
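The induction in (i) is easy to confirm numerically in Z: for a prime p, p divides a product precisely when it divides one of the factors. A small sketch (the sample factorizations are arbitrary choices, not from the text):

```python
from functools import reduce

def is_prime(p):
    # naive primality test, sufficient for this small check
    return p > 1 and all(p % d != 0 for d in range(2, p))

for p in filter(is_prime, range(2, 20)):
    for factors in [(4, 9, 25), (6, 35), (10, 21, 22)]:
        prod = reduce(lambda x, y: x * y, factors)
        # p | a1···ak  ⟺  p | ai for some i
        assert (prod % p == 0) == any(a % p == 0 for a in factors)
```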


(iii) ”p irreducible =⇒ αp irreducible”: suppose αp = ab for some a, b ∈ R. As α is a unit this yields p = (α−1a)b, and as p is irreducible hence α−1a ∈ R∗ or b ∈ R∗. But α−1a ∈ R∗ already implies a ∈ R∗ (with inverse a−1 = (α−1a)−1α−1). Conversely if αp is irreducible then p = α−1(αp) is irreducible, too, by what we have just proved.

(iii) ”p prime =⇒ αp prime”: suppose αp | ab for some a, b ∈ R. As α is a unit this yields p | (α−1a)b, and as p is prime we hence get p | α−1a or p | b. From p | α−1a we get αp | a, and p | b likewise implies αp | b. Thus we have proved that αp is prime, too. Conversely if αp is prime then p = α−1(αp) is prime, too, by what we have just proved.

(iv) Let f := ab; then it is clear that a | f and hence fR ⊆ aR. And as both a and b 6= 0 are non-zero, so is f (as R is an integral domain). Now suppose we had a ∈ fR, that is a = fg for some g ∈ R. Then we compute a = fg = abg. And as a 6= 0 this means 1 = bg. In particular b ∈ R∗ is a unit, in contradiction to b ∈ R•. Hence a ∈ aR \ fR, and this yields the claim fR ⊂ aR.

(v) Now let R be an integral domain, p ∈ R be prime and suppose p = ab for some a, b ∈ R. In particular p | ab and hence p | a or p | b. W.l.o.g. let us assume p | b, that is ph = b for some h ∈ R. Then p = ab = aph and hence p(1 − ah) = 0. Now as p 6= 0 this implies ah = 1, that is a ∈ R∗ is a unit of R. Thereby we have verified that p truly is irreducible.

(vi) Clearly we have R∗ ⊆ D, as k = 0 is allowed. And if c = αp1 . . . pk and d = βq1 . . . ql ∈ D it is clear that cd = (αβ)(p1 . . . pkq1 . . . ql) ∈ D again. Thus it remains to show the implication cd ∈ D ⟹ c ∈ D (due to the symmetry of the statement it then also follows that d ∈ D). Thus consider any c, d ∈ R such that cd = αp1 . . . pk ∈ D. We will prove the claim by induction on k, that is we will verify

∀ c, d ∈ R : cd = αp1 . . . pk ∈ D =⇒ c ∈ D

for any k ∈ N. The case k = 0 is trivial, since cd = α ∈ R∗ implies c ∈ R∗ ⊆ D. Thus regard k ≥ 1. We let I := { i ∈ 1 . . . k | pi | d }. (1) If there is some i ∈ I then pi | d. W.l.o.g. we may assume i = k. That is there is some h ∈ R such that d = hpk. Then

αp1 . . . pk = cd = chpk

As R is an integral domain and pk ≠ 0 we may divide by pk to find ch = αp1 . . . pk−1 ∈ D. By induction hypothesis that is c ∈ D already. (2) If I = ∅ then for any i ∈ 1 . . . k we have pi ∤ d. But as pk is prime and pk | cd this yields pk | c, say c = gpk. Then

αp1 . . . pk = cd = gdpk

We divide by pk again to find gd = αp1 . . . pk−1 ∈ D. And by induction hypothesis this means g ∈ D. But pk ∈ D is clear, as well, and therefore c = gpk ∈ D as claimed.

422 18 Proofs - Commutative Algebra


(vii) First consider the case α = 1 = β. Without loss of generality we may assume k ≤ l, and we will use induction on k to prove the statement. If k = 0 we have to prove l = 0. Thus assume l ≥ 1, then 1 = q1 . . . ql = q1(q2 . . . ql). That is q1 ∈ R∗ is a unit. But q1 has been prime and hence q1 ∉ R∗, a contradiction. Thus we necessarily have k = l = 0. Now consider the induction step k ≥ 1: then pk | p1 . . . pk = q1 . . . ql, and as pk is prime this means pk | qj for some j ∈ 1 . . . l. Without loss of generality we may assume j = l. Hence we found ql = γpk for some γ ∈ R. But ql has been prime and hence is irreducible by (v). This means γ ∈ R∗ (as pk ∉ R∗) and hence pk ≈ ql. Further we have

(p1 . . . pk−1)pk = p1 . . . pk = q1 . . . ql = γ(q1 . . . ql−1)pk

As pk ≠ 0 and R is an integral domain we may divide by pk and thereby get p1 . . . pk−1 = (γq1)q2 . . . ql−1. (As γq1 is prime again) the induction hypothesis now yields that k − 1 = l − 1 and there is some σ ∈ Sk−1 such that for any i ∈ 1 . . . k − 1 we have pi ≈ qσ(i). But this also is k = l and we may extend σ to Sk by letting σ(k) := k = l. (As q1 ≈ γq1) we hence found pi ≈ qσ(i) for any i ∈ 1 . . . k. Thus we have completed the case α = 1 = β. If now α and β ∈ R∗ are arbitrary, then we let p′1 := αp1, p′i := pi (for i ∈ 2 . . . k) and q′1 := βq1, q′j := qj (for j ∈ 2 . . . l). By (iii) the p′i and q′j are prime again, and by construction it is clear that p′i ≈ pi and q′j ≈ qj for any i ∈ 1 . . . k and j ∈ 1 . . . l. As p′1 . . . p′k = q′1 . . . q′l we may invoke the first case to find k = l and the required permutation σ ∈ Sk. But now for any i ∈ 1 . . . k we get pi ≈ p′i ≈ q′σ(i) ≈ qσ(i) and hence pi ≈ qσ(i) as claimed.

(viii) We only prove the equivalence for the least common multiple (the like statement for the greatest common divisor can be proved in complete analogy). Clearly lcm(A) = mR∗ implies m ∈ lcm(A). Thus let us assume m ∈ lcm(A), then we need to prove lcm(A) = mR∗. For "⊇" we are given any α ∈ R∗. As A | m it is clear that A | αm, as well. And if A | n then m | n (by assumption) and hence also αm | n (by (αm)(α−1h) = n for mh = n). Conversely for "⊆" consider n ∈ lcm(A). As A | m and n ∈ lcm(A) we get n | m. Analogously A | n and m ∈ lcm(A) imply m | n. Together that is m ≈ n and hence n = αm for some α ∈ R∗.

□

Proof of (2.49):

(i) Suppose 〈a | b〉 = ∞, that is for any k ∈ N there is some hk ∈ R such that a^k·hk = b. First suppose there was some k ∈ N such that hk = 0. Then b = a^k · 0 = 0, too, and then a | 0 also implies a = 0, a contradiction. Thus hk ≠ 0 for any k ∈ N. Now for any k ∈ N we find hk+1·a^(k+1) = b = hk·a^k. And as a ≠ 0 and R is an integral domain, hk+1·a^(k+1) = hk·a^k also yields hk+1·a = hk. That is hk+1 | hk and hence we have found the ascending chain of ideals

h0R ⊆ h1R ⊆ . . . ⊆ hkR ⊆ hk+1R ⊆ . . .


Now suppose we had hkR = hk+1R at any one stage k ∈ N. That is there is some α ∈ R such that hk+1 = αhk. Thereby we compute hk = a·hk+1 = αa·hk. And as hk ≠ 0 this implies αa = 1. That is a ∈ R∗, a contradiction to a ∈ R•. And this only leaves 〈a | b〉 < ∞.

(ii) As p is prime we have p ∈ R• and hence 〈p | a〉 < ∞ due to (i) (likewise for b and ab). That is we may let k := 〈p | a〉, l := 〈p | b〉 and n := 〈p | ab〉 ∈ N. Then by assumption there are some α, β ∈ R such that a = α·p^k and b = β·p^l. In particular ab = αβ·p^(k+l), such that p^(k+l) | ab and hence k + l ≤ n, as n is maximal among the integers m with p^m | ab. Now suppose n > k + l, say p^n·h = ab = αβ·p^(k+l). As R is an integral domain we may divide by p^(k+l) and thereby obtain p^(n−(k+l))·h = αβ. And as n − (k + l) ≥ 1 this yields p | αβ. Now recall that p has been prime, then we may assume (w.l.o.g.) that p | α, say pg = α. Then a = α·p^k = g·p^(k+1) and hence p^(k+1) | a. But this is a contradiction to k being maximal, and hence the assumption n > k + l has to be dropped. Altogether we have found n = k + l.
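As an aside, the additivity 〈p | ab〉 = 〈p | a〉 + 〈p | b〉 just proved can be watched concretely in the UFD Z. The following Python sketch is our own illustration (the helper name order is ours, standing in for 〈p | ·〉); it is not part of the proof:

```python
def order(p, a):
    """Multiplicity <p|a>: the largest n with p**n dividing a (for a != 0)."""
    assert a != 0
    n = 0
    while a % p == 0:
        a //= p
        n += 1
    return n

# Additivity <p|ab> = <p|a> + <p|b> for the prime p = 3:
a, b = 18, 15                  # 18 = 2 * 3**2 and 15 = 3 * 5
assert order(3, a) == 2
assert order(3, b) == 1
assert order(3, a * b) == order(3, a) + order(3, b)   # 3 == 2 + 1
```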

(♥) In a first step let us prove that for any a ∈ R• there is an irreducible p ∈ R such that p | a. To do this we recursively define a sequence of elements, starting with a0 := a. Now suppose we had already constructed a0, . . . , ak ∈ R such that a0R ⊂ a1R ⊂ . . . ⊂ akR. Of course ak = 0 cannot be, as 0 ≠ a = a0 ∈ akR. Then we iterate as follows: if ak is not irreducible then there are f, g ∈ R• such that ak = fg. Then by (2.48.(iv)) we get akR = (fg)R ⊂ fR. Thus if we now let ak+1 := f then we have extended our chain of ideals to

a0R ⊂ a1R ⊂ . . . ⊂ akR ⊂ ak+1R

As R is noetherian we cannot construct a strictly ascending chain of infinite length. That is our construction has to terminate at some finite level s ∈ N. And this can only be if p := as is irreducible. Thus we have a = a0 ∈ a0R ⊆ asR = pR and thereby p | a.

(iii) If a ∈ R∗ is a unit, then the claim is clear by choosing α := a and k := 0. Thus let us assume a ∈ R•. Then by (♥) above there is some irreducible element p1 ∈ R such that p1 | a, say p1a1 = a. Thus suppose we have already constructed the elements a1, . . . , ak ∈ R and p1, . . . , pk ∈ R such that the pi are irreducible, a = akp1 . . . pk and aR ⊂ a1R ⊂ . . . ⊂ akR. Suppose that ak ∉ R∗ is no unit, then we iterate as follows: Note that ak ≠ 0 as well (else a = 0) and hence ak ∈ R•. Thus by (♥) there is some irreducible element pk+1 ∈ R such that pk+1 | ak, say ak = pk+1ak+1. We note that by (2.48.(iv)) this implies akR = (pk+1ak+1)R ⊂ ak+1R. So we have found ak+1 and pk+1 irreducible such that ak+1p1 . . . pk+1 = akp1 . . . pk = a and

aR ⊂ a1R ⊂ . . . ⊂ akR ⊂ ak+1R

As R is noetherian we cannot construct a strictly ascending chain of infinite length. That is our construction has to terminate at some finite level s ∈ N. And this can only be if as ∈ R∗ is a unit. Thus we let α := as and thereby get a = αp1 . . . ps as claimed.
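The iteration in (♥) and (iii) can be mimicked in the noetherian domain Z: repeatedly split off an irreducible (here: smallest nontrivial) divisor until only a unit remains. The Python sketch below is our own illustration, not the proof itself (the function name factor is hypothetical):

```python
def factor(a):
    """Decompose a = unit * p1 * ... * ps, mirroring the strictly
    ascending chain aR < a1R < ... < asR used in the proof."""
    assert a != 0
    unit, a = (1, a) if a > 0 else (-1, -a)
    primes = []
    while a > 1:                 # a == 1 plays the role of a_s being a unit
        p = 2
        while a % p != 0:        # smallest divisor > 1 is irreducible in Z
            p += 1
        primes.append(p)
        a //= p                  # pass to a_{k+1}, where a_k = p * a_{k+1}
    return unit, primes

assert factor(-12) == (-1, [2, 2, 3])
```

The loop must terminate because the quotients strictly decrease, just as the ideal chain in the proof must become stationary.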


(iv) As R is noetherian and p ⊆ R is an ideal, there are finitely many ai ∈ R such that p = 〈a1, . . . , ak 〉i. Without loss of generality we may assume ai ≠ 0 (else we might just as well drop it). Then we use (iii) to find a decomposition of each ai into irreducible elements pi,j ∈ R

ai = αi · pi,1 . . . pi,n(i)

As ai ∈ p and p is prime this means that for any i ∈ 1 . . . k there is some j(i) ∈ 1 . . . n(i) such that pi,j(i) ∈ p (note that αi ∈ p cannot be, as else 1 = αi−1·αi ∈ p). Now let pi := pi,j(i) ∈ p. Then it is clear that

ai ∈ piR ⊆ 〈p1, . . . , pk 〉i ⊆ p = 〈a1, . . . , ak 〉i

for any i ∈ 1 . . . k. And thereby p = 〈a1, . . . , ak 〉i ⊆ 〈p1, . . . , pk 〉i as well, such that we truly find p = 〈p1, . . . , pk 〉i as claimed.

(vi) Let us prove the first statement a | b by induction on k. If k = 0 then a = 1 and hence a | b is clear. Let us now fix some abbreviations: let n(i) := 〈qi | b〉 ∈ N due to (i). And by definition of 〈qi | b〉 it is clear that qi^n(i) | b, say b = bi·qi^n(i), for any i ∈ 1 . . . k. Hence in case k = 1 we are done already, as a = q1^n(1) | b. Thus we may assume k ≥ 2. Then the induction hypothesis reads as

h := q1^n(1) . . . qk−1^n(k−1) | b

Assume that qk | h; as qk is prime this would mean (by construction of h) qk | qi for some i ∈ 1 . . . k − 1. That is qi = αiqk for some αi ∈ R. But as R is an integral domain and qi is prime, it also is irreducible by (2.48.(v)). This yields αi ∈ R∗ and hence qi ≈ qk. But by assumption this means i = k, in contradiction to i ∈ 1 . . . k − 1. Thus we have seen qk ∤ h, or in other words 〈qk | h〉 = 0. Now recall that h | b, that is there is some g ∈ R such that b = gh. Then we may easily compute (using (ii))

n(k) = 〈qk | b〉 = 〈qk | gh〉 = 〈qk | g〉 + 〈qk | h〉 = 〈qk | g〉

That is qk^n(k) = qk^〈qk|g〉 | g, such that g = qk^n(k)·f for some f ∈ R. Multiplying this identity with h we finally find a | b from

b = gh = qk^n(k)·f·h = af

It remains to verify the second claim in (vi). That is, consider any prime element p ∈ R such that p | b = af. As p is prime this means p | a or p | f. In the first case p | a (by construction of a) we find some i ∈ 1 . . . k such that p | qi. That is αp = qi for some α ∈ R. Again we use that qi is prime and hence irreducible to deduce that α ∈ R∗ and hence p ≈ qi. In the second case p | f we are done already, as f = b/a. Thus we have proved p | (b/a) or p ≈ qi for


some i ∈ 1 . . . k. Now suppose both statements were true, that is p | f and (w.l.o.g.) p ≈ qk. That is f = pr and qk = αp for some r ∈ R and α ∈ R∗. Then we compute

b = af = (h·qk^n(k))·(α−1·qk·r) = qk^(n(k)+1)·(α−1·h·r)

In particular qk^(n(k)+1) | b. But this clearly is a contradiction to the maximality of n(k) = 〈qk | b〉. Thus we have proved the truth of either p | (b/a) or p ≈ qi (and not both) as claimed.

(v) Suppose we had a ∈ pR for infinitely many p ∈ P. In particular we may choose a countable subset (pi) ⊆ P such that a ∈ piR for any i ∈ N. By (vi) we get

ak := p0^〈p0|a〉 . . . pk^〈pk|a〉 | a

From the construction it is clear that for fk := pk^〈pk|a〉 we get ak−1·fk = ak. And as pk | a we have 〈pk | a〉 ≥ 1, such that fk ≠ 1. In fact fk ∈ R•, as pk ∈ R• and R is an integral domain. Next let us choose some gk ∈ R such that a = ak·gk. Then for any k ∈ N we get ak·gk = a = ak+1·gk+1 = ak·fk+1·gk+1. It is clear that ak ≠ 0, as else a = 0·gk = 0. Thus we may divide ak from ak·gk = ak·fk+1·gk+1 and thereby find gk = fk+1·gk+1. Now using (2.48.(iv)) with gk+1 ≠ 0 and fk+1 ∈ R• we find gkR ⊂ gk+1R. As this is true for any k ∈ N we have found an infinitely ascending chain

g0R ⊂ g1R ⊂ g2R ⊂ . . .

of ideals in R. But this is a contradiction to R being noetherian. Thus the assumption has to be false, and that is: there are only finitely many p ∈ P with a ∈ pR.

□

Proof of (2.50):
In a first step we will prove the equivalence of (a), (b) and (c). Using these as a definition we will prove some more properties of UFDs first, before we continue with the proof of the equivalences of (d), (e) and (f) on page (469).

• (a) ⟹ (c): By (2.48.(v)) we already know that prime elements are irreducible (as R is an integral domain). From this, (1) and one part of (2) are trivial by assumption (a). We only have to prove the second part of (2): ∀ q ∈ R : q irreducible ⟹ q prime. By assumption (a) q admits a prime decomposition q = αp1 . . . pk. It is clear that k ≠ 0, as else q ∈ R∗. Now suppose k ≥ 2, then we could write q as q = (αp1)(p2 . . . pk). And as q is irreducible this would mean αp1 ∈ R∗ or p2 . . . pk ∈ R∗. But as p1 is prime, so is αp1, by (2.48.(iii)), and as R is an integral domain R• is closed under multiplication, such that p2 . . . pk ∈ R•. Hence none of this can be and this only leaves k = 1. That is q = αp1 and hence q is prime.


• (c) ⟹ (b): consider any 0 ≠ a ∈ R. Property (1) in (b) and (c) is identical, so we only need to verify (2) in (b). Thus let (α, p1, . . . , pk) and (β, q1, . . . , ql) be two irreducible decompositions of a. By assumption (2) in (c) all the pi and qj are prime already. Therefore αp1 . . . pk = a = βq1 . . . ql implies k = l and pi ≈ qσ(i) (for some σ ∈ Sk and any i ∈ 1 . . . k) by (2.48.(vii)). And by definition this already is (α, p1, . . . , pk) ≈ (β, q1, . . . , ql).

• (b) ⟹ (a): By assumption (b) any 0 ≠ a ∈ R has an irreducible decomposition. Thus it suffices to show that any irreducible element p ∈ R already is prime. Thus consider an irreducible p ∈ R and arbitrary a, b ∈ R. Then we need to show p | ab ⟹ p | a or p | b. To do this we write down irreducible decompositions of a and b

a = αp1 . . . pk and b = βpk+1 . . . pk+l

As p | ab there is some h ∈ R with ab = ph. Again we use an irreducible decomposition h = γq1 . . . qm of h. Finally let qm+1 := p, then

γq1 . . . qmqm+1 = ph = ab = αβp1 . . . pk+l

As all the pi and qj are irreducible, assumption (2) now yields that n := m + 1 = k + l and that there is some σ ∈ Sn such that for any i ∈ 1 . . . n we get qi ≈ pσ(i). Now let j := σ(m + 1), that is p = qm+1 ≈ pj. If j ∈ 1 . . . k then p ≈ pj | a and hence p | a. Likewise if j ∈ k + 1 . . . k + l then p ≈ pj | b and hence p | b. Together this means that p is truly prime.

□

Proof of (2.54):
Until now we have only proved the equivalence of (a), (b) and (c) in the definition of an UFD. That is we understand by an UFD an integral domain satisfying (a) (equivalently (b) or (c)). And using this definition only, we are able to prove the theorem given:

(i) Let I := { i ∈ 1 . . . k | p ≈ pi } and n := #I. Then for any i ∈ I there is some αi ∈ R∗ such that pi = αip. Then it is immediately clear that

a = βbp^n where β := α·∏i∈I αi and b := ∏j∉I pj

In particular p^n | a and this means n ≤ 〈p | a〉. Conversely suppose p^m | a for some m > n, say p^m·h = a = βbp^n. Then p^(m−n)·β−1·h = b and hence p | b, as p | p^(m−n). By definition of b this would imply p | pj for some j ∉ I. Say pj = γp for some γ ∈ R. But as pj is irreducible (and p ∉ R∗) this yields γ ∈ R∗ and hence p ≈ pj, in contradiction to j ∉ I. Thus we have proved the maximality of n and hence n = 〈p | a〉.

(ii) Using (i) this boils down to an easy computation: pick two prime decompositions a = αp1 . . . pk and b = βq1 . . . ql. For j ∈ 1 . . . l we let pk+j := qj. Then it is clear that ab = αβp1 . . . pk+l and thereby

〈p | ab〉 = #{ i ∈ 1 . . . (k + l) | p ≈ pi }
= #{ i ∈ 1 . . . k | p ≈ pi } + #{ j ∈ 1 . . . l | p ≈ qj }
= 〈p | a〉 + 〈p | b〉

(iv) Existence: consider any 0 ≠ a ∈ R and pick a prime decomposition a = βq1 . . . qk of a. As each qi is prime there is a uniquely determined pi ∈ P such that qi ≈ pi, say qi = αipi for some αi ∈ R∗. Then

a = αp1 . . . pk where α := βα1 . . . αk

For any p ∈ P now let n(p) := #{ i ∈ 1 . . . k | pi = p } ∈ N. Then it is clear that only finitely many n(p) are non-zero, as they sum up to

∑p∈P n(p) = k

That is n ∈ ⊕p N. And further it is clear that a has the representation

a = α·p1 . . . pk = α·∏p∈P p^n(p)

Uniqueness: Consider any a ≠ 0 represented as a = α·∏p p^n(p) as presented in the theorem. Then we fix any q ∈ P; it is clear that

a = αhq^n(q) where h := ∏p≠q p^n(p)

It is clear that 〈q | q^n〉 = n, as q^n | q^n and q^(n+1) | q^n would imply q | 1 and hence q ∈ R∗. Further it is easy to see that 〈q | αh〉 = 0. Because if q | αh then q | h (as α ∈ R∗) and hence there would have to be some p ≠ q such that q | p. That would be q ≈ p, due to the irreducibility of p, in contradiction to p ≠ q (as P was a representing system). Hence we may use (ii) to find

〈q | a〉 = 〈q | αhq^n(q)〉 = 〈q | αh〉 + 〈q | q^n(q)〉 = n(q)

In particular the exponent n(q) in this representation of a is uniquely determined to be 〈q | a〉. And as q has been arbitrary this means that n is uniquely determined. And as R is an integral domain this finally yields that α is uniquely determined, as well.

(iii) Suppose b = ah for some h ∈ R. Then for any prime p ∈ R we get 〈p | b〉 = 〈p | ah〉 = 〈p | a〉 + 〈p | h〉 ≥ 〈p | a〉. Conversely suppose 〈p | a〉 ≤ 〈p | b〉 for any prime p ∈ R. Pick a representing system P and use it to write a and b in the form

a = α·∏p∈P p^m(p) and b = β·∏p∈P p^n(p)


By assumption we have m(p) = 〈p | a〉 ≤ 〈p | b〉 = n(p). That is k(p) := n(p) − m(p) ≥ 0. Hence we explicitly find some h ∈ R such that

b = ah where h := α−1β·∏p∈P p^k(p)

(v) Let us abbreviate d := ∏p p^m(p), then we need to verify that d is a greatest common divisor of A. By definition of m(p) we have 〈p | d〉 = m(p) ≤ 〈p | a〉 for any p ∈ P and any a ∈ A. And by (iii) that is d | a for any a ∈ A, and hence d | A. Conversely consider any c ∈ R with c | A. In particular we have c ≠ 0, as 0 ∉ A ≠ ∅. And hence c | a implies 〈p | c〉 ≤ 〈p | a〉 for any p ∈ P (and a ∈ A) by (iii) again. This is 〈p | c〉 ≤ m(p) = 〈p | d〉 for any p ∈ P and hence c | d by (iii). The statement for the least common multiple can be proved in complete analogy. We only have to note that the assumption #A < ∞ guarantees n ∈ ⊕p N again.
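The recipe of (v), taking the minimum (resp. maximum) of the exponents prime by prime, can be made concrete for two integers. The sketch below is our own Python illustration (helper names exponents and gcd_lcm are ours), not part of the proof:

```python
from collections import Counter

def exponents(a):
    """Prime factorization of a positive integer as a Counter p -> <p|a>."""
    n, c = a, Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:
            c[p] += 1
            n //= p
        p += 1
    if n > 1:
        c[n] += 1
    return c

def gcd_lcm(a, b):
    """gcd via m(p) = min of exponents, lcm via n(p) = max of exponents."""
    ea, eb = exponents(a), exponents(b)
    g = l = 1
    for p in set(ea) | set(eb):
        g *= p ** min(ea[p], eb[p])
        l *= p ** max(ea[p], eb[p])
    return g, l

g, l = gcd_lcm(12, 18)
assert (g, l) == (6, 36)
assert g * l == 12 * 18        # min + max = sum, prime by prime
```

The last assertion previews the identity of (vi): the orders of gcd and lcm sum to the order of the product.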

(vi) Once more we choose a representing system P of the primes of R. And for any p ∈ P let us abbreviate m(p) := min{ 〈p | a〉, 〈p | b〉 } and likewise n(p) := max{ 〈p | a〉, 〈p | b〉 }. Then it is a standard fact that m(p) + n(p) = 〈p | a〉 + 〈p | b〉. Choosing d ∈ gcd{a, b} and m ∈ lcm{a, b}, we have 〈p | d〉 = m(p) and 〈p | m〉 = n(p) by (v). By (ii) that is

〈p | ab〉 = 〈p | a〉 + 〈p | b〉 = m(p) + n(p) = 〈p | d〉 + 〈p | m〉 = 〈p | dm〉

And as this holds true for any p ∈ P, (iv) immediately yields the claim.

(vii) We commence with the notation of (vi) and choose any e ∈ gcd{a, b, c}. Then by (v) the order of p ∈ P in e is given to be the following

〈p | e〉 = min{ 〈p | a〉, 〈p | b〉, 〈p | c〉 } = min{ min{ 〈p | a〉, 〈p | b〉 }, 〈p | c〉 } = min{ 〈p | d〉, 〈p | c〉 } = 〈p | e′〉

And the latter is the order of any greatest common divisor e′ ∈ gcd{c, d}. As this is true for any p ∈ P we thereby find that e and e′ are associated. The claim for the greatest common divisors follows immediately. The analogous argumentation also settles the case of the least common multiples.

□

Proof of (2.55):
As R is an UFD, it is an integral domain by definition. Thus choose any 0 ≠ x = b/a ∈ quotR that is integral over R, i.e. there are a1, . . . , an ∈ R with

x^n + a1·x^(n−1) + · · · + an = 0

Then we need to show that a | b already. But as R is an UFD we may let d := gcd{a, b} and write a = αd and b = βd. Then α and β are relatively prime, and a | b can be put as α ∈ R∗ (as then b = (α−1β)a). By multiplying the above equation with α^n we find (note x = β/α)

β^n + α·(a1·β^(n−1) + a2·α·β^(n−2) + · · · + an·α^(n−1)) = 0

And hence α divides β^n. But as α and β were relatively prime, so are α and β^n. Together this implies that α truly is a unit of R.

□

Proof of (2.56):

(ii) We first prove the irreducibility of 2 ∈ Z[√−3]. Thus let ω := i√3 and suppose 2 = (a + bω)(c + dω). Then by complex conjugation we find 2 = (a − bω)(c − dω). Multiplying these two equations we get

4 = 2 · 2 = (a2 + 3b2)(c2 + 3d2) ≥ 9(bd)2

Thus if both b ≠ 0 and d ≠ 0 this formula ends up with the contradiction 4 ≥ 9(bd)2 ≥ 9. Thus without loss of generality (interchanging a, c and b, d if necessary) we may assume d = 0. Then we have

4 = (a2 + 3b2)c2

Of course c = 0 cannot be (else 4 = 0). If |c| = 1 then c + dω = ±1, which of course is a unit in Z[√−3]. Thus suppose |c| ≥ 2, then c2 ≥ 4, such that necessarily a2 + 3b2 = 1. This of course can only be if a = ±1 and b = 0, such that now a + bω is a unit of Z[√−3]. Altogether 2 is irreducible.

• On the other hand 2 | 4 = (1 + ω)(1 − ω). Now suppose we had 2 | 1 + ω, say 2(a + bω) = 1 + ω. Then 2a = 1 and hence 2 ∈ Z∗, which is false. Likewise we see that 2 does not divide 1 − ω, and this means that 2 is not prime.
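The divisibility facts used here are finite computations in Z[√−3] and can be verified mechanically. The following Python sketch is our own illustration (representing a + bω as a pair (a, b), with ω² = −3; the helper names mul and divides are ours):

```python
# Elements a + b*omega of Z[sqrt(-3)] as pairs (a, b), omega**2 = -3.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c - 3 * b * d, a * d + b * c)

def divides(x, y):
    """Does x divide y in Z[sqrt(-3)]? Compute conj(x)*y / norm(x) over Q
    and test whether both coordinates are integers."""
    (a, b), (c, d) = x, y
    n = a * a + 3 * b * b                      # norm of x
    p, q = a * c + 3 * b * d, a * d - b * c    # conj(x) * y
    return n != 0 and p % n == 0 and q % n == 0

two, w_plus, w_minus = (2, 0), (1, 1), (1, -1)
assert mul(w_plus, w_minus) == (4, 0)          # (1 + w)(1 - w) = 4
assert divides(two, (4, 0))                    # 2 | 4 ...
assert not divides(two, w_plus)                # ... but 2 does not divide 1 + w
assert not divides(two, w_minus)               # ... nor 1 - w: 2 is not prime
```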

(iii) We first prove that p = s2 + t2 − 1 is irreducible in ℝ[s, t]. To do this we fix the graded lexicographic order on ℝ[s, t], that is ℝ[s, t] is ordered by (total) degree and by s < t. Now consider any two polynomials f, g ∈ ℝ[s, t] such that fg = p. Then

lt(f)·lt(g) = lt(fg) = lt(p) = lt(−1 + s2 + t2) = t2

By decomposition of the leading term in ℝ[s, t] we see that (up to constant factors) either lt(f) = 1, lt(f) = t or lt(f) = t2. If lt(f) = 1 then f is a unit in ℝ[s, t]; likewise if lt(f) = t2 then lt(g) = 1 and hence g is a unit in ℝ[s, t]. Thus we only have to exclude the case lt(f) = t. But in this case lt(g) = t as well, such that

f = a + bs + ct
g = u + vs + wt

for some coefficients a, b, c, u, v and w ∈ ℝ. From this we may compute fg and compare its coefficients with those of p. Doing this, an easy computation yields

(1) au = −1 (constant term)
(2) av + bu = 0 (s-term)
(3) aw + cu = 0 (t-term)
(4) bv = 1 (s2-term)
(5) bw + cv = 0 (st-term)
(6) cw = 1 (t2-term)

In particular a ≠ 0 and hence we may regard (f/a)(ag) = fg = p instead. That is we may assume a = 1, then u = −1 by (1). And thus (2) turns into b = v and (3) becomes c = w. From this and (5) we get 2bc = 0. That is b = 0 or c = 0. But b = 0 would be a contradiction to (4) and c = 0 would be a contradiction to (6). Thus lt(f) = t cannot occur and this is the irreducibility of p.

• We now prove that R := ℝ[s, t]/p is no UFD. To do this let us denote the set X := { (x, y) ∈ ℝ2 | x2 + y2 = 1 }. And thereby we define the following multiplicatively closed subset of R

U := { f + p ∈ R | ∀ (x, y) ∈ X : f(x, y) ≠ 0 }

By construction it is clear that U is truly well defined (if f − g = q ∈ p then for any (x, y) ∈ X we get f(x, y) = g(x, y) + q(x, y) = g(x, y)) and multiplicatively closed (as R is an integral domain). Let us now denote the element r := s + p ∈ R. Then we will prove that r/1 ∈ U−1R is irreducible but not prime. Thus U−1R is no UFD and hence R has not been an UFD by virtue of (2.110).

• Let us first prove that r/1 is not prime. To do this let us define the following elements a := 2(t + 1) + p and b := −2(t − 1) + p ∈ R. Then ab = −4(t2 − 1) + p = 4s2 + p = 4r2. In particular r | ab and hence r/1 | (ab)/1 = (a/1)(b/1). Now assume r/1 | a/1, that is there are some h + p ∈ R and u + p ∈ U such that a/1 = (r/1)·((h + p)/(u + p)). As ℝ[s, t] is an integral domain that is 2(t + 1)u + p = sh + p, or in other words 2(t + 1)u − sh =: q ∈ p. Now regard (0, 1) ∈ X: then 0 = q(0, 1) = 4u(0, 1) ≠ 0, a contradiction. That is r/1 ∤ a/1, and likewise we find r/1 ∤ b/1 by regarding (0, −1) ∈ X.

• So it only remains to prove that r/1 is irreducible. Thus suppose there are some f, g ∈ ℝ[s, t] and u + p, v + p ∈ U such that

(s + p)/1 = r/1 = ((f + p)/(u + p))·((g + p)/(v + p)) = (fg + p)/(uv + p)

That is suv + p = fg + p, or in other words suv − fg =: q ∈ p. As q ∈ p = p·ℝ[s, t] and p vanishes on X identically, so does q. And this means that for any (x, y) ∈ X we have the following identity

xu(x, y)v(x, y) = f(x, y)g(x, y)

In particular fg and suv have the same roots on X. And as u, v do not vanish on X these are precisely the points (0, 1) and (0, −1) ∈ X. If f does not vanish on X then f + p ∈ U and hence (f + p)/(u + p) is a unit of U−1R. Likewise if g does not vanish on X then (g + p)/(v + p) is a unit


of U−1R. Thus in order to prove the irreducibility of r/1 it suffices to check that f or g does not vanish on X.

• Thus assume this was false, that is f and g each vanish in at least one point, say f(0, 1) = 0 and g(0, −1) = 0 (else interchange f and g). We now parametrize X using the following curve

Ω : [0, 2π] → X : ω ↦ (cos(ω), sin(ω))

First suppose neither f(0, −1) = 0 nor g(0, 1) = 0, that is f and g both have precisely one root on X. Then f cannot change signs on X (else [0, 2π] → ℝ : ω ↦ f(Ω(ω)) would have to have at least two roots, because of f(Ω(0)) = f(Ω(2π)) ≠ 0). And the same is true for g. Thus (x, y) ↦ f(x, y)g(x, y) does not change sign on X. Likewise uv does not have a single root on X and hence its sign on X is constant. Hence (x, y) ↦ x·u(x, y)·v(x, y) does change sign, in contradiction to the equality of these two functions. Thus at least one of the two f or g has to have two roots on X.

• We only regard the case f(0, 1) = 0 and g(0, 1) = 0 = g(0, −1). The other case (where f has both roots) is completely analogous. The basic idea is the following: as both f and g vanish in (0, 1), this is a root of order 2 of fg. But xuv only has a root of order 1 in (0, 1). Thus let us do some basic calculus: we have Ω(π/2) = (0, 1), and in this point we get Ω′(π/2) = (−1, 0). Thus in this case we find another contradiction to suv = fg by evaluating, via the chain rule,

((suv) ∘ Ω)′(π/2) = 〈 (uv + sv·∂su + su·∂sv, sv·∂tu + su·∂tv)(0, 1) | (−1, 0) 〉 = −u(0, 1)·v(0, 1) ≠ 0

((fg) ∘ Ω)′(π/2) = 〈 (g·∂sf + f·∂sg, g·∂tf + f·∂tg)(0, 1) | (−1, 0) 〉 = 〈 (0, 0) | (−1, 0) 〉 = 0

Proof of (2.59):

(ii) First consider a > 0; then it is clear that q := b div a ∈ Z exists, as the set { q ∈ Z | aq ≤ b } = ] −∞, b/a] ∩ Z contains the maximal element q = b/a rounded down. And from the construction of r := b mod a it is clear that b = aq + r. From the definition of q it is also clear that 0 ≤ r. Now assume r ≥ a, then a(q + 1) = aq + a ≤ aq + r = b and hence q would not have been maximal, a contradiction. Thus we also have 0 ≤ r < a and hence α(r) = r < a = α(a). Thus we have established the division with remainder for a > 0. If conversely a < 0 then −a > 0, and hence we may let q := (−b) div (−a) and r := −((−b) mod (−a)). Then we have just proved −b = q(−a) − r, which yields b = qa + r and α(r) = α(−r) < α(−a) = α(a).
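The reduction of the case a < 0 to a > 0 can be written out directly. The following Python sketch is our own illustration of the argument (the helper name div_mod is ours; Python's floor division plays the role of b div a for a > 0):

```python
def div_mod(b, a):
    """Division with remainder b = q*a + r with |r| < |a|, following the
    proof's reduction of a < 0 to the case a > 0."""
    assert a != 0
    if a > 0:
        q = b // a               # maximal q with a*q <= b
        return q, b - q * a      # then 0 <= r < a
    q, r = div_mod(-b, -a)       # a < 0: divide -b by -a ...
    return q, -r                 # ... and flip the remainder's sign

assert div_mod(7, 3) == (2, 1)
assert div_mod(7, -3) == (-3, -2)    # 7 == (-3)*(-3) + (-2), |r| < |a|
```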

(iii) Consider any g ∈ E[t], 0 ≠ f ∈ E[t] and let α := lc(f). Then f̃ := α−1f ∈ E[t] is normed. Further let g̃ := α−1g. Then by (??)


there are q, r̃ ∈ E[t] such that g̃ = q·f̃ + r̃. Multiplying with α we find g = q·f + α·r̃, and deg(α·r̃) = deg(r̃) < deg(f̃) = deg(f) (as α ∈ E∗). Thus we have established the division with remainder on E[t].

(iv) It is straightforward to check that Z[√d] truly is a subring of C. Clearly 0 = 0 + 0√d ∈ Z[√d] and 1 = 1 + 0√d ∈ Z[√d]. Now consider x = a + b√d and y = f + g√d ∈ Z[√d]. Then it is clear that

x + y = (a + f) + (b + g)√d ∈ Z[√d]
xy = (af + dbg) + (ag + bf)√d ∈ Z[√d]

Thus Z[√d] is a subring of C and in particular an integral domain.

Now assume x = y; then we wish to verify (a, b) = (f, g) (the converse implication is clear). Assume we had b ≠ g, then we would be able to divide by b − g ≠ 0 and hence get

√d = (f − a)/(b − g) ∈ Q

a contradiction to the assumption √d ∉ Q. Thus we have b = g and hence (subtracting b√d = g√d) also a = f. Thus the conjugation x ↦ x̄ := a − b√d is truly well defined, and the bijectivity is clear. We also see 0̄ = 0 − 0√d = 0 and 1̄ = 1 − 0√d = 1 immediately. It remains to prove that x ↦ x̄ also is additive and multiplicative, by computing

conj(x + y) = (a + f) − (b + g)√d = x̄ + ȳ
conj(xy) = (af + dbg) − (ag + bf)√d = (a − b√d)(f − g√d) = x̄·ȳ

But from this it is clear that ν also is multiplicative: just compute ν(xy) = |xy·conj(xy)| = |x·x̄·y·ȳ| = |x·x̄| · |y·ȳ| = ν(x)·ν(y). We finally note that for any 0 ≠ x ∈ Z[√d] we get ν(x) ≠ 0, since ν(x) = |a2 − db2| = 0 is equivalent to a2 = db2. Thus if b = 0 then a = 0 and hence x = 0, as claimed. And if b ≠ 0 then we may divide by b to find (a/b)2 = d. In other words √d = |a/b| ∈ Q, a contradiction to the assumption √d ∉ Q. Thus the case b ≠ 0 cannot occur in the first place.

Next we prove that the units of Z[√d] are precisely the elements x such that ν(x) = 1. If x ∈ Z[√d]∗ is a unit, then we have 1 = ν(1) = ν(xx−1) = ν(x)·ν(x−1) and hence ν(x) ∈ Z∗. But as also ν(x) ≥ 0 this yields ν(x) = 1. Conversely consider any x = a + b√d ∈ Z[√d] such that ν(x) = 1. Then x·x̄ = a2 − db2 = ±1 and hence x−1 = ±x̄, so x is invertible.
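For a concrete instance of this norm criterion, the sketch below (our own Python illustration, helper name nu ours) checks that 1 + √2 is a unit of Z[√2], with inverse −1 + √2, i.e. minus its conjugate:

```python
# Norm nu(a + b*sqrt(d)) = |a**2 - d*b**2| on Z[sqrt(d)].
def nu(a, b, d):
    return abs(a * a - d * b * b)

# 1 + sqrt(2) has norm |1 - 2| = 1, so it must be a unit; the proof says
# its inverse is +/- the conjugate, here -(1 - sqrt(2)) = -1 + sqrt(2).
assert nu(1, 1, 2) == 1
a, b = 1, 1                    # the element  1 + sqrt(2)
c, e = -1, 1                   # its candidate inverse -1 + sqrt(2)
prod = (a * c + 2 * b * e, a * e + b * c)
assert prod == (1, 0)          # the product is 1 + 0*sqrt(2), i.e. 1
```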

Now assume that d ≤ −3; then we want to prove that 2 is an irreducible element of Z[√d] that is not prime. We begin by proving that 2 is not prime: as either d or d − 1 is even we have

2 | d(d − 1) = d2 − d = (d + √d)(d − √d)

but 2 ∤ d ± √d, because if we assume 2(a + b√d) = d ± √d then (comparing the √d-coefficient) we had 2b = ±1 ∈ Z, in contradiction to 2 ∉ Z∗. However 2 is irreducible: assume 2 = xy for some x, y ∈ Z[√d]. Then 4 = ν(2) = ν(xy) = ν(x)·ν(y). Hence we have ν(x) ∈ { 1, 2, 4 }. If ν(x) = 1 then x ∈ Z[√d]∗ is a unit. And if ν(x) = 4 then ν(y) = 1 and


hence y ∈ Z[√d]∗ is a unit. It suffices to exclude the case ν(x) = 2, thus assume ν(x) = 2. As d ≤ −3 we have 2 = ν(x) = a2 − db2. Thus in the case b ≠ 0 we obtain 2 = a2 − db2 ≥ 0 + 3 · 1 = 3, a contradiction. And in case b = 0 we get 2 = a2 for some a ∈ Z, in contradiction to the irreducibility of 2 in Z. Thus ν(x) = 2 cannot occur and this means that 2 ∈ Z[√d] is irreducible.

It remains to prove that (Z[√d], ν) even is an Euclidean domain for d ∈ { −2, −1, 2, 3 }. We consider the elements 0 ≠ x = a + b√d and y = f + g√d ∈ Z[√d]. Then (as ν(x) ≠ 0) we may define

p := (af − dbg)/(a2 − db2) and q := (ag − bf)/(a2 − db2)

That is p + q√d := (1/(a2 − db2))·y·x̄. And therefore we obtain the identity (p + q√d)·x = (1/(a2 − db2))·y·x̄·x = y. In other words we have found y/x to be p + q√d ∈ Q[√d]. Now choose r, s ∈ Z such that |p − r| ≤ 1/2 and |q − s| ≤ 1/2 (which is possible, as the intervals [r − 1/2, r + 1/2] (for r ∈ Z) cover Q) and define

u := r + s√d and v := y − ux

Then it is clear that u, v ∈ Z[√d] and y = ux + v. But further we find

x·((p − r) + (q − s)√d) = x·(y/x − u) = y − ux = v

Due to the multiplicativity of ν (which even remains true for coefficients (here p − r and q − s) in Q) we find ν(v) = ν(x)·|(p − r)2 − d(q − s)2|. But as |d| ≤ 3 we can estimate

|(p − r)2 − d(q − s)2| ≤ (p − r)2 + |d|(q − s)2 ≤ (1/2)2 + 3(1/2)2 = 1

The case |(p − r)2 − d(q − s)2| = 1 can only occur if |p − r| = |q − s| = 1/2 and d = 3. But in this case we have |(p − r)2 − d(q − s)2| = |1/4 − 3/4| = 1/2 < 1. Thus we have even proved the strict estimate

ν(v) = ν(x)·|(p − r)2 − d(q − s)2| < ν(x)

Altogether we have seen that Z[√d] is an integral domain, and that given 0 ≠ x, y ∈ Z[√d] we can find u, v ∈ Z[√d] such that y = ux + v and ν(v) < ν(x). But this is the condition to be an Euclidean domain.
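The rounding argument can be turned into an explicit division procedure: compute y/x in Q[√d] and round both coordinates to nearest integers. The sketch below is our own Python illustration of this step (helper names eu_div and nu are ours), not the book's algorithm:

```python
from fractions import Fraction

def eu_div(y, x, d):
    """Division with remainder in Z[sqrt(d)]: returns (u, v) with
    y = u*x + v and nu(v) < nu(x), by rounding y/x to a lattice point."""
    (f, g), (a, b) = y, x
    n = a * a - d * b * b                 # x * conj(x), nonzero
    p = Fraction(f * a - d * g * b, n)    # y/x = p + q*sqrt(d) in Q[sqrt(d)]
    q = Fraction(g * a - f * b, n)
    r, s = round(p), round(q)             # nearest integers: |p-r|, |q-s| <= 1/2
    u = (r, s)
    v = (f - (r * a + d * s * b), g - (r * b + s * a))   # v = y - u*x
    return u, v

def nu(x, d):
    a, b = x
    return abs(a * a - d * b * b)

y, x, d = (7, 5), (2, -1), 2              # a sample division in Z[sqrt(2)]
u, v = eu_div(y, x, d)
assert (u[0] * x[0] + d * u[1] * x[1] + v[0],
        u[0] * x[1] + u[1] * x[0] + v[1]) == y      # y == u*x + v
assert nu(v, d) < nu(x, d)                          # remainder got smaller
```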

□

Proof of (2.60):

• Without loss of generality we assume ν(a) ≤ ν(b) and let f1 := b and f2 := a. That is, if f and g are determined using the initialisation of the algorithm, then we get f1 = g and f2 = f. Suppose we have already computed f1, . . . , fk ∈ R with fk ≠ 0; then in the next step the algorithm computes qk := q and fk+1 := r ∈ R such that

fk−1 = qk·fk + fk+1

and ν(fk+1) < ν(fk). Thus by induction on k we find the following strictly decreasing sequence of natural numbers

ν(b) = ν(f1) ≥ ν(f2) > ν(f3) > . . . > ν(fk) > ν(fk+1) > . . .

As ν(b) is finite this chain cannot decrease indefinitely but has to terminate at some level n. That is f = fn+1 = 0. And therefore fn−1 = qn·g, where g := fn.

• We will now prove that for any 1 ≤ k ≤ n we have g | fk. For k = nwe trivially have g | g = fn. And for k = n − 1 we have fn−1 = qngby construction. Thus by induction we can assume k ≤ n − 1 andg | fn to g | fk. Then from fk−1 = qkfk + fk+1 it also is clear thatg | fk−1 completing the induction. In particular we have g | f2 = aand g | f1 = b.

• Now suppose we are given some d ∈ R with d | a = f2 and d | b = f1.Then we will prove that for any k ∈ 1 . . . n we get d | fn by inductionon k. Thus assume k ≥ 2 and d | f1 to d | fk. Then fromfk+1 = fk−1 + qkfk we also get d | fk+1 completing the induction.In particular we get d | fn = g. And hence g truly is the greatestcommon divisor of a and b.

• We will now prove the correctness of the refined Euclidean algorithm.The first part (the computation of g) is just a reformulation of thestandard Euclidean algorithm. And as the second part is a finite com-putation, it is clear that the algorithm terminates. Next we will firstprove that for any 2 ≤ k ∈ N we have fk = rkf2 + skf1. For k = 2this is trivial f2 = 1 · f2 + 0 · f1 = r2f2 + s2f1. And for k ≥ 3 we havecomputed qk ∈ R such that fk−1 = qkfk + fk+1 for some fk+1 ∈ R. Inparticular f3 = f1− q2f2 = (−q2)f2 + 1 · f1 = r3f2 + s3f1. Now we willuse induction on k. That is we assume fk−1 = rk−1f2 + sk−1f1 andfk = rkf2 + skf1. Then a computation immediately yields

rk+1f2 + sk+1f1 = (rk−1 − qkrk)f2 + (sk−1 − qksk)f1

= (rk−1f2 + sk−1f1)− qk(rkf2 + skf1)

= fk−1 − qkfk = fk+1

In particular for k = n we find g = fn = rnf2 + snf1. Thus let usdistinguish the two cases. If ν(a) ≤ ν(b) we have f1 = b, f2 = a,r = rn and s = sn. Thereby we get g = rnf2 + snf1 = ra + sb. Andconversely if ν(a) > ν(b) then f1 = a, f2 = b, r = sn and s = rn.Thereby we get g = rnf2 + snf1 = sb+ ra. Thus in any case we haveobtained r and s ∈ R such that g = ra+ sb.

2
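The refined algorithm proved correct above is easy to run over R = Z. The following sketch (names chosen by me, not taken from the text) maintains the sequences f_k, r_k, s_k exactly as in the proof and returns g together with the Bezout coefficients:

```python
def extended_euclid(a, b):
    """Refined Euclidean algorithm over Z, mirroring the proof above:
    returns (g, r, s) with g = gcd(a, b) and g = r*a + s*b.
    Initialisation f2 := a, f1 := b, as in the case nu(a) <= nu(b)."""
    f1, f2 = b, a
    r1, s1 = 0, 1          # f1 = 0*a + 1*b
    r2, s2 = 1, 0          # f2 = 1*a + 0*b
    while f2 != 0:
        q, rem = divmod(f1, f2)
        # the recursion f_{k+1} = f_{k-1} - q_k*f_k, applied to f, r, s alike
        f1, f2 = f2, rem
        r1, r2 = r2, r1 - q * r2
        s1, s2 = s2, s1 - q * s2
    return f1, r1, s1
```

For instance extended_euclid(240, 46) returns (2, -9, 47), and indeed (−9)·240 + 47·46 = 2.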

Proof of (2.62):


(i) We use induction on n to prove the statement. For n = 0 and any a ∈ R_0 = R∗ it is clear that a ∈ R_1, as for any b ∈ R we may choose r = 0 and thereby get a(ba^{−1}) = b, such that a | b − r. Thus for n ↦ n + 1 we are given a ∈ R_n and any b ∈ R. That is we have a | b − r for some r ∈ R_{n−1} ∪ {0}. But by induction hypothesis we know R_{n−1} ⊆ R_n and hence r ∈ R_n ∪ {0}. But this again means a ∈ R_{n+1} (as b has been arbitrary), which had to be proved.

(ii) We use induction on n again. For n = 0 we are given any a ∈ R such that ν(a) ≤ 0. If ν(a) < 0 then ν(a) = −∞ and hence a = 0. Otherwise ν(a) = 0 and we use division with remainder to find some q, r ∈ R such that 1 = qa + r and ν(r) < ν(a) = 0. But this can only be if ν(r) = −∞, which is r = 0. Hence we have 1 = qa, that is a ∈ R∗ = R_0 is a unit of R. In the induction step n ↦ n + 1 we are given some a ∈ R with ν(a) ≤ n + 1. Given b ∈ R we use division with remainder to find q, r ∈ R such that b = qa + r and ν(r) < ν(a) ≤ n + 1. Thus we have ν(r) ≤ n and by induction hypothesis this yields r ∈ R_n ∪ {0}. That is a | b − r for some r ∈ R_n ∪ {0}. And as b has been arbitrary this means a ∈ R_{n+1}, which had to be shown.

(iii) Clearly μ is well defined, as by assumption any 0 ≠ a ∈ R is contained in some R_n. We need to prove that μ truly allows division with remainder: thus consider any 0 ≠ a and b ∈ R; then we need to find q, r ∈ R such that b = qa + r and μ(r) < μ(a). If μ(a) = 0 then a ∈ R_0 = R∗ and hence we may choose q := ba^{−1} and r = 0. If μ(a) ≥ 1 then we let n := μ(a) − 1 ∈ N. As a ∈ R_{n+1} and b ∈ R we have a | b − r for some r ∈ R_n ∪ {0}. This again yields μ(r) ≤ n < n + 1 = μ(a) and b = qa + r for some q ∈ R, just what we wanted.

(iii) Next we want to prove μ(a) ≤ μ(ab). If ab = 0 then (as b ≠ 0 and R is an integral domain) we have a = 0. Thereby μ(a) = −∞ = μ(ab). Thus assume ab ≠ 0, and let n := μ(ab) ∈ N. If n = 0 then ab ∈ R_0 = R∗ and hence a ∈ R∗ = R_0 (with a^{−1} = b(ab)^{−1}). Thus we likewise have μ(a) = 0 = μ(ab). If now n ≥ 1 then for any c ∈ R there is some r ∈ R_{n−1} ∪ {0} such that ab | c − r. But as a | ab this yields a | c − r, too, and hence a ∈ R_n again. Thereby μ(a) ≤ n = μ(ab).

(iii) If a ∈ R∗ then by (iii) above we have μ(b) ≤ μ(ab) ≤ μ(a^{−1}ab) = μ(b) and hence μ(b) = μ(ab). Conversely suppose μ(b) = μ(ab). As b ≠ 0 we also have ab ≠ 0 (else μ(ab) = −∞ ≠ μ(b)). Thus we may use division with remainder to find some q, r ∈ R such that b = q(ab) + r and μ(r) < μ(b). Rearranging this equation we find (1 − qa)b = r. Now suppose we had 1 − qa ≠ 0; then by (iii) above again we had μ(b) ≤ μ((1 − qa)b) = μ(r) < μ(b), a contradiction. Thus we have qa = 1 and this means a ∈ R∗ as claimed.

(iv) If R \ {0} = ⋃_n R_n then (R, μ) is an Euclidean domain by (iii). Conversely suppose (R, ν) is an Euclidean domain for some Euclidean function ν. Now consider any 0 ≠ a ∈ R, and let n := ν(a). Then we have already seen in (ii) that a ∈ R_n and hence we have R \ {0} = ⋃_n R_n.

(v) As (R, ν) is an Euclidean domain we have R \ {0} = ⋃_n R_n by (iv). But by (iii) this implies that (R, μ) is an Euclidean domain, too. Thus consider any a ∈ R: if a = 0 then μ(a) = −∞ = ν(a) by convention. Else let n := ν(a) ∈ N. Then a ∈ R_n by (ii) and hence μ(a) ≤ n = ν(a) by construction of μ.

□
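For R = Z the sets R_n can be computed directly from their definition, at least within a finite search window. The sketch below is an illustration of mine (not part of the text): R_0 = {±1}, and a ∈ R_{n+1} iff every b ∈ Z is congruent modulo a to some element of R_n ∪ {0}, which over Z amounts to a residue-coverage test. The experiment suggests R_n = { a : 0 < |a| ≤ 2^{n+1} − 1 }, i.e. that the minimal Euclidean function on Z is μ(a) = ⌊log_2 |a|⌋.

```python
def euclidean_layers(bound, depth):
    """Compute (over Z, within |a| <= bound) the sets R_0, ..., R_depth of
    (2.62): R_0 = Z* = {+-1}, and a lies in R_{n+1} iff every residue class
    modulo |a| contains an element of R_n united with {0}."""
    layers = [{1, -1}]
    candidates = [a for a in range(-bound, bound + 1) if a != 0]
    for _ in range(depth):
        prev = layers[-1] | {0}
        nxt = set()
        for a in candidates:
            # a | b - r for some r in prev, for every b, iff the residues
            # of prev cover all of Z/|a|Z
            if {r % abs(a) for r in prev} == set(range(abs(a))):
                nxt.add(a)
        layers.append(nxt)
    return layers
```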

Proof of (2.65):
We will only prove parts (i), (ii) and (iii) here. Part (iv) requires the theory of Dedekind domains and will be postponed until then - see page (487).

(i) Clearly 0 ⊴ R is a prime ideal, as R is an integral domain (if ab ∈ 0 then ab = 0 and hence a = 0 or b = 0, such that a ∈ 0 or b ∈ 0 again). And further any maximal ideal is prime, because of (2.19). Conversely consider any non-zero prime ideal 0 ≠ 𝔭 ⊴ R. As R is a PID there is some p ∈ R such that 𝔭 = pR. By (2.48.(ii)) this means that p is prime and due to (2.48.(v)) this implies that p is irreducible (as R is an integral domain). Now consider any ideal 𝔞 such that 𝔭 ⊆ 𝔞 ⊴ R. As R is a PID there is some a ∈ R with 𝔞 = aR again. Now p ∈ pR ⊆ aR means that there is some b ∈ R such that p = ab. But p has been irreducible, that is a ∈ R∗ or b ∈ R∗. In the case a ∈ R∗ we have 𝔞 = aR = R. And in the case b ∈ R∗ we have a ≈ p and hence 𝔞 = aR = pR = 𝔭. As 𝔞 has been arbitrary this means that 𝔭 already is maximal.

(ii) As R is a PID every ideal 𝔞 of R is generated by one element, 𝔞 = aR. In particular any ideal 𝔞 is generated by finitely many elements. Thus R is noetherian by property (c) in (2.27). We will now prove property (d) of UFDs. As R is noetherian, property (1) in (2.50.(d)) is trivially satisfied by (ACC) of noetherian rings. Hence it suffices to check property (2) of (2.50.(d)). But as R is an integral domain any prime element p ∈ R already is irreducible, due to (2.48.(ii)). This leaves

p ∈ R irreducible =⇒ p ∈ R prime

Thus suppose p | ab for some p, a and b ∈ R, say pq = ab. As R is a PID and aR + pR ⊴ R is an ideal, we find some d ∈ R such that dR = aR + pR. That is there are r, s ∈ R such that d = ar + ps. And as a ∈ aR + pR = dR there is some f ∈ R with a = df. Likewise we find some g ∈ R with p = dg. But as p is irreducible we have g ∈ R∗ or d ∈ R∗. If g ∈ R∗ then we get p | a from

p(fg^{−1}) = (g^{−1}p)f = df = a

And if d ∈ R∗ we get p | b by the equalities below. Thus we have found p | a or p | b, which means that p is prime

p(d^{−1}qr + d^{−1}bs) = d^{−1}(pqr + pbs) = d^{−1}(abr + pbs) = bd^{−1}(ar + ps) = bd^{−1}d = b

(iii) If 𝔞 = 0 then 𝔞 = 0R is a principal ideal (generated by 0 ∈ R). And if 𝔞 ≠ 0 there is some 0 ≠ b ∈ 𝔞. Therefore we may choose a as given in the claim of (iii). Now consider any b ∈ 𝔞 and choose q, r ∈ R such that b = qa + r and ν(r) < ν(a). Then r = b − qa ∈ 𝔞 (as a, b ∈ 𝔞). But a has minimal value ν(a) among all those b ∈ 𝔞 with b ≠ 0. Therefore r ∈ 𝔞 and ν(r) < ν(a) implies r = 0. Thus b = qa ∈ aR, and as b has been arbitrary this means 𝔞 ⊆ aR. And as a ∈ 𝔞 we have aR ⊆ 𝔞. Together we find that 𝔞 = aR is a principal ideal. And as 𝔞 has been arbitrary this means that R is a PID.

□

Proof of (2.67):

(i) As R is an UFD by (2.65.(ii)), the length ℓ(a) ∈ N of 0 ≠ a ∈ R is well defined, due to (2.50.(b)). Therefore δ : R → N is a well-defined function and it is clear that it satisfies properties (1) and (2). Thus consider any 0 ≠ a, b ∈ R; as R is a PID we may choose some r ∈ R such that rR = aR + bR. It is clear that r ≠ 0, as 0 ≠ a ∈ rR. Now suppose b ∉ aR. Then it is clear that r ∈ aR + bR and it remains to prove δ(r) < δ(a). As a ∈ aR + bR = rR there is some s ∈ R such that a = rs. Suppose s ∈ R∗; then a ≈ r and hence aR = rR. But b ∈ aR + bR = rR = aR is a contradiction to b ∉ aR. Thus s ∉ R∗, which means ℓ(s) ≥ 1. Hence we have δ(s) ≥ 2 and as δ(r) > 0 (recall r ≠ 0) this finally is

δ(r) < 2δ(r) ≤ δ(r)δ(s) = δ(rs) = δ(a)

(ii) As 𝔞 ≠ 0 there is some 0 ≠ b ∈ 𝔞. And hence we may choose a ∈ 𝔞 as given in the claim of (ii). We now want to verify 𝔞 = aR; as a ∈ 𝔞 the inclusion aR ⊆ 𝔞 is clear. Conversely choose any 0 ≠ b ∈ 𝔞 (0 ∈ aR is clear). If we had b ∉ aR, then by property (3) there would be some r ∈ aR + bR such that δ(r) < δ(a). But as a, b ∈ 𝔞 we have aR + bR ⊆ 𝔞 and hence r ∈ 𝔞. As a has been chosen to have minimal value δ(a) this only leaves r = 0 and hence 0 ≠ a ∈ aR + bR = rR = 0, a contradiction. Thus we have b ∈ aR and hence 𝔞 ⊆ aR as well.

(iii) By construction of δ it is clear that δ : R → N and that δ satisfies (2). It remains to verify (3): thus consider any 0 ≠ a, b ∈ R; then we choose q, r ∈ R such that b = qa + r and ν(r) < ν(a). If r = 0 then b = qa ∈ aR. And if r ≠ 0 we have r = −qa + b ∈ aR + bR and δ(r) = ν(r) + 1 < ν(a) + 1 = δ(a).

□

Proof of (2.68):

(i) First suppose mR = ⋂_{a∈A} aR; in particular mR ⊆ aR for any a ∈ A. But this is a | m for any a ∈ A and hence A | m. Now consider any n ∈ R such that A | n, that is a | n and hence n ∈ aR for any a ∈ A. Therefore n ∈ ⋂_{a∈A} aR = mR and hence m | n. Thus we have proved m ∈ lcm(A). Conversely suppose m ∈ lcm(A); then A | m and hence m ∈ ⋂_{a∈A} aR again. In particular we have mR ⊆ ⋂_{a∈A} aR. Now consider any n ∈ ⋂_{a∈A} aR; then A | n again and hence m | n by assumption on m. Thus we have n ∈ mR and as n has been arbitrary this is the converse inclusion ⋂_{a∈A} aR ⊆ mR.

(ii) Let dR = ∑_{a∈A} aR; then for any a ∈ A we have aR ⊆ dR and hence d | a. Thus we found d | A, as a has been arbitrary. Now consider some c ∈ R such that c | A. That is for any a ∈ A we get c | a and hence aR ⊆ cR. But from this we get dR = ∑_{a∈A} aR ⊆ cR, which also is c | d. Together we have proved d ∈ gcd(A) again.

(iii) Now assume that R is a PID; then we will also prove the converse implication. By assumption on R there is some g ∈ R such that gR = ∑_{a∈A} aR and by (ii) this means g ∈ gcd(A). Now from (2.48.(viii)) and d, g ∈ gcd(A) we get d ≈ g and hence dR = gR = ∑_{a∈A} aR.

(iv) For n = 1 the statement is trivial for b_1 := b; thus let us assume n ≥ 2. Then we denote a := a_1 ⋯ a_n and â_i := a/a_i ∈ R. Now consider d ∈ gcd{â_1, …, â_n} and suppose we had d ∉ R∗. Then (as R is an UFD) there would be a prime element p ∈ R with p | d. Thus we had p | d | â_1 and hence (as p is prime) there would be some i ∈ 2 … n such that p | a_i. Likewise for this i we have p | d | â_i and hence p | a_j for some j ≠ i. But a_i and a_j are relatively prime by assumption, a contradiction. Thus we have d ∈ R∗, that is 1 ∈ gcd{â_1, …, â_n}. And by (iii) this means that there are some h_1, …, h_n ∈ R such that 1 = h_1 â_1 + ⋯ + h_n â_n. Now let b_i := b h_i; then the claim is immediate from

∑_{i=1}^{n} b_i/a_i = ∑_{i=1}^{n} b h_i â_i/a = (b/a) ∑_{i=1}^{n} h_i â_i = b/a

□

Proof of (2.69):

• (a) =⇒ (b): consider a, b ∈ R; by assumption there are r, s ∈ R such that d = ra + sb ∈ gcd(a, b). By construction we have d ∈ aR + bR and hence dR ⊆ aR + bR. On the other hand we have d ∈ gcd(a, b) and hence d | a and d | b. But that is aR ⊆ dR and bR ⊆ dR, such that aR + bR ⊆ dR as well. Together that is dR = aR + bR.

• (b) =⇒ (c): 𝔞 being finitely generated means that there are some a_1, …, a_n ∈ R such that 𝔞 = a_1 R + ⋯ + a_n R. We will now use induction on n to prove that 𝔞 is principal. In the case n = 0 we have 𝔞 = 0 = 0R and in the case n = 1 trivially 𝔞 = a_1 R is principal. Thus we may assume n ≥ 2. Then by induction hypothesis 𝔲 := a_1 R + ⋯ + a_{n−1} R ⊴ R is principal, that is there is some u ∈ R such that 𝔲 = uR. Now 𝔞 = a_1 R + ⋯ + a_n R = 𝔲 + a_n R = uR + a_n R. Thus by assumption (b) there is some d ∈ R such that 𝔞 = dR is principal.

• (c) =⇒ (a): consider any a, b ∈ R and let 𝔞 := aR + bR. By construction 𝔞 is finitely generated and hence there is some d ∈ R such that 𝔞 = dR. Now d ∈ aR + bR means that there are some r, s ∈ R such that d = ra + sb. And by (2.68.(ii)) dR = aR + bR yields that ra + sb = d ∈ gcd(a, b) as well.

□


Proof of (2.70):

• (a) =⇒ (b) and (c): if R is a PID then R is a noetherian UFD due to (2.65.(ii)). And it is clear that R also is a Bezout domain (e.g. by property (c) in (2.69)).

• (c) =⇒ (a): consider any ideal 𝔞 ⊴ R; as R is noetherian, 𝔞 is finitely generated, due to property (c) of noetherian rings (2.27). And due to property (c) of Bezout domains (2.69) this means that 𝔞 is principal. As a Bezout domain also is an integral domain, this means that R is a PID.

• (b) =⇒ (a): consider any ideal 𝔞 ⊴ R. If 𝔞 = 0 then 𝔞 = 0R trivially is principal. Thus we assume 𝔞 ≠ 0 and choose any 0 ≠ a ∈ 𝔞 with

ℓ(a) = min{ ℓ(b) | 0 ≠ b ∈ 𝔞 }

where ℓ(a) is the number of prime factors of a, that is ℓ(a) = k where a = p_1 ⋯ p_k for some prime elements p_i ∈ R. As a ∈ 𝔞 we have aR ⊆ 𝔞. Conversely consider any 0 ≠ b ∈ 𝔞. As R is a Bezout domain the ideal aR + bR is principal, aR + bR = dR for some d ∈ R, due to property (b) in (2.69). As a, b ∈ 𝔞 we have d ∈ aR + bR ⊆ 𝔞 and clearly d ≠ 0 (as a ≠ 0). Thus by construction of a we have ℓ(a) ≤ ℓ(d). But on the other hand we have a ∈ aR + bR = dR and hence d | a, which implies ℓ(d) ≤ ℓ(a) by (2.54.(iii)). But from d | a and ℓ(a) = ℓ(d) we get a ≈ d by choosing prime decompositions of a and d as in (2.54.(iv)). Therefore we have b ∈ aR + bR = dR = aR. This proves b ∈ aR and hence 𝔞 ⊆ aR, as b has been arbitrary (b = 0 is clear). Altogether 𝔞 = aR is principal and hence R is a PID, as 𝔞 has been arbitrary.

□

Proof of (2.73):

• (a) =⇒ (b): consider a + 𝔞 ∈ zd(R/𝔞), that is a ∈ R such that a + 𝔞 is a zero-divisor; then by assumption we have a ∈ √𝔞. That is a^k ∈ 𝔞 for some k ∈ N and hence (a + 𝔞)^k = a^k + 𝔞 = 0 + 𝔞, such that a + 𝔞 is a nilpotent of R/𝔞.

• (b) =⇒ (c): the nil-radical of R/𝔞 is trivially included in the set of zero-divisors of R/𝔞 due to (1.43.(ii)), which implies equality.

• (c) =⇒ (a): consider a ∈ R with a + 𝔞 ∈ zd(R/𝔞). Then by assumption a + 𝔞 is a nilpotent of R/𝔞, that is a^k + 𝔞 = (a + 𝔞)^k = 0 + 𝔞 for some k ∈ N. But this again means a^k ∈ 𝔞 and hence a ∈ √𝔞.

• (c) =⇒ (d): consider a, b ∈ R such that ab ∈ 𝔞 but b ∉ 𝔞. That is (a + 𝔞)(b + 𝔞) = ab + 𝔞 = 0 + 𝔞 and as b + 𝔞 ≠ 0 + 𝔞 this means that a + 𝔞 ∈ zd(R/𝔞) is a zero-divisor of R/𝔞. By assumption this means that a + 𝔞 is a nilpotent of R/𝔞 and hence a^k + 𝔞 = (a + 𝔞)^k = 0 + 𝔞 for some k ∈ N. But this again is a^k ∈ 𝔞 and hence a ∈ √𝔞.

• (d) =⇒ (b): consider a + 𝔞 ∈ zd(R/𝔞); that is there is some 0 + 𝔞 ≠ b + 𝔞 ∈ R/𝔞 such that ab + 𝔞 = (a + 𝔞)(b + 𝔞) = 0 + 𝔞. But this means ab ∈ 𝔞 and as also b ∉ 𝔞 we find a ∈ √𝔞 by assumption. That is a^k ∈ 𝔞 for some k ∈ N and hence (a + 𝔞)^k = a^k + 𝔞 = 0 + 𝔞, such that a + 𝔞 is a nilpotent of R/𝔞.

• The equivalence (d) ⇐⇒ (e) is finally true by elementary logical operations (note that we may commute the quantifiers ∀ a and ∀ b)

(d) ⇐⇒ ∀ a, b ∈ R : ¬((ab ∈ 𝔞) ∧ (b ∉ 𝔞)) ∨ (a ∈ √𝔞)
    ⇐⇒ ∀ a, b ∈ R : (ab ∉ 𝔞) ∨ (b ∈ 𝔞) ∨ (a ∈ √𝔞)
    ⇐⇒ ∀ b, a ∈ R : (ba ∉ 𝔞) ∨ (a ∈ 𝔞) ∨ (b ∈ √𝔞)
    ⇐⇒ ∀ a, b ∈ R : (ab ∉ 𝔞) ∨ (b ∈ √𝔞) ∨ (a ∈ 𝔞)
    ⇐⇒ ∀ a, b ∈ R : ¬((ab ∈ 𝔞) ∧ (b ∉ √𝔞)) ∨ (a ∈ 𝔞)
    ⇐⇒ (e)

□

Proof of (2.75):
We will only prove statements (i) to (vii) here. Statement (viii) is concerned with localisations and requires the correspondence theorem of ideals in localisations. Its proof is hence postponed until we have shown this theorem; it can be found on page (470).

(i) If 𝔭 ⊴ R is prime and we assume ab ∈ 𝔭 for some a, b ∈ R, then a ∈ 𝔭 or b ∈ 𝔭. Thus if also b ∉ 𝔭, then necessarily a ∈ 𝔭 ⊆ √𝔭. Thus 𝔭 is primary, by property (d) in (2.73).

(ii) Let 𝔭 := √𝔞; then 𝔭 is an ideal of R, due to (2.17.(ii)). Now assume 1 ∈ 𝔭; then there would be some k ∈ N such that 1 = 1^k ∈ 𝔞. That is 1 ∈ 𝔞 and hence 𝔞 = R, in contradiction to 𝔞 being a primary ideal. Now assume ab ∈ 𝔭 for some a, b ∈ R. That is there is some k ∈ N such that (ab)^k = a^k b^k ∈ 𝔞. If we have b^k ∈ √𝔞 = 𝔭 then b ∈ 𝔭, as 𝔭 is a radical ideal. Thus assume b^k ∉ √𝔞. Then by property (e) we get a^k ∈ 𝔞 and hence a ∈ 𝔭. Thus 𝔭 is a prime ideal.

(ii) It remains to prove that 𝔭 = √𝔞 is the uniquely determined minimal prime ideal containing 𝔞. Thus assume 𝔞 ⊆ 𝔮 ⊴ R is any prime ideal containing 𝔞. Then we have 𝔭 = √𝔞 ⊆ √𝔮 = 𝔮. Thus if 𝔮 ⊆ 𝔭, then 𝔭 = 𝔮, such that 𝔭 is a minimal prime ideal containing 𝔞. And if 𝔭∗ is minimal then 𝔭 ⊆ 𝔭∗ again and hence 𝔭∗ = 𝔭, due to minimality. Thus 𝔭 also is uniquely determined.

(iii) Let 𝔪 := √𝔞 and assume that 𝔪 is maximal. Now consider any a, b ∈ R such that ab ∈ 𝔞 but b ∉ 𝔞. Then we have to prove a ∈ 𝔪 (in order to satisfy property (d) of primary ideals). Thus suppose a ∉ 𝔪; then (because of the maximality of 𝔪) we had R = 𝔪 + aR. That is there would be some h ∈ R such that 1 − ah ∈ 𝔪. And by definition of 𝔪 this is (1 − ah)^k ∈ 𝔞 for some k ∈ N. Now define

f := −∑_{i=1}^{k} (−1)^i (k choose i) a^{i−1} h^i ∈ R

Then using the binomial rule we find 1 − af = (1 − ah)^k ∈ 𝔞 by a straightforward computation. That is af + 𝔞 = 1 + 𝔞 and hence 0 + 𝔞 = (f + 𝔞)(0 + 𝔞) = (f + 𝔞)(ab + 𝔞) = (fa + 𝔞)(b + 𝔞) = (1 + 𝔞)(b + 𝔞) = b + 𝔞. That is b ∈ 𝔞, in contradiction to the assumption. Thus we have a ∈ 𝔪, which we had to prove.

(iv) From the assumption we get 𝔪 = √𝔪 = √(𝔪^k) ⊆ √𝔞 ⊆ √𝔪 = 𝔪, due to (2.17.(vi)) (and as any maximal ideal is prime and hence radical). Thus we have √𝔞 = 𝔪 and hence 𝔞 is a primary ideal by (iii).

(v) Denote 𝔞 := 𝔞_1 ∩ ⋯ ∩ 𝔞_k; then we first note that according to (2.20.(v))

√𝔞 = √(⋂_{i=1}^{k} 𝔞_i) = ⋂_{i=1}^{k} √𝔞_i = ⋂_{i=1}^{k} 𝔭 = 𝔭

Now consider any a, b ∈ R such that ab ∈ 𝔞 but b ∉ 𝔞. That is there is some i ∈ 1 … k such that b ∉ 𝔞_i. But as ab ∈ 𝔞 ⊆ 𝔞_i and 𝔞_i is primary, we find a ∈ √𝔞_i = 𝔭 = √𝔞. That is 𝔞 satisfies property (d) in (2.73).

(vi) Consider any a ∈ √(𝔞 : u); that is there is some k ∈ N such that a^k u ∈ 𝔞. As u ∉ 𝔞 and 𝔞 is primary, we find a^k ∈ √𝔞 and hence a ∈ √𝔞 = 𝔭, as √𝔞 is a radical ideal. That is we have proved √(𝔞 : u) ⊆ 𝔭; but as 𝔞 ⊆ 𝔞 : u the converse inclusion is clear from 𝔭 = √𝔞 ⊆ √(𝔞 : u). Thus we have √(𝔞 : u) = 𝔭. Now consider any a, b ∈ R such that ab ∈ 𝔞 : u but b ∉ 𝔞 : u. This means abu ∈ 𝔞 but bu ∉ 𝔞. As 𝔞 is primary we again find a ∈ √𝔞 = 𝔭 = √(𝔞 : u) and hence 𝔞 : u is primary, as well.

(vii) First observe that the radical of 𝔞 is truly given to be 𝔭, as we compute

√𝔞 = { a ∈ R | ∃ k ∈ N : a^k ∈ 𝔞 = ϕ^{−1}(𝔟) }
   = { a ∈ R | ∃ k ∈ N : ϕ(a)^k = ϕ(a^k) ∈ 𝔟 }
   = { a ∈ R | ϕ(a) ∈ √𝔟 = 𝔮 }
   = ϕ^{−1}(𝔮) = 𝔭

Thus consider a, b ∈ R such that ab ∈ 𝔞 but b ∉ 𝔞; that is ϕ(a)ϕ(b) = ϕ(ab) ∈ 𝔟 but ϕ(b) ∉ 𝔟. As 𝔟 is primary this yields ϕ(a) ∈ √𝔟 = 𝔮 and hence a ∈ ϕ^{−1}(𝔮) = √𝔞. That is 𝔞 is primary again.

□

2

Proof of (2.76):

(i) Let us denote 𝔭 := pR; then 𝔭 is a prime ideal, as p is a prime element. Also we have 𝔭^n = p^n R = 𝔞 and hence √𝔞 = √(𝔭^n) = √𝔭 = 𝔭. That is 𝔞 is associated to 𝔭. We will now prove that 𝔞 is primary, by using induction on n. For n = 1 we have 𝔞 = pR = 𝔭, that is 𝔞 is prime and hence primary by (2.75.(i)). Now consider any n ≥ 2, and a, b ∈ R such that ab ∈ 𝔞 but b ∉ √𝔞 = 𝔭. That is there is some h ∈ R such that ab = p^n h, but p ∤ b. As p is prime, p | p^n h = ab and p ∤ b, we have p | a; that is there is some α ∈ R such that a = αp. As R is an integral domain we get αb = (ab)/p = (p^n h)/p = p^{n−1} h. Now from αb ∈ p^{n−1} R and b ∉ √(p^{n−1} R) = 𝔭 the induction hypothesis yields α ∈ p^{n−1} R and therefore a = αp ∈ p^n R = 𝔞; that is 𝔞 is primary.

(ii) If 𝔞 = 0 then 𝔞 is prime (as R is an integral domain) and hence primary (by (2.75.(i))). And if 𝔞 = p^n R for some prime element p ∈ R and 1 ≤ n ∈ N, then 𝔞 is primary, as we have seen in (i). Conversely consider any primary ideal 𝔞 ⊴ R. As R is a PID it is generated by a single element, 𝔞 = aR. Clearly a ∉ R∗ (as else 𝔞 = R would not be primary). If a = 0 then 𝔞 = 0 and hence there is nothing to prove. Thus assume a ≠ 0; recall that R is an UFD by (2.65) and assume there would be two non-associate prime elements p, q ∈ R such that p | a and q | a. Then we let m := ⟨a | p⟩ and n := ⟨a | q⟩ (clearly 1 ≤ m, n ∈ N by (2.54.(iv))) and thereby get p^m q^n | a. That is there is some b ∈ R such that p^m q^n b = a and p ∤ b and q ∤ b. It is clear that p^m ∉ √𝔞, as else there would be some k ∈ N such that q | a | p^{mk}. On the other hand we have q^n b ∉ 𝔞, as else p | a | q^n b. This contradicts 𝔞 being a primary ideal, and hence there is only one prime element p ∈ R (up to associateness) dividing a. Thus a = αp^m for some α ∈ R∗ and this finally means 𝔞 = aR = p^m R.

(iii) First of all 𝔪 = ⟨s, t⟩ is a maximal ideal, as R/𝔪 is a field (in fact we find R/𝔪 ≅ E : f + 𝔪 ↦ f(0, 0)). Further we have 𝔞 = ⟨s, t^2⟩ ⊆ ⟨s, t⟩ = 𝔪. It is even true that t ∉ 𝔞 and hence 𝔞 ⊂ 𝔪 (suppose t = fs + g t^2 for some f, g ∈ R; then letting s = 0 we would find t = g(0, t) t^2 and hence 1 = g(0, t) t, a contradiction to t ∉ R∗). Next we have 𝔪^2 = ⟨s^2, st, t^2⟩ ⊆ ⟨s, t^2⟩ = 𝔞. And in analogy to the above we find s ∉ 𝔪^2; altogether this means

𝔪^2 ⊂ 𝔞 ⊂ 𝔪

As 𝔪 is maximal, (2.75.(iv)) yields that 𝔞 is primary and associated to 𝔪. Now suppose there would be some k ∈ N and 𝔭 ∈ spec R such that 𝔞 = 𝔭^k. Then from 𝔪^2 ⊆ 𝔞 = 𝔭^k ⊆ 𝔪 we would get 𝔪 = √(𝔪^2) ⊆ 𝔭 = √(𝔭^k) ⊆ √𝔪 = 𝔪. That is 𝔭 = 𝔪 and hence 𝔪^2 ⊂ 𝔞 = 𝔪^k ⊂ 𝔪. If k ≤ 1 then we have 𝔪^k ⊇ 𝔪, a contradiction. And if k ≥ 2 then 𝔪^2 ⊇ 𝔪^k, a contradiction, too. Thus there is no prime ideal 𝔭 ⊴ R and k ∈ N with 𝔞 = 𝔭^k.

(iv) First of all it is clear that 𝔲 = ⟨u^2 − st⟩ ⊆ ⟨t, u⟩. And as ⟨t, u⟩ is a prime ideal of E[s, t, u] we find that 𝔭 = ⟨t, u⟩/𝔲 is a prime ideal of R, due to the correspondence theorem (1.61). Now regard (where a, b and c denote the residue classes of s, t and u in R)

𝔞 = ⟨b, c⟩^2 = ⟨b^2, bc, c^2⟩ = ⟨b^2, bc, ba⟩ = (bR)⟨a, b, c⟩

Then it is clear that √𝔞 = √(𝔭^2) = 𝔭 and ab = c^2 ∈ 𝔞. However a (a bit cumbersome) computation yields b ∉ 𝔞 and a ∉ 𝔭 = √𝔞. This means that 𝔞 is not primary.

□

Proof of (2.78):


(i) Suppose 𝔭 = 𝔞 ∩ 𝔟 for some ideals 𝔞, 𝔟 ⊴ R with 𝔞 ≠ 𝔭 and 𝔭 ≠ 𝔟. As 𝔭 = 𝔞 ∩ 𝔟 ⊆ 𝔞 this means 𝔭 ⊂ 𝔞; that is there is some a ∈ 𝔞 with a ∉ 𝔭. Likewise there is some b ∈ 𝔟 with b ∉ 𝔭. Yet ab ∈ 𝔞 ∩ 𝔟 = 𝔭, and as 𝔭 is prime this yields a ∈ 𝔭 or b ∈ 𝔭, a contradiction. Thus there are no such 𝔞, 𝔟 and this means that 𝔭 is irreducible.

(ii) Consider a, b ∈ R such that ab ∈ 𝔭 but b ∉ 𝔭. We will prove a ∈ √𝔭, which is property (d) of primary ideals. Thus let 𝔟 := 𝔭 + bR. Then we regard the following ascending chain of ideals

𝔭 ⊆ 𝔭 : a ⊆ 𝔭 : a^2 ⊆ … ⊆ 𝔭 : a^k ⊆ 𝔭 : a^{k+1} ⊆ …

As R is noetherian this chain has to stabilise. That is there is some s ∈ N such that for all i ∈ N we get 𝔭 : a^s = 𝔭 : a^{s+i}. Now let 𝔞 := 𝔭 + a^s R. Then we will prove

𝔭 = 𝔞 ∩ 𝔟

The inclusion "⊆" is clear, as 𝔭 ⊆ 𝔞 and 𝔭 ⊆ 𝔟 by construction. Thus consider any h ∈ 𝔞 ∩ 𝔟. As h ∈ 𝔟 = 𝔭 + bR there are some q ∈ 𝔭 and g ∈ R such that h = q + bg. Therefore ah = aq + abg ∈ 𝔭, as we have ab ∈ 𝔭. And as h ∈ 𝔞 = 𝔭 + a^s R as well, there are some p ∈ 𝔭 and f ∈ R such that h = p + a^s f. Hence ah = ap + a^{s+1} f, such that a^{s+1} f = ah − ap. Now recall that ah ∈ 𝔭; then a^{s+1} f = ah − ap ∈ 𝔭 and this means f ∈ 𝔭 : a^{s+1} = 𝔭 : a^s. Therefore we have a^s f ∈ 𝔭 and this finally means h = p + a^s f ∈ 𝔭. Thus we have established 𝔭 = 𝔞 ∩ 𝔟. However as b ∉ 𝔭 we have 𝔭 ⊂ 𝔟. And hence the irreducibility of 𝔭 yields 𝔭 = 𝔞 = 𝔭 + a^s R. In particular a^s ∈ 𝔭 and that is a ∈ √𝔭, which had to be shown.

(iii) Let us denote the set of all proper ideals that do not satisfy the claim (i.e. that are no finite intersection of irreducible ideals) by

A := { 𝔞 ⊴ R | 𝔞 ≠ R and there are no 𝔭_1, …, 𝔭_k ⊴ R satisfying (1) and (2) }

We have to prove A = ∅; thus assume A ≠ ∅. As R is a noetherian ring this would mean that A contains a maximal element 𝔞∗ ∈ A. As 𝔞∗ ∈ A we in particular have that 𝔞∗ is not irreducible (else 𝔞∗ would be its own irreducible decomposition). And as also 𝔞∗ ≠ R this means that there are ideals 𝔞, 𝔟 ⊴ R such that 𝔞∗ = 𝔞 ∩ 𝔟 and 𝔞∗ ⊂ 𝔞 and 𝔞∗ ⊂ 𝔟. By maximality of 𝔞∗ this means 𝔞, 𝔟 ∉ A and hence there are irreducible decompositions (𝔭_1, …, 𝔭_k) of 𝔞 and (𝔮_1, …, 𝔮_l) of 𝔟. Hence

𝔞∗ = 𝔞 ∩ 𝔟 = (⋂_{i=1}^{k} 𝔭_i) ∩ (⋂_{j=1}^{l} 𝔮_j)

This means that 𝔞∗ admits (𝔭_1, …, 𝔭_k, 𝔮_1, …, 𝔮_l) as an irreducible decomposition and hence 𝔞∗ ∉ A, a contradiction. Thus we have A = ∅; that is every proper ideal of R admits an irreducible decomposition.

□


Proof of (2.81):
By assumption (2) we have 𝔞 = ⋂_{j∈J} 𝔞_j for J := 1 … k. And hence we may choose a subset I ⊆ J of minimal cardinality with this property. Now it is clear that ≈ is an equivalence relation on I, as it belongs to the function

p : I → spec R : i ↦ √𝔞_i

Hence A = I/≈ is a well-defined partition of I (the equivalence classes α ∈ A are just the fibers of p). And for any α = [i] ∈ A we may define 𝔭_α := √𝔞_i. That is for any α ∈ A the set {𝔞_i | i ∈ α} is a finite collection of primary ideals associated to 𝔭_α. Hence by (2.75.(v)) 𝔞_α is a primary ideal associated to 𝔭_α as well. In particular (𝔞_α) (where α ∈ A) satisfies property (1) of primary decompositions. Further we have

⋂_{α∈A} 𝔞_α = ⋂_{α∈A} ⋂_{i∈α} 𝔞_i = ⋂_{i∈I} 𝔞_i = 𝔞

as the α partition I, and by construction of I. Thus also property (2) of primary decompositions is satisfied. And property (4) is clear from the construction of A precisely by this relation. It remains to verify property (3). That is, fix any β = [j] ∈ A and assume 𝔞 = ⋂_{α≠β} 𝔞_α. As j ∈ β we in particular find

𝔞 = ⋂_{i∈I} 𝔞_i ⊆ ⋂_{j≠i∈I} 𝔞_i ⊆ ⋂_{β≠α∈A} 𝔞_α = 𝔞

But this would mean 𝔞 = ⋂_{i≠j} 𝔞_i, in contradiction to the minimality of I. Altogether we have proved properties (1) to (4); that is (𝔞_α) (where α ∈ A) is a minimal primary decomposition of 𝔞.

□

Proof of (2.82):

(i) By definition #ass(𝔞_1, …, 𝔞_k) = #{𝔭_1, …, 𝔭_k} ≤ k. And if we suppose 𝔭_i = 𝔭_j, then by minimality of (𝔞_1, …, 𝔞_k) we have i = j (this is property (4)). Hence we even find #{𝔭_1, …, 𝔭_k} = k as claimed.

(ii) As 𝔞 = ⋂_i 𝔞_i and using (2.17.(vi)) and (2.20.(v)) we can easily compute

√(𝔞 : u) = √((⋂_{i=1}^{k} 𝔞_i) : u) = √(⋂_{i=1}^{k} (𝔞_i : u)) = ⋂_{i=1}^{k} √(𝔞_i : u)

If u ∈ 𝔞_i then we have 𝔞_i : u = R according to (2.17.(iv)). And if u ∉ 𝔞_i then 𝔞_i : u is primary again with √(𝔞_i : u) = √𝔞_i = 𝔭_i by virtue of (2.75.(vi)). Thus we have obtained √(𝔞_i : u) = R if u ∈ 𝔞_i, and √(𝔞_i : u) = 𝔭_i if u ∉ 𝔞_i. Thus we may ignore any i ∈ 1 … k with u ∈ 𝔞_i, as they do not contribute to the intersection. What remains is precisely the claim

√(𝔞 : u) = ⋂_{u ∉ 𝔞_i} 𝔭_i

(iv) If 𝔭 ∈ ass(𝔞) then by definition of ass(𝔞) there is some u ∈ R such that 𝔭 = √(𝔞 : u). Thus by (ii) we have the identity

𝔭 = ⋂_{u ∉ 𝔞_i} 𝔭_i

And hence by (2.11.(ii)) there is some i ∈ 1 … k satisfying (u ∉ 𝔞_i and) 𝔭_i ⊆ 𝔭. And as 𝔭 ⊆ 𝔭_i is clear, this is 𝔭 = 𝔭_i ∈ ass(𝔞_1, …, 𝔞_k). Conversely choose any j ∈ 1 … k. As (𝔞_1, …, 𝔞_k) is minimal we have 𝔞 ⊂ ⋂_{i≠j} 𝔞_i and hence there is some u_j with u_j ∈ 𝔞_i for any i ≠ j but u_j ∉ 𝔞_j. Thus for this u_j by (ii) we get

⋂_{u_j ∉ 𝔞_i} 𝔭_i = 𝔭_j = √(𝔞 : u_j) ∈ ass(𝔞)

(iii) Without loss of generality we consider 𝔭_1 ∈ ass(𝔞_1, …, 𝔞_k)∗. That is for any i ∈ 2 … k we have 𝔭_i ⊈ 𝔭_1. And this again means that there is some a ∈ 𝔭_i with a ∉ 𝔭_1. Now let U := R \ 𝔭_1; in particular a ∈ U. Then because of a ∈ 𝔭_i = √𝔞_i there is some n ∈ N such that a^n ∈ 𝔞_i. And as a^n ∈ U, too, we get 1/1 = (a^n)/(a^n) ∈ U^{−1}𝔞_i. Hence we get U^{−1}𝔞_i = U^{−1}R and this again means

R = (U^{−1}𝔞_i) ∩ R = 𝔞_i : U

for any i ∈ 2 … k. This now can be used to prove 𝔞 : U = 𝔞_1 by virtue of the following computation (see (2.105) - for convenience we even gave an elementary proof of (𝔞 ∩ 𝔟) : U = (𝔞 : U) ∩ (𝔟 : U) there)

𝔞 : U = (⋂_{i=1}^{k} 𝔞_i) : U = ⋂_{i=1}^{k} (𝔞_i : U) = 𝔞_1 : U = 𝔞_1

Hereby 𝔞_1 : U = 𝔞_1 holds true because 𝔞_1 is primary and U = R \ 𝔭_1. To be precise consider a ∈ 𝔞_1 : U; that is there is some u ∈ U such that au ∈ 𝔞_1. But by definition of U we have u ∉ 𝔭_1 = √𝔞_1, and as 𝔞_1 is primary this yields a ∈ 𝔞_1. Thus we have 𝔞_1 : U ⊆ 𝔞_1 and the converse inclusion is trivial.

(v) If 𝔭 ∈ ass(𝔞)∗ then by virtue of (iv) we have 𝔭 = 𝔭_i for some i ∈ 1 … k. And as 𝔭_i is minimal, (iii) implies 𝔞 : (R \ 𝔭_i) = 𝔞_i ∈ iso(𝔞_1, …, 𝔞_k). Conversely consider any 𝔞_i ∈ iso(𝔞_1, …, 𝔞_k). Then by definition 𝔭_i ∈ ass(𝔞_1, …, 𝔞_k)∗ and by (iii) this means 𝔞_i = 𝔞 : (R \ 𝔭_i). And by (iv) we also have 𝔭_i ∈ ass(𝔞)∗ and thereby 𝔞_i ∈ iso(𝔞).

□
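Part (ii) of this proof can be watched in action over R = Z: take 𝔞 = 36Z = 4Z ∩ 9Z, a minimal primary decomposition with associated primes 2Z and 3Z. Then √(𝔞 : u) should equal the intersection of those 𝔭_i with u ∉ 𝔞_i. A short sketch of mine (helper names ad hoc, trial division only), using that in Z one has (nZ : u) = (n/gcd(n, u))Z and rad(mZ) = (product of the distinct prime divisors of m)Z:

```python
from math import gcd

def radical(n):
    """Generator of rad(nZ): the product of the distinct prime divisors of n."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def colon(n, u):
    """Generator of the colon ideal (nZ : u) = (n / gcd(n, u))Z."""
    return n // gcd(n, u)

# 36Z = 4Z n 9Z; map each primary generator to its associated prime
components = {4: 2, 9: 3}
```

For u = 4 (so u ∈ 4Z but u ∉ 9Z) this gives √(36Z : 4) = √(9Z) = 3Z, matching the single surviving associated prime, and analogously for the other test elements.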

Proof of (2.84):


(1) By (2.78.(iii)) there are some irreducible ideals p1, . . . , pk i R suchthat a = p1 ∩ · · · ∩ pk. And by (2.78.(ii)) these irreducible idealsalready are primary. Thus (p1, . . . , pk) is a primary decomposition ofa. And by (2.81) we find that a then already has a minimal primarydecomposition.

(2) Now assume that (a1, . . . , ak) and (b1, . . . , bl) are minimal primary de-compositions of a. Then by (2.82.(iii)) we get the identity

ass(a1, . . . , ak) = ass(a) = ass(b1, . . . , bl)

(Note that by definition ass(a) and iso(a) only depend on a, not onthe decomposition). And by (2.82.(i)) this in particular implies k = l.The third identity we finally get from (2.82.(iv))

iso(a1, . . . , ak) = iso(a) = iso(b1, . . . , bl)

(3) Let us denote the minimal prime ideals over a by pi, that is we let {p1, …, pk} := {p ∈ spec R | a ⊆ p}∗ (note that there are only finitely many pi, due to (2.36.(iii))). Then by (2.20.(i)) we have

a = √a = ⋂ {p ∈ spec R | a ⊆ p}∗ = ⋂_{i=1}^k pi

And as the pi are prime they in particular are primary according to (2.78.(i)). That is (p1, …, pk) is a primary decomposition of a. And as √pi = pi ≠ pj = √pj (for i ≠ j) it even satisfies property (4) of minimal primary decompositions. Now assume that for some index j ∈ 1…k we had the equality

⋂_{i≠j} pi = a ⊆ pj

Then by (2.11.(ii)) there would be some i ≠ j such that a ⊆ pi ⊆ pj. And by minimality of pj this would mean pi = pj, a contradiction. Thus (p1, …, pk) also satisfies property (3), that is it is a minimal primary decomposition of a. In fact this even is ass(p1, …, pk) = iso(p1, …, pk), that is every associated ideal is an isolated component. Now assume that (a1, …, al) is any minimal primary decomposition of a, then by (2) we have l = k and

{p1, …, pk} = ass(p1, …, pk) = iso(p1, …, pk) = iso(a1, …, ak)

That is for any i ∈ 1…k we have ai ∈ iso(a1, …, ak) = {p1, …, pk} and this again means that for any i ∈ 1…k there is some σ(i) ∈ 1…k such that ai = pσ(i). Clearly σ is injective, because if we assume σ(i) = σ(j) then ai = pσ(i) = pσ(j) = aj in contradiction to the minimality of (a1, …, ak). But as k is finite this already means that σ is bijective, that is σ ∈ Sk. Altogether (a1, …, ak) is just a rearranged


version of (p1, . . . , pk).

2
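As a concrete illustration of the uniqueness statement just proved (my own example, not taken from the text): in Z the ideal 12Z has the minimal primary decomposition 12Z = 4Z ∩ 3Z, where 4Z is primary with radical 2Z and 3Z is prime, so ass(12Z) = {2Z, 3Z}. A minimal sketch checking the intersection on representatives:

```python
# Hypothetical sanity check: 12Z = 4Z ∩ 3Z inside Z, verified on 0..119.
# Here 4Z is primary with radical 2Z and 3Z is prime, so ass(12Z) = {2Z, 3Z}.
N = 120
twelve = {a for a in range(N) if a % 12 == 0}
four_and_three = {a for a in range(N) if a % 4 == 0} & {a for a in range(N) if a % 3 == 0}
print(twelve == four_and_three)  # True
```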

Proof of (2.86):

(i) Let (a1, …, ak) be a minimal primary decomposition of a (which exists, due to (2.84)) and pi := √ai. Then we have seen in (2.82.(iv)) that ass(a) = {p1, …, pk}. Thus from ⋂_i ai = a ⊆ q we get ai ⊆ q for some i ∈ 1…k, according to (2.11.(ii)). Therefore pi = √ai ⊆ √q = q and pi ∈ ass(a). Now let pj ⊆ q for some pj ∈ ass(a). Then we may choose some minimal pi ∈ ass(a)∗ with pi ⊆ pj (as ass(a) is finite). And hence we get ai ⊆ pi ⊆ pj ⊆ q, that is ai ⊆ q. Finally consider ai ⊆ q, then a ⊆ ai ⊆ q is clear.

(ii) We stick to the notation used for (i), that is (a1, …, ak) is a minimal primary decomposition and pi := √ai. "⊆" if a ⊆ q then by (i) there exists some ai ∈ iso(a) such that ai ⊆ q. Then a ⊆ ai ⊆ pi = √ai ⊆ √q = q. By minimality of q this yields q = pi ∈ ass(a)∗. "⊇" conversely, if pi ∈ ass(a)∗ then a ⊆ ai ⊆ pi. Thus if we consider any prime ideal p ⊴ R such that a ⊆ p ⊆ pi, then by (i) there is some pj ∈ ass(a) such that pj ⊆ p ⊆ pi. By minimality of pi in ass(a) this means pj = pi and hence p = pi. Hence pi is a minimal prime ideal containing a.

(iii) This follows immediately from (ii) and (2.20.(i)): by the propositions cited, we have

√a = ⋂ {q ∈ spec R | a ⊆ q} = ⋂ ass(a)∗

And it is clear that ⋂ ass(a) ⊆ ⋂ ass(a)∗, as ass(a)∗ ⊆ ass(a). But on the other hand, if i ∈ ass(a) there is some i∗ ∈ ass(a)∗ such that i∗ ⊆ i (as ass(a) is finite). Thus we also have ⋂ ass(a)∗ ⊆ ⋂ ass(a).

2

Proof of (2.87):
We will first prove that ζ truly is a homomorphism of rings. By definition we have ζ(0) = 0R and ζ(1) = 1R. If now k ∈ N then ζ(−k) = k(−1R) = −(k1R) = −ζ(k) by general distributivity. Hence we have ζ(−k) = −ζ(k) for any k ∈ Z (because if k < 0 then −k ∈ N and hence −ζ(k) = −ζ(−(−k)) = −(−ζ(−k)) = ζ(−k)). This will be used to prove the additivity of ζ. First of all it is clear that for i, j ∈ N we get

ζ(i+ j) = (i+ j)1R = (i1R) + (j1R) = ζ(i) + ζ(j)

And likewise ζ((−i) + (−j)) = ζ(−(i + j)) = −ζ(i + j) = −(ζ(i) + ζ(j)) = (−ζ(i)) + (−ζ(j)) = ζ(−i) + ζ(−j). Now assume i ≥ j, then by general associativity it is clear that ζ(i + (−j)) = ζ(i − j) = (i − j)1R = (i1R) − (j1R) = ζ(i) − ζ(j) = ζ(i) + ζ(−j). And if i ≤ j then ζ(i + (−j)) = ζ(−(j − i)) = −ζ(j − i) = −(ζ(j) + ζ(−i)) = ζ(−j) + ζ(i) = ζ(i) + ζ(−j). Thus in any case we have ζ(i + (−j)) = ζ(i) + ζ(−j) and likewise we see ζ((−i) + j) = ζ(−i) + ζ(j). Thus we have finally arrived at ζ(i + j) = ζ(i) + ζ(j) for any i, j ∈ Z. For the multiplicativity things are considerably simpler: let i, j ∈ N again, then by the general rule of distributivity we get ζ(ij) = (ij)1R = (i1R)(j1R) = ζ(i)ζ(j).


And as also ζ(−k) = −ζ(k) this clearly yields ζ(ij) = ζ(i)ζ(j) for any i, j ∈ Z. Thus ζ is a homomorphism of rings. And if ϕ : Z → R is any homomorphism, then ϕ(1) = 1R. By induction on k ∈ N we see ϕ(k) = ϕ(1 + · · · + 1) = ϕ(1) + · · · + ϕ(1) = 1R + · · · + 1R = k1R = ζ(k). And of course ϕ(−k) = −ϕ(k) = −ζ(k) = ζ(−k), such that ϕ = ζ. This also establishes the uniqueness of ζ.

Thus im(ζ) is a subring of R by virtue of (1.71.(v)). Let us denote the intersection of all subrings of R by P - note that this is a subring again, due to (1.47). From the construction of P it is clear that P ⊆ im(ζ) ≤r R. But as 1R ∈ P we hence have ζ(k) = k1R ∈ P for any k ∈ N. Likewise −1R ∈ P and hence ζ(−k) = k(−1R) ∈ P. This is the converse inclusion im(ζ) ⊆ P and hence im(ζ) = P.

Now kn(ζ) ⊴ Z is an ideal in Z due to (1.71.(v)) again. But as (Z, α) is a Euclidean domain, Z is a PID due to (2.65.(iii)). Hence there is some n ∈ Z such that kn(ζ) = nZ. Thereby n is determined up to associateness. And as Z∗ = {1, −1} this means n is uniquely determined up to sign ±n. Thus by fixing n ≥ 0 it is uniquely determined. Now

kn(ζ) = {k ∈ Z | ζ(k) = 0R} = {±k | k ∈ N, ζ(k) = 0R} = {±k | k ∈ N, k1R = 0R}

Thus if n = 0 then kn(ζ) = {0} and hence k1R = 0R implies k ∈ kn(ζ) = {0}, which is k = 0. And if n ≠ 0 then kn(ζ) ≠ {0}. And hence we have seen in (2.65.(iii)) that α(n) = |n| = n is minimal in kn(ζ). And due to the explicit realisation of the kernel above this is n = min{1 ≤ k ∈ N | k1R = 0}.

2
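For the ring R = Z/mZ the quantity in (2.87) is easy to compute directly. A small sketch (my own, not from the text): the number n = min{1 ≤ k ∈ N | k·1R = 0R} generating kn(ζ) = nZ is just m itself.

```python
# Hedged sketch: in R = Z/mZ the characteristic n = min{ 1 <= k | k * 1 = 0 }
# of (2.87) equals m, since k * 1 = 0 in Z/mZ exactly when m divides k.
def characteristic(m):
    """Smallest k >= 1 with k * 1 == 0 in Z/mZ."""
    k = 1
    while k % m != 0:
        k += 1
    return k

print(characteristic(12), characteristic(7))  # 12 7
```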

Proof of (2.89):

(i) As R is an integral domain, so are its subrings P ≤r R. In particular im(ζ) ≤r R is an integral domain. But by definition and (1.77.(ii)) we've got the isomorphy (where n := char R)

Z/nZ = Z/kn(ζ) ≅r im(ζ) = prr R ≤r R

Hence Z/nZ is an integral domain, too, and by (2.9) this implies that nZ = Z or nZ is a prime ideal in Z. If we had nZ = Z, then n = 1, which would mean 1R = 0R in contradiction to R ≠ 0. This only leaves that nZ is a prime ideal. But this again means that n = 0 or that n is prime, because of (2.48.(ii)).

(ii) Suppose n := char R = 0 and kR = 0R, then k ∈ kn(ζ) = nZ = {0}, which is k = 0. Conversely suppose n ∈ nZ = kn(ζ), that is nR = ζ(n) = 0R, and by assumption this implies n = 0. We now prove the second equivalency. That is we assume n = 0 and want to show ζ : Z ≅r prr R. But as in (i) we find the following isomorphies

Z ≅r Z/0Z = Z/kn(ζ) ≅r im(ζ) = prr R

where k ↦ k + 0Z = k + kn(ζ) ↦ ζ(k) - which precisely is ζ again. Conversely suppose Φ : Z ≅r prr R for some isomorphism Φ. If we


had n ≠ 0, then Zn = Z/nZ would be finite. But as we have the isomorphy Z ≅r prr R ≅r Zn once more this cannot be (as we thereby find the contradiction ∞ = #Z = #Zn = n < ∞).

(iii) If n := char R ≠ 0, then we have already proved the minimality of n, that is n = min{1 ≤ k ∈ N | kR = 0R}, in (2.87). Conversely suppose n is this minimum and denote c := char R. As cR = 0R and n is minimal with this property we have n ≤ c. On the other hand we have ζ(n) = nR = 0R and hence n ∈ kn(ζ) = cZ. That is c | n and hence c ≤ n, altogether n = c = char R. And n ≠ 0 is trivial (as n is contained in a subset of {k | 1 ≤ k ∈ N}). The second equivalency results from the following isomorphy again (refer to (1.77.(ii)))

Zn = Z/nZ = Z/kn(ζ) ≅r im(ζ) = prr R

where k + nZ = k + kn(ζ) ↦ ζ(k). Thus if char R = n ≠ 0 we are done already. Conversely suppose there was some isomorphism Φ such that Φ : Zn ≅r prr R. Then we had (where c := char R again)

Zc = Z/kn(ζ) ≅r im(ζ) = prr R ≅r Zn

In particular we find c = #Zc = #Zn = n (as n ≠ 0 and hence c ≠ 0). That is we have found n = c = char R, and n ≠ 0 has been assumed.

(iv) If n := char R = 0, then by (ii) we have ζ : Z ≅r prr R. And as prf R is the quotient field of prr R this implies Q ≅r prf R under the isomorphy a/b ↦ ζ(a)ζ(b)⁻¹. Likewise if n := char R ≠ 0 then by (iii) we have Zn ≅r prr R. As R is a field it is a non-zero integral domain. Thus by (i) n is prime. Hence nZ is a non-zero prime ideal and hence maximal (due to (2.65.(i))), such that Zn is a field. As prf R is the quotient field of (the field) prr R this implies Zn ≅r prr R = prf R. Conversely if Q ≅r prf R, then char R = 0, as else Zn ≅r prf R ≅r Q (where n = char R ≠ 0) by what we have just proved. And if Φ : Zn ≅r prf R for some isomorphism Φ, then we have nR = n1R = nΦ(1 + nZ) = Φ(n + nZ) = Φ(0 + nZ) = 0R. Thus if c := char R, then n ∈ kn(ζ) = cZ and hence c | n. On the other hand we have 0R = cR = c1R = cΦ(1 + nZ) = Φ(c + nZ). From the injectivity of Φ we get c + nZ = 0 + nZ and hence n | c. As both n and c are positive this only leaves n = c.

(v) First suppose we had c := char R = 0, then ζ : Z → R would be injective (as kn(ζ) = cZ = {0}), which contradicts R being finite. Thus we have c ≠ 0. Now let P := prr R, then by (iii) we have P ≅r Zc and hence #P = #Zc = c. But - being a subring - P ≤g R is a subgroup (under the addition + on R). Thus by the theorem of Lagrange (1.7) we have c = #P | #R.

2

Proof of (2.90):


(i) Consider a unit a ∈ R∗ of R and suppose ab = 0, then b = a⁻¹0 = 0 and hence a is not a zero-divisor of R. Conversely suppose that a ∉ zd R is a non-zero-divisor of R. Then the mapping

R → R : b ↦ ab

is injective (as ab = ac implies a(b − c) = 0 and as a ∉ zd R this yields b − c = 0 such that b = c). But as R is finite any injective map on R also is surjective (and vice versa). Hence there is some b ∈ R such that ab = 1. But this means that a ∈ R∗ is a unit of R.

(ii) If f = pa then we easily have (f + p^nS)(p^{n−1} + p^nS) = p^na + p^nS = 0 + p^nS. And as S is an integral domain we have p^n ∤ p^{n−1}, or in other words p^{n−1} + p^nS ≠ 0 + p^nS. And hence f + p^nS is a zero-divisor of S/p^nS. Conversely suppose (f + p^nS)(g + p^nS) = 0 + p^nS for some g + p^nS ≠ 0 + p^nS. This means p^n ∤ g and hence we obtain a well-defined l ∈ 1…n−1 by letting

l := g[p] := max{k ∈ N | p^k | g}

If we now write g = p^l b then by assumption we have p^n | fg = fp^l b and hence p | p^{n−l} | fb. As p is prime we get p | f or p | b. But as l has been chosen maximally, p | b is absurd, which only leaves p | f.

(iii) As R ≠ 0 (because of p^nS ⊆ pS ≠ S) we have already seen the inclusions nil R ⊆ zd R ⊆ R \ R∗ in (1.43.(ii)). And by (ii) we already know zd R = (p + p^nS)R. And if pa + p^nS ∈ zd R then

(pa + p^nS)^n = p^n a^n + p^nS = 0 + p^nS

which also proves zd R ⊆ nil(R). Thus it only remains to prove the inclusion R \ R∗ ⊆ (p + p^nS)R. Thus consider x = f + p^nS ∉ R∗, in particular we have f ∉ S∗ (as else x⁻¹ = f⁻¹ + p^nS). Thus f has a true (i.e. r ≥ 1) prime factorisation

f = αq where q := ∏_{i=1}^r qi

with α ∈ S∗ and qi ∈ S prime. Now suppose that p is not associated to any of the qi (formally that is ¬(p ≈ qi) for any i ∈ 1…r), then we had gcd(p^n, q) = 1. The lemma of Bezout (2.68.(iii)) then implies

p^nS + qS = gcd(p^n, q)S = S

Therefore there are a and b ∈ S such that ap^n + bq = 1, or in other words bq = 1 − ap^n. Now let g := ap^n + α⁻¹b, then it is clear that

fg = αq(ap^n + α⁻¹b) = (αaq − a)p^n + 1

And hence (f + p^nS)(g + p^nS) = 1 + p^nS, which implies x⁻¹ = g + p^nS and in particular x ∈ R∗, a contradiction. Hence there has to be some i ∈ 1…r such that p ≈ qi. And therefore we find p ≈ qi | f such


that p | f, or in other words again f ∈ (p + p^nS)R. Thus we got

nil R = zd R = R \ R∗ = (p + p^nS)R

Now consider any prime ideal p ⊴ R, then it is clear that nil R ⊆ p ⊆ R \ R∗. But by the equality nil R = R \ R∗ this means p = nil R. Therefore nil R is the one and only prime ideal of R. But nil R already is maximal - if nil R ⊂ a ⊴ R, then there is some α ∈ a ∩ R∗ and hence a = R. Thus we have also proved the second equality

spec R = smax R = {nil R}

(iv) We will prove the claim in several steps: (1) given any commutative ring R and any ideal a ⊴ R let us first denote the following set

Xn(R, a) := {p ∈ spec R | a ⊆ p, #R/p ≤ n}

Then it is clear that a ⊆ b =⇒ Xn(R, b) ⊆ Xn(R, a) (because if we are given p ∈ Xn(R, b), then a ⊆ b ⊆ p and hence a ⊆ p, such that p ∈ Xn(R, a)). Further Xn(R, ab) ⊆ Xn(R, a) ∪ Xn(R, b) (because if we are given p ∈ Xn(R, ab) then ab ⊆ p and as p is prime this implies a ⊆ p or b ⊆ p, such that p ∈ Xn(R, a) or p ∈ Xn(R, b)). Now suppose a ⊆ b ⊴ R, then we will prove

#Xn(R, b) = #Xn(R/a, b/a)

This is clear once we prove that p ↦ p/a is a bijection from Xn(R, b) to Xn(R/a, b/a). But by the correspondence theorem (1.61) (and the remarks following it) an ideal of R/a is prime if and only if it is of the form p/a for some prime p ⊴ R with a ⊆ p. And due to the correspondence theorem we also have b ⊆ p if and only if b/a ⊆ p/a. Finally we've got the third isomorphism theorem (1.77.(iv)) which provides the isomorphy R/p ≅r (R/a)/(p/a). In particular we have #R/p = #(R/a)/(p/a). Thus p ↦ p/a is well-defined and surjective. But the injectivity is clear (by the correspondence theorem) and hence it is bijective. In a second step (2) let us now denote the set

A := {a ⊴ R | #Xn(R, a) = ∞}

Suppose A ≠ ∅ was non-empty, then (as R is noetherian) there would be a maximal element a∗ ∈ A∗. We now claim that this maximal element a∗ is prime. It is clear that a∗ ≠ R, as R/R = 0 has not a single prime ideal, in particular #Xn(R, R) = 0. Thus suppose there would be some a, b ∈ R such that ab ∈ a∗ but a, b ∉ a∗. Then we let a := a∗ + aR and b := a∗ + bR and thereby get a∗ ⊂ a, b and ab ⊆ a∗. Thus by the inclusions in (1) we would find

Xn(R, a∗) ⊆ Xn(R, ab) ⊆ Xn(R, a) ∪ Xn(R, b)

But as a∗ is maximal in A we have a, b ∉ A. That is both Xn(R, a) and Xn(R, b) are finite. Thus by the above inclusion Xn(R, a∗) is


finite, too, in contradiction to a∗ ∈ A - thus a∗ is prime. For the third step (3) let Q := R/a∗, then by (2) Q is an integral domain. And if 0 ≠ u ⊴ Q is a non-zero ideal, then by the correspondence theorem there is some ideal b ⊴ R with a∗ ⊂ b such that u = b/a∗. But as a∗ has been maximal, the equality in (1) yields

#Xn(Q, u) = #Xn(R/a∗, b/a∗) = #Xn(R, b) < ∞

(4) The same reasoning yields #Xn(Q, 0) = #Xn(R, a∗) = ∞, such that Q is infinite (if it was finite, then so was its spectrum and hence the subset Xn(Q, 0) was finite as well). Thus we may choose pairwise distinct elements f1, …, fn+1 ∈ Q and let

f := ∏_{i≠j} (fi − fj) ∈ Q

As fi − fj ≠ 0 for any i ≠ j and Q is an integral domain we have f ≠ 0 as well. Now consider any prime ideal q ⊴ Q; if we have f ∉ q then (as q is an ideal) for any i ≠ j ∈ 1…n+1 we have fi − fj ∉ q. Thus for any i ≠ j ∈ 1…n+1 we have fi + q ≠ fj + q. That is Q/q contains (at least) n + 1 distinct elements and hence q ∉ Xn(Q, 0). Thus for any q ∈ Xn(Q, 0) we necessarily have f ∈ q and hence

Xn(Q, 0) = Xn(Q, fQ)

But as f ≠ 0 we have fQ ≠ 0, such that (3) yields #Xn(Q, fQ) < ∞. By the equalities above this means ∞ = #Xn(R, a∗) = #Xn(Q, 0) = #Xn(Q, fQ) < ∞, a contradiction. This means that our assumption A ≠ ∅ has to be false. In other words A = ∅ and in particular 0 ∉ A. That is #Xn(R, 0) < ∞, which is a trivial reformulation of the claim.

2
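The identities of (2.90) can be watched concretely in the smallest interesting case. A worked check (my own, with S = Z, p = 2, n = 3): in R = Z/8Z the nilpotents, the zero-divisors, and the non-units all coincide with the ideal generated by 2, and by (2.90.(i)) the units are exactly the non-zero-divisors.

```python
# Hedged brute-force check in R = Z/8Z (S = Z, p = 2, n = 3):
# nil R = zd R = R \ R* = (2 + 8Z)R, as asserted in (2.90.(iii)).
m = 8
R = range(m)
units = {a for a in R if any((a * b) % m == 1 for b in R)}
nonunits = set(R) - units
zerodivs = {a for a in R if any((a * b) % m == 0 for b in R if b != 0)}
nilpotents = {a for a in R if any(pow(a, k, m) == 0 for k in range(1, 4))}
ideal_p = {(2 * a) % m for a in R}
print(nilpotents == zerodivs == nonunits == ideal_p)  # True
```

Since R is finite, `units` is also the complement of `zerodivs`, which is exactly (2.90.(i)).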

Proof of (2.92):
The implications (a) =⇒ (b) and (a) =⇒ (c) are trivial. Thus it only remains to check the converse implications. The proof of (c) =⇒ (a) is elementary and has been given in (2.90.(i)). Yet (b) =⇒ (a) requires knowledge of group actions (which will be presented in book 2, or any other standard text on algebra) and cyclotomic fields (which also will be presented in book 2, or any standard text on Galois theory). Nevertheless we want to give a proof of (b) =⇒ (a), and the reader is asked to refer to the sources given for the statements not contained in this text:

Let us regard the multiplicative group F∗ = F \ {0} (this truly is a group, as F is a skew-field) of F. And let us denote the center of the ring F by

C := {a ∈ F | ∀x ∈ F : ax = xa}

It is obvious that C ≤r F is a commutative subring of F and hence it even is a field. Thus (F, +) is a C-vector space under its own multiplication. And as F is finite it clearly has to be finite-dimensional, n := dimC F < ∞. If we


denote q := #C then this implies

#F∗ = #F − 1 = #C^n − 1 = q^n − 1

For any x ∈ F∗ let us now denote the conjugacy class, the centralizer and the centralizer appended by 0, using the notations

con(x) := {yxy⁻¹ | y ∈ F∗}
cen(x) := {y ∈ F∗ | xy = yx}
C(x) := {y ∈ F | xy = yx}

It is well-known from group theory that the conjugacy classes con(x) form a partition of F∗ and that con(a) = {a} for any a ∈ C. Thus let us denote by X ⊆ F \ C a representing system of the conjugacy classes, that is X ←→ {con(x) | x ∈ F \ C} under x ↦ con(x). As the conjugacy classes form a partition of F∗ we find the equality

q^n − 1 = #F∗ = ∑_{0≠a∈C} #con(a) + ∑_{x∈X} #con(x)
        = #C∗ + ∑_{x∈X} #con(x)
        = (q − 1) + ∑_{x∈X} #con(x)

It is clear that (C(x), +) is a subgroup of the additive group (F, +). But it even is a C-subspace of F; we denote its (finite) dimension by n(x) := dimC C(x), then we get

#cen(x) = #C(x) − 1 = #C^{n(x)} − 1 = q^{n(x)} − 1

We now apply the orbit formula #F∗ = #con(x) · #cen(x). Yet as #con(x) is an integer we find the divisibility #cen(x) | #F∗. Using division with remainder we obtain

(q^n − 1) : (q^{n(x)} − 1) = q^{n−n(x)} + q^{n−2n(x)} + · · · + q^{n−r(x)n(x)}

As this division leaves no remainder we necessarily find n = r(x)n(x), i.e. n(x) divides n. On the other hand the orbit formula yields

#con(x) = (q^n − 1) / (q^{n(x)} − 1)

Substituting this into our partition formula for the cardinality of F∗ we find

q^n − 1 = (q − 1) + ∑_{x∈X} (q^n − 1) / (q^{n(x)} − 1)

This is the point where the cyclotomic polynomials Φm ∈ Z[t] come into play. Let m ∈ N and ωm := exp(2πi/m) ∈ C; further denote k ⊥ m :⇐⇒ k ∈ 1…m and gcd(k, m) = ±1. Then the m-th cyclotomic polynomial is


defined to be

Φm := ∏_{k⊥m} (t − ωm^k)

In book 2 it will be shown that Φm ∈ Z[t] in fact has integer coefficients and (if we denote d | m :⇐⇒ d ∈ 1…m and d | m) satisfies

∏_{d|m} Φd = t^m − 1

We now assume F ≠ C (i.e. F is not commutative) and regard any x ∈ X. It is clear that we find Φn(q) | q^n − 1. But as x ∉ C we get #con(x) ≠ 1 and hence n ≠ n(x); on the other hand we have seen n(x) | n and hence

(t^{n(x)} − 1)Φn = (∏_{d|n(x)} Φd) Φn  |  ∏_{d|n} Φd = t^n − 1

Thus we get Φn(q) | (q^n − 1)/(q^{n(x)} − 1), too, and substituting this and Φn(q) | q^n − 1 into the equation of the partition we obtain

Φn(q)  |  (q^n − 1) − ∑_{x∈X} (q^n − 1)/(q^{n(x)} − 1) = q − 1

In particular we get Φn(q) ≤ q − 1. But this clearly cannot be - as q ≥ 2 (at least 0 and 1 ∈ C) we see by the triangle inequality that |q − 1| < |q − ωn^k| for any k ∈ 1…n−1. Hence we arrived at a contradiction (to C ≠ F)

q − 1 ≥ |Φn(q)| = ∏_{k⊥n} |q − ωn^k| > ∏_{k⊥n} |q − 1| ≥ q − 1

2
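The two facts about cyclotomic polynomials used above can be tested numerically. A sketch (my own, not from the book): the integer values Φm(q), computed recursively from the very identity ∏_{d|m} Φd = t^m − 1, multiply back to q^m − 1, and Φn(q) > q − 1 for all n ≥ 2, which is exactly the inequality closing the contradiction.

```python
# Hedged numeric sketch of the identity prod_{d|m} Phi_d = t^m - 1 and of the
# final inequality Phi_n(q) > q - 1 (n >= 2, q >= 2) of Wedderburn's theorem.
from functools import lru_cache

@lru_cache(maxsize=None)
def phi(m, q):
    """Integer value Phi_m(q), via Phi_m = (t^m - 1) / prod_{d|m, d<m} Phi_d."""
    divisor_prod = 1
    for d in range(1, m):
        if m % d == 0:
            divisor_prod *= phi(d, q)
    return (q**m - 1) // divisor_prod

q = 2
for m in range(1, 13):
    total = 1
    for d in range(1, m + 1):
        if m % d == 0:
            total *= phi(d, q)
    assert total == q**m - 1          # prod_{d|m} Phi_d(q) = q^m - 1
assert all(phi(n, q) > q - 1 for n in range(2, 13))  # e.g. Phi_6(2) = 3 > 1
```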

Proof of (2.94):

(i) R∗ and nzd R are multiplicatively closed, due to (1.43). And if conversely uv ∈ R∗, then there is some a ∈ R such that 1 = a(uv) = u(av). Hence we also have u ∈ R∗. Likewise, if uv ∈ nzd R and a ∈ R then au = 0 implies auv = 0 and hence a = 0, such that u ∈ nzd R.

(iii) 1 = 1 · 1 ∈ UV is clear and if u, u′ ∈ U and v, v′ ∈ V , then for any uvand u′v′ ∈ UV we get (uv)(u′v′) = (uu′)(vv′) ∈ UV again.

(iv) Immediate: if a and b ∈ a then (1+a)(1+ b) = 1+(a+ b+ab) ∈ 1+a.

(v) Clearly 1 ∈ ϕ(U), as 1 ∈ U and ϕ(1) = 1. And if p, q ∈ ϕ(U) then there are u, v ∈ U such that p = ϕ(u) and q = ϕ(v). Hence pq = ϕ(u)ϕ(v) = ϕ(uv) ∈ ϕ(U) since uv ∈ U. Likewise 1 ∈ ϕ⁻¹(V) is clear, as ϕ(1) = 1 ∈ V. And if u, v ∈ ϕ⁻¹(V) then ϕ(u), ϕ(v) ∈ V. Thereby we get uv ∈ ϕ⁻¹(V) from ϕ(uv) = ϕ(u)ϕ(v) ∈ V. So let us finally suppose V is saturated and uv ∈ ϕ⁻¹(V). Then ϕ(uv) = ϕ(u)ϕ(v) ∈ V implies ϕ(u) ∈ V and hence u ∈ ϕ⁻¹(V), as well.

(vi) As U ≠ ∅ there is some U ∈ U, and 1 ∈ U, hence 1 ∈ ⋃U. And if now uv ∈ ⋃U then there are some U, V ∈ U such that u ∈ U and v ∈ V.


As U is a chain we may assume U ⊆ V and hence u ∈ V. This yields uv ∈ V and hence uv ∈ ⋃U.

(vii) Since 1 ∈ U for any U ∈ U we also have 1 ∈ ⋂U. And if u, v ∈ ⋂U, then u, v ∈ U for any U ∈ U. Thereby uv ∈ U, as U is multiplicatively closed, and hence uv ∈ ⋂U. Likewise if any U ∈ U is saturated, then uv ∈ ⋂U implies uv ∈ U and hence u ∈ U for any U ∈ U. And the latter translates into u ∈ ⋂U again.

(viii) Clearly 1 = u^0 ∈ U and if u^i and u^j ∈ U then also u^i u^j = u^{i+j} ∈ U. That is U is a multiplicatively closed set with u ∈ U. And if conversely V ⊆ R is any multiplicatively closed set with u ∈ V, then by induction on k we also find u^k ∈ V and hence U ⊆ V. Together this proves that U is the smallest multiplicatively closed set containing u.

(ix) Let us define Ū to be the set given, then we need to show that Ū is the intersection of all saturated multiplicatively closed subsets V ⊆ R containing U. "⊆": Thus consider any saturated multiplicatively closed V ⊆ R with U ⊆ V. If now a ∈ Ū then there are b ∈ R and u, v ∈ U such that uab = uv ∈ U ⊆ V. In particular a(ub) = uab ∈ V, and as V is saturated this implies a ∈ V. Hence Ū ⊆ V is contained in any such V. "⊇": If a ∈ U then 1a1 = 1a and hence a ∈ Ū. Hence we have U ⊆ Ū, so it remains to prove that Ū is saturated multiplicatively closed. Thus consider a and a′ ∈ Ū, that is uab = uv and u′a′b′ = u′v′ for sufficient b, b′ ∈ R and u, v, u′, v′ ∈ U. Then (uu′)aa′(bb′) = (uu′)(vv′) and hence aa′ ∈ Ū, as uu′, vv′ ∈ U. Conversely suppose aa′ ∈ Ū, that is u(aa′)b = uv for sufficient b ∈ R, u, v ∈ U. Then ua(a′b) = u(aa′)b = uv and hence a ∈ Ū.

2

Proof of (2.95):
We first prove (b) =⇒ (a): consider any prime ideal p ⊴ R, then 1 ∈ R \ p, as 1 ∈ p would imply p = R, which is disallowed. And as p is both prime and an ideal we get the following equivalence for all u, v ∈ R

uv ∈ p ⇐⇒ u ∈ p or v ∈ p

Hence if we go to complements (i.e. if we negate this equivalency) then we find that R \ p is a saturated multiplicative set, since

uv ∈ R \ p ⇐⇒ u ∈ R \ p or v ∈ R \ p

And from this we find that R \ ⋃P is a saturated multiplicative set, as it is the intersection of such sets R \ p for p ∈ P, formally

R \ ⋃P = R \ ⋃_{p∈P} p = ⋂_{p∈P} (R \ p)

Next we prove (a) =⇒ (b): fix a ∈ R \ U, then also aR ⊆ R \ U: consider b ∈ R, then ab ∈ U - because of the saturation of U - would imply a ∈ U, in contradiction to a ∉ U. And as U is multiplicatively closed we may choose


an ideal p(a) maximal among the ideals b with aR ⊆ b ⊆ R \ U. And this p(a) is prime (for all this refer to (2.14.(iv))). Thus we get

R \ U ⊆ ⋃_{a∉U} aR ⊆ ⋃_{a∉U} p(a) ⊆ R \ U

=⇒ U = R \ ⋃ {p(a) | a ∉ U}

2
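The direction (b) =⇒ (a) is easy to watch in Z. A quick check (my own, restricted to positive integers up to N): the set U = Z \ (2Z ∪ 3Z), the complement of a union of prime ideals, is saturated multiplicatively closed, i.e. ab ∈ U exactly when both a ∈ U and b ∈ U.

```python
# Hedged check on 1..N-1: U = Z \ (2Z ∪ 3Z) satisfies
# a*b in U  <=>  a in U and b in U, the saturation property of (2.95).
N = 60

def in_U(k):
    return k % 2 != 0 and k % 3 != 0

ok = all((in_U(a * b)) == (in_U(a) and in_U(b))
         for a in range(1, N) for b in range(1, N))
print(ok)  # True
```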

Proof of (2.96):

• We will first prove that ∼ is an equivalence relation: the reflexivity is due to 1 ∈ U, as u1a = u1a implies (u, a) ∼ (u, a). For the symmetry suppose (u, a) ∼ (v, b), i.e. there is some w ∈ U such that vwa = uwb. As = is symmetric we get uwb = vwa and this is (v, b) ∼ (u, a) again. To finally prove the transitivity we regard (u, a) ∼ (v, b) and (v, b) ∼ (w, c). That is there are p, q ∈ U such that pva = pub and qwb = qvc. Since p, q and v ∈ U we also have pqv ∈ U. Now compute (pqv)wa = qw(pva) = qw(pub) = pu(qwb) = pu(qvc) = (pqv)uc; this yields that truly (u, a) ∼ (w, c) are equivalent, too.

• Next we need to check that the operations + and · are well-defined. Suppose a/u = a′/u′ and b/v = b′/v′, that is there are p and q ∈ U such that pu′a = pua′ and qv′b = qvb′. Then we let w := pq ∈ U and compute

u′v′w(av + bu) = vv′q(pu′a) + uu′p(qv′b)
             = vv′q(pua′) + uu′p(qvb′)
             = uvw(a′v′ + b′u′)

u′v′w(ab) = (pu′a)(qv′b) = (pua′)(qvb′) = uvw(a′b′)

That is (a/u) + (b/v) = (av + bu)/uv = (a′v′ + b′u′)/u′v′ = (a′/u′) + (b′/v′) and (a/u)(b/v) = ab/uv = a′b′/u′v′ = (a′/u′)(b′/v′) respectively. And this is just the well-definedness of sum and product.

• Next it would be due to verify that U⁻¹R thereby becomes a commutative ring. That is we have to check that + and · are associative, commutative and satisfy the distributivity law. Further that for any a/u ∈ U⁻¹R we have a/u + 0/1 = a/u and a/u + (−a)/u = 0/1 and finally that a/u · 1/1 = a/u. But this can all be done in straightforward computations, that we wish to omit here.

• It is obvious that thereby κ becomes a homomorphism of rings, κ(0) = 0/1 is the zero-element and κ(1) = 1/1 is the unit element. Further κ(a + b) = (a + b)/1 = (a1 + b1)/(1 · 1) = a/1 + b/1 = κ(a) + κ(b) and κ(ab) = ab/(1 · 1) = (a/1)(b/1) = κ(a)κ(b).

• Next we wish to prove that κ is injective, if and only if U ⊆ nzdR.


To do this we take a look at the kernel of κ

kn(κ) = {a ∈ R | a/1 = 0/1} = {a ∈ R | ∃u ∈ U : au = 0}

By (1.74.(i)) κ is injective iff kn(κ) = {0}. That is iff for any u ∈ U we get au = 0 =⇒ a = 0. And this is just a reformulation of U ⊆ nzd R.

• So it remains to prove that κ is bijective if and only if U ⊆ R∗. First suppose U ⊆ R∗ ⊆ nzd R by (1.43). Then as we have just seen κ is injective. Now consider any a/u ∈ U⁻¹R, then

a/u = (au⁻¹u)/u = (au⁻¹)/1 = κ(au⁻¹) ∈ im(κ)

Hence we found that κ also is surjective. Conversely suppose that κ is bijective. In particular κ is injective and hence U ⊆ nzd R. But κ also is surjective, that is for any u ∈ U there is some b ∈ R such that 1/u = κ(b) = b/1. That is there is some w ∈ U such that w = uwb. Equivalently w(ub − 1) = 0, but as w ∈ nzd R this implies ub = 1 and hence u ∈ R∗. As u has been arbitrary we found U ⊆ R∗.

2
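The construction of (2.96) can be modelled directly for a small ring. A toy model (my own construction, not from the text) with R = Z/6Z and U = {1, 2, 4}, the powers of 2: fractions are pairs (a, u) with (u, a) ∼ (v, b) iff w(va − ub) = 0 for some w ∈ U. Here κ : a ↦ a/1 is not injective, illustrating why U ⊆ nzd R is needed.

```python
# Hedged toy model of U^{-1}R for R = Z/6Z, U = {1, 2, 4} (multiplicatively
# closed), with the equivalence (u, a) ~ (v, b)  <=>  ∃w ∈ U : w(va - ub) = 0.
n = 6
U = (1, 2, 4)

def equiv(fr1, fr2):
    (a, u), (b, v) = fr1, fr2
    return any((w * (v * a - u * b)) % n == 0 for w in U)

# kappa : R -> U^{-1}R, a -> a/1; its kernel {a | ∃u ∈ U : ua = 0} is not {0},
# since 2 * 3 = 0 in Z/6Z, so 3/1 = 0/1 in the localisation.
kernel = sorted(a for a in range(n) if equiv((a, 1), (0, 1)))
print(kernel)  # [0, 3]
```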

Proof of (2.99):
Two of the three implications are clear, thus we are only concerned with: if for any maximal ideal m ⊴ R we have a/1 = b/1 ∈ Rm, then already a = b ∈ R. But since a/1 = b/1 is equivalent to (a − b)/1 = 0/1, it suffices to show

∀ m : a/1 = 0/1 ∈ Rm =⇒ a = 0

By definition a/1 = 0/1 means that there is some u ∉ m such that ua = 0. And the latter can again be formulated as u ∈ ann(a). Thus we have

∀ m : ann(a) ⊄ m

Hence there is no maximal ideal m containing ann(a). But as the annihilator ann(a) is an ideal this is only possible if ann(a) = R and hence a = 1 · a = 0.

In the second claim it is clear that for any prime ideal p ⊴ R we get R ⊆ Rp ⊆ F := quot R. Hence it is clear that R ⊆ ⋂_p Rp. And as smax R ⊆ spec R it also is clear that ⋂_p Rp ⊆ ⋂_m Rm, such that we only have to verify ⋂_m Rm ⊆ R. Thus consider any a/b ∈ F with a/b ∈ ⋂_m Rm. That is for any maximal ideal m ⊴ R there are some a(m) ∈ R and b(m) ∉ m such that a/b = a(m)/b(m). Now define the ideal

b := 〈b(m) | m ∈ smax R 〉i ⊴ R

Suppose we had b ≠ R, then there would be some maximal ideal m such that b ⊆ m. But clearly b(m) ∈ b although b(m) ∉ m, a contradiction. That is b = R and hence 1 ∈ b. That is there is a finite subset M ⊆ smax R and there are r(m) ∈ R (where m ∈ M) such that 1 = ∑_m r(m)b(m). Now recall


that a/b = a(m)/b(m), then we find a/b ∈ R by virtue of

a/b = 1 · (a/b) = (∑_{m∈M} r(m)b(m)) (a/b) = ∑_{m∈M} r(m)a(m) ∈ R

2

Proof of (2.100):
In the first part we will prove the existence of the induced homomorphism ϕ̄. And of course we start with the well-definedness of ϕ̄ as a mapping. Thus let us regard a/u = b/v ∈ U⁻¹R. That is there is some w ∈ U such that vwa = uwb. And thereby we get ϕ̄(a/u) = ϕ(a)ϕ(u)⁻¹ = ϕ(b)ϕ(v)⁻¹ = ϕ̄(b/v) from

ϕ(v)ϕ(w)ϕ(a) = ϕ(vwa) = ϕ(uwb) = ϕ(u)ϕ(w)ϕ(b)

Next we have to check that ϕ̄ truly is a homomorphism of rings. To do this consider any a/u and b/v ∈ U⁻¹R and compute

ϕ̄(a/u · b/v) = ϕ̄(ab/uv) = ϕ(a)ϕ(b)ϕ(u)⁻¹ϕ(v)⁻¹
            = (ϕ(a)ϕ(u)⁻¹)(ϕ(b)ϕ(v)⁻¹) = ϕ̄(a/u) · ϕ̄(b/v)

ϕ̄(a/u + b/v) = ϕ̄((av + bu)/uv) = (ϕ(a)ϕ(v) + ϕ(b)ϕ(u))ϕ(u)⁻¹ϕ(v)⁻¹
            = ϕ(a)ϕ(u)⁻¹ + ϕ(b)ϕ(v)⁻¹ = ϕ̄(a/u) + ϕ̄(b/v)

So we have proved the existence of the homomorphism ϕ̄. It remains to verify that this is the unique homomorphism such that ϕ̄κ = ϕ. That is we consider some homomorphism ψ : U⁻¹R → S such that for any a ∈ R we get ψ(a/1) = ϕ(a). Then for any u ∈ U ⊆ R we get

ψ(1/u) = ψ((u/1)⁻¹) = ψ(u/1)⁻¹ = ϕ(u)⁻¹

And thereby we find for any a/u ∈ U⁻¹R that ψ(a/u) = ψ(a/1 · 1/u) = ψ(a/1)ψ(1/u) = ϕ(a)ϕ(u)⁻¹ = ϕ̄(a/u). That is ψ = ϕ̄, that is ϕ̄ is unique.

2

Proof of (2.102):

• By definition κ⁻¹(u) is the set of all a ∈ R such that a/1 = κ(a) ∈ u, and this has already been the first claim. And κ⁻¹(u) is an ideal of R by virtue of (1.71.(vii)).

• "⊇" If a ∈ a then a/1 = κ(a) ∈ κ(a) ⊆ U⁻¹a. And for any u ∈ U we hence also get a/u = (1/u)(a/1) ∈ U⁻¹a. "⊆" Conversely consider any x ∈ 〈κ(a)〉i. I.e. there are ai ∈ a and xi = bi/vi ∈ U⁻¹R such that

x = ∑_{i=1}^n xi κ(ai) = ∑_{i=1}^n (ai bi)/vi


We now let v := v1 · · · vn ∈ U and v̂i := v/vi ∈ U; then a straightforward computation shows x = a/v for some a := ∑_i ai bi v̂i ∈ a

x = ∑_{i=1}^n (ai bi)/vi = ∑_{i=1}^n (ai bi v̂i)/v = a/v

2

Proof of (2.104):

• First of all a : U is an ideal - 0 ∈ a : U since 1 · 0 = 0 ∈ a. Now consider a, b ∈ a : U. That is there are some u, v ∈ U such that ua and vb ∈ a. This yields uv(a + b) = v(ua) + u(vb) ∈ a and hence a + b ∈ a : U. Likewise for any r ∈ R we get u(ra) = r(ua) ∈ a and hence ra ∈ a : U.

• Next it is clear that a ⊆ a : U , because if a ∈ a then a = 1 ·a ∈ a (dueto 1 ∈ U) and hence a ∈ a : U .

• We finally wish to prove (U−1a) ∩R = a : U . Consider any b ∈ a : U ,that is vb ∈ a for some v ∈ U . Then b/1 = vb/v ∈ U−1a and henceb ∈ (U−1a) ∩ R. Conversely let b ∈ (U−1a) ∩ R, by definition thismeans b/1 ∈ U−1a. That is there is some a ∈ a, u ∈ U such thatb/1 = a/u. That is there is some v ∈ U such that uvb = va ∈ a. Andas uv ∈ U this means b ∈ a : U .

2
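The saturation a : U of (2.104) is easy to compute in a worked example (my own, not from the text): take R = Z, a = 12Z and U = {2^k | k ∈ N}. Then a : U = {a ∈ Z | 2^k a ∈ 12Z for some k}, which is exactly 3Z, matching (U⁻¹a) ∩ R after inverting 2.

```python
# Hedged example: a : U for R = Z, a = 12Z, U = powers of 2, checked on
# residues mod 12. Membership needs only finitely many powers of 2.
def in_a_colon_U(a, max_pow=6):
    """True iff 2^k * a lies in 12Z for some 0 <= k <= max_pow."""
    return any((2**k * a) % 12 == 0 for k in range(max_pow + 1))

saturation = sorted(a for a in range(12) if in_a_colon_U(a))
print(saturation)  # [0, 3, 6, 9]
```

Note how a = 12Z ⊆ a : U = 3Z holds strictly here: inverting 2 absorbs the factor 4 of 12.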

Proof of (2.105):

• U−1(a ∩ b) = (U−1a) ∩ (U−1b): if x ∈ U−1(a ∩ b) then by definitionthere are some a ∈ a ∩ b and u ∈ U such that x = a/u. But asa ∈ a and a ∈ b this already means x ∈ U−1a and x ∈ U−1b. Ifconversely x = a/u ∈ U−1a and x = b/v ∈ U−1b then a/u = b/vmeans that there is some w ∈ U such that vwa = uwb ∈ a ∩ b. Hencex = (vwa)/(uvw) = (uwb)/(uvw) ∈ U−1(a ∩ b).

• U−1(a + b) = (U−1a) + (U−1b): if x ∈ U−1(a + b) then there are some a ∈ a, b ∈ b and u ∈ U such that x = (a + b)/u = (a/u) + (b/u) ∈ (U−1a) + (U−1b). And if we are conversely given any a/u ∈ U−1a and b/v ∈ U−1b then a/u + b/v = (av + bu)/(uv) ∈ U−1(a + b).

• U−1(a b) = (U−1a)(U−1b): if x ∈ U−1(a b) then there are some f ∈ a b and u ∈ U such that x = f/u. Again f is of the form f = ∑i aibi for some ai ∈ a and bi ∈ b. Altogether we get

x = f/u = ∑i aibi/u = ∑i (ai/1)(bi/u) ∈ (U−1a)(U−1b)

Conversely consider ai/ui ∈ U−1a and bi/vi ∈ U−1b. As always we let u := u1 . . . un and ui := u/ui ∈ U again, likewise v := v1 . . . vn and vi := v/vi ∈ U. Then ai/ui = (aiui)/u, bi/vi = (bivi)/v and

∑i (ai/ui)(bi/vi) = ∑i (aiui/u)(bivi/v) = (1/uv) ∑i uiaivibi ∈ U−1(a b)

460 18 Proofs - Commutative Algebra


• U−1√a = √(U−1a): if x ∈ U−1√a then x = b/v for some b ∈ √a and v ∈ U. That is there is some k ∈ N such that bk ∈ a. And therefore xk = bk/vk ∈ U−1a. Hence x is contained in the radical of U−1a. Conversely consider any x = b/v such that there is some k ∈ N with xk = bk/vk ∈ U−1a. Then there is some a ∈ a and u ∈ U such that bk/vk = a/u. That is uwbk = vkwa ∈ a for some w ∈ U. Hence (uwb)k = (uw)k−1(uwbk) ∈ a, that is uwb ∈ √a. And this again yields x = b/v = (uwb)/(uvw) ∈ U−1√a.

• Next we prove (a : U) : U = a : U . Clearly, as 1 ∈ U we have a : U ⊆(a : U) : U (as for any a ∈ a : U we have a = a · 1 ∈ (a : U) : U).Conversely if a ∈ (a : U) : U then there is some u ∈ U such thatau ∈ a : U . Again this means that there is some v ∈ U such thata(uv) = (au)v ∈ a. But as uv ∈ U this again yields a ∈ a : U .

• For (a ∩ b) : U = (a : U) ∩ (b : U) we use a straightforward reasoningagain: If a ∈ (a∩ b) : U then there is some u ∈ U such that au ∈ a∩ b.And this again means a ∈ a : U and a ∈ b : U . Conversely considera ∈ (a : U) ∩ (b : U). That is there are v, w ∈ U such that av ∈ a andaw ∈ b. Now let u := vw ∈ U then au = (av)w = (aw)v ∈ a ∩ b suchthat a ∈ (a ∩ b) : U .

• We turn our attention to √(a : U) = (√a) : U. If a : U = R then we get √(a : U) = √R = R. And as a ⊆ √a we also have R = a : U ⊆ (√a) : U ⊆ R. And hence we get √(a : U) = R = (√a) : U. Thus assume a : U ≠ R. If x ∈ √(a : U) then by definition there is some k ∈ N such that xk ∈ a : U. And this means xku ∈ a for some u ∈ U. Clearly we have k ≥ 1, as else u ∈ a such that a : U = R. And thereby we get (xu)k = (xku)uk−1 ∈ a, too. But this means xu ∈ √a and hence x ∈ (√a) : U. Conversely if x ∈ (√a) : U then there is some u ∈ U such that xu ∈ √a. Thus there is some k ∈ N such that xkuk = (xu)k ∈ a. But as uk ∈ U this yields xk ∈ a : U and hence x ∈ √(a : U).

• (u∩w)∩R = κ−1(u∩w) = κ−1(u)∩κ−1(w) = (u∩R)∩(w∩R) = a∩b.

• (u + w)∩R = (a + b) : U : if c ∈ (a + b) : U then there are some a ∈ a,b ∈ b and u ∈ U such that uc = a + b. That is (uc)/1 = (a + b)/1 =(a/1) + (b/1) ∈ u + w. And thereby also c/1 = (1/u)((uc)/1) ∈ u + w.But this finally is c ∈ (u + w) ∩ R. Conversely let c ∈ (u + w) ∩ R,that is c/1 ∈ u + w. Thereby we find a/u ∈ u and b/v ∈ w such thatc/1 = a/u+ b/v = (va+ub)/uv. Hence there is some w ∈ U such thatuvwc = vwa+ uwb ∈ a + b (as a/u ∈ u implies a/1 = (u/1)(a/u) ∈ uand hence a ∈ a, likewise b ∈ b). And as uvw ∈ U this is c ∈ (a+b) : U .

• (uw) ∩ R = (a b) : U: if c ∈ (a b) : U then there are some ai ∈ a, bi ∈ b and u ∈ U such that uc = ∑i aibi ∈ a b. Thereby (uc)/1 = ∑i (ai/1)(bi/1) ∈ uw and hence c/1 = (1/u)((uc)/1) ∈ uw which means c ∈ (uw) ∩ R. And if conversely c ∈ (uw) ∩ R then there are ai/ui ∈ u and bi/vi ∈ w such that

c/1 = ∑i (ai/ui)(bi/vi) = ∑i aibiuivi/(uv) ∈ uw


where as usual u := u1 . . . un, v := v1 . . . vn and ui := u/ui, vi := v/vi ∈ U. From this equality we again get uvwc = ∑i aibiuivi ∈ a b for some w ∈ U (as ai/ui ∈ u implies ai/1 = (ui/1)(ai/ui) ∈ u and hence ai ∈ a, likewise bi ∈ b). And as uvw ∈ U this is c ∈ (a b) : U.

• (√u) ∩ R = √a: if b ∈ (√u) ∩ R then b/1 ∈ √u, that is there is some k ∈ N such that (b/1)k = bk/1 ∈ u. Again this is bk ∈ u ∩ R = a and hence b ∈ √a. Conversely if b ∈ √a then bk ∈ a for some k ∈ N. Hence (b/1)k = bk/1 ∈ u which yields b/1 ∈ √u and thereby b ∈ (√u) ∩ R.

□
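In R = Z the identities of (2.105) can be checked numerically: ideals are aZ, the intersection of aZ and bZ is lcm(a, b)Z, their sum is gcd(a, b)Z, and contracting U−1(nZ) back to Z is the saturation nZ : U. A small sketch, assuming U = {2^k}; the helper names are ours, not the book's:

```python
from math import gcd

def sat(n, u):
    # generator of (nZ : U) = U^-1(nZ) contracted to Z, for U = {u^k}
    while gcd(n, u) > 1:
        n //= gcd(n, u)
    return n

def lcm(a, b):
    return a * b // gcd(a, b)

a, b, u = 12, 18, 2
# U^-1(a ∩ b) = (U^-1 a) ∩ (U^-1 b): intersection of aZ and bZ is lcm(a,b)Z
assert sat(lcm(a, b), u) == lcm(sat(a, u), sat(b, u))
# U^-1(a + b) = (U^-1 a) + (U^-1 b): sum of aZ and bZ is gcd(a,b)Z
assert sat(gcd(a, b), u) == gcd(sat(a, u), sat(b, u))
```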

Proof of (2.107):
Nearly all of the statements are purely definitional. The two exceptions to this are the equalities claimed for U−1specR and U−1smaxR: Consider any prime ideal p i R with p ∩ U = ∅. If now ua ∈ p then u ∈ p or a ∈ p, as p is prime. But u ∈ p is absurd, due to p ∩ U = ∅. This only leaves a ∈ p and hence p ∈ U−1idealR. Conversely let p i R be a prime ideal with p = p : U. If now u · 1 = u ∈ p ∩ U then 1 ∈ p : U = p in contradiction to p ≠ R. Hence we get p ∩ U = ∅. Now recall that any maximal ideal is prime. Then repeating the arguments above we also find m = m : U ⟺ m ∩ U = ∅ for any maximal ideal m i R.

□

Proof of (2.108):

• First of all it is clear that the mapping u 7→ u ∩ R = κ−1(u) maps ideals to ideals, as it is just the pull back along the homomorphism κ. Now suppose that ua ∈ u ∩ R - this is ua/1 ∈ u and hence a/1 = (1/u)(ua/1) ∈ u. But this again implies a ∈ u ∩ R and hence u ∩ R ∈ U−1idealR. Conversely for any ideal a i R the set U−1a is an ideal of U−1R by (2.102). Thus both of the mappings are well-defined.

• Next we remark that for any a ∈ U−1idealR and u i U−1R we obtain

(U−1a) ∩ R = a : U = a
U−1(u ∩ R) = { a/u | a/1 ∈ u, u ∈ U } = u

Thereby the first equality follows from (2.104) and the definition of U−1idealR. And for the second equality consider a/u where a/1 ∈ u. Then we find that a/u = (1/u)(a/1) ∈ u as well. And if conversely a/u ∈ u then we likewise get a/1 = (u/1)(a/u) ∈ u again. Altogether these equalities yield that the maps given are mutually inverse.

• If a ⊆ b and we consider any a/u ∈ U−1a then a ∈ a ⊆ b and hence a/u ∈ U−1b. And if conversely u ⊆ w and a ∈ u ∩ R then a/1 ∈ u ⊆ w such that a ∈ w ∩ R. That is both of the mappings are order preserving.

• Next we want to prove that this correspondence preserves radical ideals. Thus if a ∈ U−1idealR is a radical ideal then we have a = √a and thereby U−1a is a radical ideal due to (2.105) (see below). Likewise we see that if u ∈ idealU−1R is a radical ideal, then u ∩ R is radical, too:

√(U−1a) = U−1(√a) = U−1a
√(u ∩ R) = (√u) ∩ R = u ∩ R

• Thus it only remains to verify that prime (resp. maximal) ideals truly correspond to prime (resp. maximal) ideals. For prime ideals this is an easy, straightforward computation: ab/uv ∈ U−1p implies ab ∈ p, thus a ∈ p or b ∈ p, which again is a/u ∈ U−1p or b/v ∈ U−1p. Conversely ab ∈ r ∩ R implies ab/1 ∈ r and hence a/1 ∈ r or b/1 ∈ r, that is a ∈ r ∩ R or b ∈ r ∩ R. And for maximal ideals this is clear, as the mappings given obviously preserve inclusions.

□

Proof of (2.112):

(i) We will first prove the injectivity of the map given: thus suppose f/u and g/v are mapped to the same point in (U−1R)[t]. Comparing the coefficients of these polynomials we find that f[α]/u = g[α]/v ∈ U−1R for any α ∈ N. That is there is some w(α) ∈ U such that vw(α)f[α] = uw(α)g[α]. Without loss of generality assume deg f ≤ deg g and let w := w(0)w(1) . . . w(deg g) ∈ U. Then clearly also vwf[α] = uwg[α] for any α ∈ 0 . . . deg g and hence even for any α ∈ N. This now yields

vwf(t) = ∑α vwf[α]tα = ∑α uwg[α]tα = uwg(t)

And hence we have just seen that f/u = g/v as elements of U−1(R[t]). For the surjectivity we are given a polynomial f in (U−1R)[t]

f = ∑α (f[α]/u(α)) tα

Let now u := u(0)u(1) . . . u(deg f) ∈ U; then it is clear that we get g/u 7→ f for the polynomial

g := ∑α (u/u(α)) f[α] tα

(ii) We regard the homomorphism R[t] → Ru : f 7→ f(1/u). It is clear that this is surjective, since atk 7→ a/uk reaches all the elements of Ru. We will now show that the kernel is given to be

kn(f 7→ f(1/u)) = (ut − 1)R[t]

The inclusion ” ⊇ ” is clear, as ut − 1 7→ u/u − 1/1 = 0/1. Thus suppose f 7→ 0/1; then we need to show that ut − 1 | f in R[t]. Let f/1 := κ(f) ∈ (U−1R)[t]; then it is clear that 1/u is a root (f/1)(1/u) = f(1/u) = 0/1 of f/1 and hence t − 1/u | f/1 in (U−1R)[t]. Since u/1 is a unit in Ru multiplication with u/1 changes nothing and hence (ut − 1)/1 | f/1 in (U−1R)[t]. Thus

(f/1)(U−1R)[t] ⊆ ((ut − 1)/1)(U−1R)[t]

But it is clear that the ideal (f/1)(U−1R)[t] corresponds to fR[t], formally ((f/1)(U−1R)[t]) ∩ R[t] = fR[t]. And likewise the other ideal ((ut − 1)/1)(U−1R)[t] corresponds to (ut − 1)R[t]. As the correspondence respects inclusions we have finally found

fR[t] ⊆ (ut − 1)R[t]

This means f ∈ (ut − 1)R[t] and hence we have also proved ” ⊆ ”. Thus f 7→ f(1/u) induces the isomorphism given; its inverse is obviously given by a/uk 7→ atk + (ut − 1)R[t].

(iii) First of all U−1R is an integral domain by (2.110) and hence the quotient field quotU−1R is well-defined again. We will now prove the well-definedness and injectivity of the map a/b 7→ (a/1)/(b/1). As R is an integral domain (and 0 ∉ U) we get (for any a, b, c and d ∈ R)

(a/1)/(b/1) = (c/1)/(d/1)
⟺ ad/1 = (a/1)(d/1) = (b/1)(c/1) = bc/1
⟺ ∃ u ∈ U : uad = ubc
⟺ ad = bc
⟺ a/b = c/d

The homomorphism property of a/b 7→ (a/1)/(b/1) is clear by definition of the addition and multiplication in localisations. Thus it only remains to check the surjectivity: consider any (a/u)/(b/v) ∈ quotU−1R, then we claim av/bu 7→ (a/u)/(b/v). To see this we have to show (av/1)/(bu/1) = (a/u)/(b/v) and this follows from:

(a/u)(bu/1) = abu/u = ab/1 = abv/v = (b/v)(av/1)

(iv) We first check the well-definedness and injectivity of the map given. This can be done simultaneously by the following computation (recall that R was assumed to be an integral domain and a, b ≠ 0)

(x/a^h)/((b/a^n)^i) = (y/a^j)/((b/a^n)^k)
⟺ (b/a^n)^k (x/a^h) = (b/a^n)^i (y/a^j)
⟺ xb^k/a^(kn+h) = yb^i/a^(in+j)
⟺ x a^(in+j) b^k = y a^(kn+h) b^i
⟺ a^(i+k) b^(h+j) (x a^(in+j) b^k − y a^(kn+h) b^i) = 0
⟺ x a^(in+i+j+k) b^(h+j+k) = y a^(kn+h+i+k) b^(h+i+j)
⟺ (ab)^(j+k) x a^((n+1)i) b^h = (ab)^(h+i) y a^((n+1)k) b^j
⟺ (x a^((n+1)i) b^h)/(ab)^(h+i) = (y a^((n+1)k) b^j)/(ab)^(j+k)

Hence we have a well-defined injective mapping and it is straightforward to check that it even is a homomorphism of rings. Thus it only remains to check the surjectivity. That is we are given any x/(ab)^k ∈ Rab, then we obtain a preimage by

(x/a^((n+1)k))/((b/a^n)^k) 7→ (x a^((n+1)k) b^((n+1)k))/(ab)^((n+1)k+k) = x/(ab)^k

(v) Let us regard the map b/v 7→ (b + a)/(v + a). We first check its well-definedness: suppose a/u = b/v ∈ U−1R, that is there is some w ∈ U such that vwa = uwb. Of course this yields (v + a)(w + a)(a + a) = (u + a)(w + a)(b + a). But as w ∈ U we also have w + a ∈ U/a and hence (a + a)/(u + a) = (b + a)/(v + a). And it is immediately clear that this map even is a homomorphism of rings. Next we will prove its surjectivity, that is we are given any (b + a)/(v + a) ∈ (U/a)−1(R/a). As v + a ∈ U/a there is some u ∈ U such that v + a = u + a. But from this we find b/u ∈ U−1R and b/u 7→ (b + a)/(u + a) = (b + a)/(v + a). Thus it remains to prove

kn(b/v 7→ (b + a)/(v + a)) = U−1a = { a/u | a ∈ a, u ∈ U }

The inclusion ” ⊇ ” is clear: given a/u where a ∈ a we immediately find that (a + a)/(u + a) = (0 · u + a)/(1 · u + a) = (0 + a)/(1 + a) = 0. Conversely consider some b/v ∈ U−1R such that (b + a)/(v + a) = 0 = (0 + a)/(1 + a). That is there is some x + a ∈ U/a such that xb + a = (x + a)(b + a) = 0 + a. And this means xb ∈ a. Further - since x + a ∈ U/a - there is some w ∈ U such that x + a = w + a. That is a := w − x ∈ a. Now compute bw = bw − bx + bx = ba + bx ∈ a. Hence we found b/v = (bw)/(vw) ∈ U−1a. Thus according to the isomorphism theorem b/v 7→ (b + a)/(v + a) induces the isomorphy as given in the claim.

(vi) Let us denote U := R \ p, as a ⊆ p we get U ∩ a = ∅ and ap = U−1ais just the definition. Hence the claim is already included in (v).

(vii) We first show that this mapping is well defined, i.e. u/1 ∉ p for u ∉ p. But this is clear, since p = p ∩ R. Thus we concern ourselves with the surjectivity: given (r/ak)/(u/al) ∈ (Ra)p we see that

(ral)/(uak) 7→ (ral/1)/(uak/1) = ((1/ak+l)(ral/1))/((1/ak+l)(uak/1)) = (r/ak)/(u/al)

And for the injectivity we have to consider (r/1)/(u/1) = (s/1)/(v/1). Hence there is some w/am ∉ p such that vwr/am = uws/am ∈ Ra. Thus there is some n ∈ N such that am+nvwr = am+nuws ∈ R. But w/am ∉ p means that w ∉ p and as also a ∉ p we get that z := am+nw ∉ p. But as vrz = usz this means that r/u = s/v ∈ Rp.

□
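Part (ii) can be made tangible for R = Z and u = 2, where Ru = Z[1/2]: a polynomial f ∈ Z[t] lies in the kernel of f 7→ f(1/2) exactly when (2t − 1) | f in Z[t]. Since 2t − 1 is primitive, this reduces (by Gauss's lemma) to testing f(1/2) = 0 in exact rational arithmetic — a sketch, with coefficient lists ordered lowest degree first (our convention, not the book's):

```python
from fractions import Fraction

def in_kernel(coeffs):
    # f in Z[t], coeffs lowest degree first; f lies in kn(f -> f(1/2))
    # iff f(1/2) == 0, which for the primitive polynomial 2t - 1 is
    # equivalent to (2t - 1) | f in Z[t]
    half = Fraction(1, 2)
    return sum(c * half**k for k, c in enumerate(coeffs)) == 0

assert in_kernel([-1, 1, 2])   # 2t^2 + t - 1 = (2t - 1)(t + 1)
assert not in_kernel([1, 1])   # t + 1 maps to 3/2, not 0
```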

Proof of (2.110):

• R integral domain =⇒ U−1R integral domain
Suppose (ab)/(uv) = (a/u)(b/v) = 0/1 ∈ U−1R. By definition of the localisation this means that there is some w ∈ U such that wab = 0. But w ≠ 0 (as 0 ∉ U) and R is an integral domain, thus we get ab = 0. And this again yields a = 0 or b = 0. Of course this yields a/u = 0/1 or b/v = 0/1, that is U−1R is an integral domain.

• R noetherian/artinian =⇒ U−1R noetherian/artinian
Suppose that R is noetherian and consider an ascending chain of ideals u1 ⊆ u2 ⊆ u3 ⊆ . . . in U−1R. Let us denote the corresponding ideals of R by ak := uk ∩ R. Then the ak form an ascending chain a1 ⊆ a2 ⊆ a3 ⊆ . . . of ideals of R, too. And as R is noetherian this means that there is some s ∈ N such that the chain stabilizes

a1 ⊆ a2 ⊆ a3 ⊆ . . . ⊆ as = as+1 = as+2 = . . .

Yet the ideals correspond to one another, that is uk = U−1ak. So if we transfer this stabilized chain back to U−1R we find that

u1 ⊆ u2 ⊆ u3 ⊆ . . . ⊆ us = us+1 = us+2 = . . .

the corresponding chain stabilizes, too. And as the chain has been arbitrary this means that U−1R is noetherian, as well. The fact that R artinian implies U−1R artinian can be proved in complete analogy.

• R PID =⇒ U−1R PID
Consider any ideal u i U−1R; by the correspondence theorem (2.108) u corresponds to the ideal a := u ∩ R i R of R. But as R is a PID there is some a ∈ R such that a = aR. And the following computation yields that u = (a/1)U−1R is a principal ideal, too

u = U−1(aR) = { ab/u | b ∈ R, u ∈ U } = (a/1)U−1R

• R UFD =⇒ U−1R UFD
If 0 ∈ U then U−1R = 0 and hence U−1R is a UFD. Thus suppose 0 ∉ U. By definition any UFD R is an integral domain, and hence U−1R is an integral domain, too. Thus it suffices to check that any a/u ∈ U−1R admits a decomposition into prime factors. To see this we will first prove the following assertion

p ∈ R prime =⇒ p/1 ∈ (U−1R)∗ or p/1 ∈ U−1R prime

Thus let p ∈ R be prime and assume p/1 | (a/u)(b/v) = (ab)/(uv). That is there is some h/w ∈ U−1R such that ph/w = ab/uv. And as R is an integral domain this implies uvph = wab. In particular p | wab and as p is prime this means p | w or p | a or p | b. First suppose p | w, i.e. there is some γ ∈ R such that γp = w. Then (p/1)(γ/w) = w/w = 1 and hence p/1 ∈ (U−1R)∗ is a unit. Otherwise, if p | a - say αp = a - then (p/1)(α/u) = a/u. That is p/1 | a/u. Analogously we find p | b =⇒ p/1 | b/v. Thus either p/1 is a unit or prime, as claimed. Now we may easily verify that U−1R is an UFD: consider any 0 ≠ a/u ∈ U−1R. As R is an UFD we find a decomposition of a ≠ 0 into prime elements a = αp1 . . . pk (where α ∈ R∗ and pi ∈ R prime). Now let I := { i ∈ 1 . . . k | pi/1 ∈ (U−1R)∗ } be the set of indices i for which pi/1 is a unit. Then

a/u = (α/u)(p1/1) . . . (pk/1) = ((α/u) ∏i∈I pi/1) ∏i∉I pi/1

That is we have found a decomposition of a/u into (a unit and) prime factors. And as a/u has been an arbitrary non-zero element, this proves that U−1R truly is an UFD.

• R normal =⇒ U−1R normal
We have already seen that U−1R is an integral domain, since R is an integral domain. To prove that U−1R is integrally closed in its quotient field we will make use of the isomorphy (2.112.(iii))

Φ : quotR ∼→ quotU−1R : r/s 7→ (r/1)/(s/1)

Recall that R is embedded canonically into quotR by a 7→ a/1. Hence quotR is considered to be an R-algebra under the scalar multiplication a(r/s) := (a/1) · (r/s) = ar/s. Analogously, for any a/w ∈ U−1R and any (r/u)/(s/v) ∈ quotU−1R we have

(a/w) (r/u)/(s/v) = ((a/w)/(1/1)) · ((r/u)/(s/v)) = (ar/uw)/(s/v)

Thus if we consider any a ∈ R and any x = r/s ∈ quotR, then the isomorphism Φ satisfies Φ(ax) = (a/1)Φ(x), which can be seen in a straightforward computation

Φ(ax) = Φ(ar/s) = (ar/1)/(s/1) = ((a/1)/(1/1)) · ((r/1)/(s/1)) = (a/1) Φ(r/s) = (a/1) Φ(x)


Now consider any x/y ∈ quotU−1R that is integral over U−1R. That is there are some n ∈ N and p0, . . . , pn−1 ∈ U−1R such that

(x/y)^n + ∑k pk (x/y)^k = 0

Then we need to show that x/y ∈ U−1R. To do this let us pick up some r/s ∈ quotR such that x/y = Φ(r/s). Further note that pk ∈ U−1R, that is pk = ak/uk for some ak ∈ R and uk ∈ U, and let u := u0 . . . un−1 ∈ U. Then multiplication with u^n/1 ∈ U−1R yields

((u/1) Φ(r/s))^n + ∑k (ak u^(n−k)/uk) ((u/1) Φ(r/s))^k = 0

Now recall that (u/1)Φ(r/s) = Φ(u(r/s)) = Φ(ur/s). Further note that due to k < n we have uk | u | u^(n−k). Therefore we may define bk := ak u^(n−k)/uk ∈ R. Inserting this in the above equation yields

Φ(ur/s)^n + ∑k (bk/1) Φ(ur/s)^k = 0

Again we use (bk/1)Φ(ur/s)^k = Φ(bk (ur/s)^k) and use the homomorphism properties of Φ to place Φ in front, obtaining an equation of the form Φ(. . . ) = 0. But as Φ has been an isomorphism it can be eliminated from this equation, ultimately yielding

(ur/s)^n + ∑k bk (ur/s)^k = 0

Thus we have found that ur/s ∈ quotR is integral over R. Hence by assumption on R we have ur/s ∈ R, that is there is some a ∈ R such that ur/s = a/1. And this is ur = as such that r/s = a/u ∈ U−1R. Thus we have finally found

x/y = Φ(r/s) = Φ(a/u) = (a/1)/(u/1) = (a/u)/(1/1) ∈ U−1R

• R Dedekind domain =⇒ U−1R Dedekind domain
If R is a Dedekind domain, then R is noetherian, integrally closed and specR = { 0 } ∪ smaxR. As we have seen this implies that U−1R is noetherian and integrally closed, too. Thus consider a non-zero prime ideal 0 ≠ w i U−1R. Then m := w ∩ R i R is a non-zero prime ideal of R, too (if we had m = 0 then w = U−1m = 0, a contradiction). Thus m is maximal and hence w = U−1m is maximal, too. That is specU−1R = { 0 } ∪ smaxU−1R. Altogether U−1R is a Dedekind domain, too.

□


Proof of (2.111):
We have already proved (a) =⇒ (b) in (2.110) and (b) =⇒ (c) is trivial. Thus it only remains to verify (c) =⇒ (a): we denote by F := quotR the quotient field of R and regard R and any localisation Rm as a subring of F (also refer to (2.99) for this). Now regard any x ∈ F integral over R, that is f(x) = 0 for some normed polynomial f ∈ R[t]. If we interpret f ∈ Rm[t], then f(x) = 0 again and hence x is integral over Rm (where m i R is any maximal ideal of R). By assumption Rm is normal and hence x ∈ Rm again. And as m has been arbitrary the local-global principle (2.99) yields:

x ∈ ⋂m∈smaxR Rm = R

□

Proof of (2.50): (continued)

• (c) =⇒ (d): property (2) of (c) and (d) are identical so we only need to verify (1) in (d). If all ai = 0 are zero then there is nothing to prove (s := 0). And if at least one ar ≠ 0 is non-zero, then so are all ar+i for i ∈ N. Omitting the a0 to ar−1 we may assume ai ≠ 0 for any i ∈ N. By assumption we have aiR ⊆ ai+1R for any i ∈ N. And this translates into ai+1 | ai. By (2.54.(iii)) this yields a descending chain of natural numbers for any prime element p ∈ R

〈p | a0〉 ≥ 〈p | a1〉 ≥ . . . ≥ 〈p | ai〉 ≥ . . . ≥ 0

Of course this means that the sequence 〈p | ai〉 has to become stationary. That is there is some s(p) ∈ N such that for all i ∈ N we get 〈p | as(p)+i〉 = 〈p | as(p)〉. We now choose a representing system P of the prime elements of R modulo associateness. Then there only are finitely many p ∈ P such that 〈p | a0〉 ≠ 0. And for those p ∈ P with 〈p | a0〉 = 0 we may clearly take s(p) = 0. Thereby we may define

s := max{ s(p) | p ∈ P } ∈ N

Then it is clear that for any p ∈ P and any i ∈ N we get 〈p | as+i〉 = 〈p | as〉. And by (2.54.(iv)) this is nothing but as+i ≈ as which again translates into as+iR = asR. And this is just what had to be proved.

• (d) =⇒ (c): property (2) of (c) and (d) are identical so we only needto verify (1) in (c). But the proof of this would be a literal repetitionof (2.49.(iii)). There we have proved the existence of an irreducibledecomposition using the ascending chain condition of noetherian rings.But in fact this property has only been used for principal ideals andhence is covered by the assumption (1) in (d). Hence the same proofcan be applied here, too.

• (a) =⇒ (e): As p ≠ 0 is non-zero, there is some 0 ≠ a ∈ R with a ∈ p. Clearly a ∉ R∗ as else p = R. We now use a prime decomposition of a, that is αp1 . . . pk = a ∈ p. Thereby k ≥ 1 as else a ∈ R∗. And as p is prime we either get α ∈ p or pi ∈ p for some i ∈ 1 . . . k. But again α ∈ p would yield p = R which cannot be. Thus p := pi ∈ p where p is prime by construction.


• (e) =⇒ (f): We assume R ≠ 0 (else the statements herein do not make sense!). It is clear that 0 ∉ D (as R is an integral domain, α ∈ R∗ and pi ≠ 0). Therefore D ⊆ R \ { 0 } such that we obtain the canonical homomorphism

D−1R → quotR : a/d 7→ a/d

Due to 0 ∉ D we also see that D−1R ≠ 0 is not the zero-ring. Therefore D−1R contains a maximal ideal u i D−1R - by (2.4.(iv)) - such that p := u ∩ R ∈ D−1smaxR is maximal too - by (2.108) - in particular prime. Suppose p ≠ 0; then by assumption (e) there is some prime element p ∈ p. And thereby p ∈ p ∩ D in contradiction to p ∈ D−1smaxR. Thus p = 0 and therefore u = D−1p = 0, too. But 0 i D−1R being maximal yields that D−1R is a field. In particular the canonical homomorphism is injective (as it is non-zero). Now choose any a/b ∈ quotR; then a, b ∈ R and hence a/1, b/1 ∈ D−1R. Yet b ≠ 0 and hence (b/1)−1 = 1/b ∈ D−1R, as D−1R is a field. Therefore a/b ∈ D−1R such that the canonical homomorphism also is surjective. By construction this even is an equality of sets.

• (f) =⇒ (a): Consider any 0 ≠ a ∈ R. Then 1/a ∈ quotR = D−1R, that is there are b ∈ R and d ∈ D such that 1/a = b/d. And as R is an integral domain this is ab = d ∈ D. Yet D is saturated by (2.48.(vi)) and therefore a ∈ D. But this just means that a admits a prime decomposition.

□

Proof of (2.75):
We have already proved statements (i) to (vii) on page (441), so it only remains to prove statement (viii). We have already seen in (vii) that the map b 7→ b ∩ R = κ−1(b) is well-defined (as κ−1(q) = q ∩ R = p again). Thus we next check the well-definedness of a 7→ U−1a. First of all we have √(U−1a) = U−1√a = U−1p = q due to (2.105). Now consider a/u and b/v ∈ U−1R such that (ab)/(uv) = (a/u)(b/v) ∈ U−1a but b/v ∉ U−1a. Then ab ∈ (U−1a) ∩ R = a : U, that is there is some w ∈ U such that abw ∈ a. But bw ∉ a, as else b/v = (bw)/(vw) ∈ U−1a. As a is primary this implies a ∈ √a = p and hence a/u ∈ U−1p = q = √(U−1a). This means that U−1a is primary again and hence a 7→ U−1a is well-defined.

□

Proof of (2.15):

(i) As the union ranges over all prime ideals p with p ⊆ zdR it is clear that the union is contained in zdR again. That is the inclusion ” ⊇ ” is trivial. For the converse inclusion let us denote U := nzdR; then U is multiplicatively closed. Let now a ∈ zdR be a zero-divisor. That is we may choose some 0 ≠ b ∈ R such that ab = 0. This implies aR ⊆ zdR (because given any x ∈ R we get (ax)b = x(ab) = x0 = 0) or in other words aR ∩ U = ∅. Thus by (2.4.(iii)) we may choose an ideal p maximal among the ideals b satisfying aR ⊆ b ⊆ R \ U = zdR and this is prime. In particular a ∈ p such that a is also contained in the union of prime ideals contained in zdR.


(ii) Let p be a minimal prime ideal of R; then by the correspondence theorem (2.108) the spectrum of the localised ring Rp corresponds to

specRp ←→ { q ∈ specR | q ⊆ p } = { p }

Hence Rp has precisely one prime ideal, namely the ideal pp := (R \ p)−1p corresponding to p. And by (2.17.(v)) this yields

nilRp = ⋂ specRp = pp

Now consider any a ∈ p; then a/1 ∈ pp and hence there is some k ∈ N such that ak/1 = (a/1)k = 0/1. That is there is some u ∈ R \ p such that uak = 0. But k ≠ 0 as else u = 0 ∈ p. Hence we may choose 1 ≤ k minimally such that uak = 0. Then we let b := uak−1 and thereby get b ≠ 0 (as k has been minimal) and ab = 0. That is a ∈ zdR and hence p ⊆ zdR (as a has been arbitrary).

□
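Both statements of (2.15) can be observed in R = Z/12, where zdR is exactly the union of the two minimal primes 2R and 3R. A brute-force sketch (the helper name is ours, not the book's):

```python
def zero_divisors(n):
    # a is a zero divisor in Z/n iff a*b = 0 for some b != 0 (here 0 is included)
    return {a for a in range(n) if any(a * b % n == 0 for b in range(1, n))}

n = 12
# union of the minimal primes 2(Z/12) and 3(Z/12)
union_of_primes = {a for a in range(n) if a % 2 == 0 or a % 3 == 0}
assert zero_divisors(n) == union_of_primes
```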

Proof of (2.113):
(b) =⇒ (a): of course R = 0 cannot occur, as else R∗ = R and hence R \ R∗ = ∅ was no ideal of R. Thus R ≠ 0 and hence R contains a maximal ideal, by (2.4.(iv)). And if m i R is any maximal ideal of R then we have m ≠ R and hence m ⊆ R \ R∗ ≠ R. By maximality of m we get m = R \ R∗, which also is the uniqueness of the maximal ideal.
(a) =⇒ (b): let m be the one and only maximal ideal of R. As m ≠ R we again get m ⊆ R \ R∗. Conversely, if a ∈ R \ R∗ then aR ≠ R. Hence there is a maximal ideal containing aR, by (2.4.(ii)). But the only maximal ideal is m such that a ∈ aR ⊆ m. And as a has been arbitrary this proves R \ R∗ ⊆ m and hence m = R \ R∗. In particular R \ R∗ is an ideal of R.

□

Proof of (2.114):
(a) =⇒ (b): consider any a ≠ R; then by (2.4.(ii)) a is contained in some maximal ideal, and the only maximal ideal available is m, such that a ⊆ m.
(b) =⇒ (a): first of all m is a maximal ideal, as (due to (b)) m ⊆ a i R implies m = a or a = R. And if n i R is any maximal ideal of R, then we have n ≠ R and hence (due to (b)) n ⊆ m. But by maximality of n and m ≠ R we now conclude n = m. Altogether we have smaxR = { m } as claimed.
(a) =⇒ (c): has already been proved in (2.113) - in the form R \ R∗ = m. (c) =⇒ (d) is trivial, so it only remains to prove the implication (d) =⇒ (b): consider any ideal a ≠ R again; then we have a ⊆ R \ R∗ and due to (d) this yields a ⊆ R \ R∗ ⊆ m.

□
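The criterion of (2.113) - R is local iff the non-units R \ R∗ form an ideal - is easy to test in the finite rings Z/n (a sketch; is_ideal is our brute-force check, not a book function):

```python
from math import gcd

def nonunits(n):
    # a is a unit of Z/n iff gcd(a, n) = 1
    return {a for a in range(n) if gcd(a, n) != 1}

def is_ideal(s, n):
    # closed under addition and under multiplication by arbitrary ring elements
    return all((a + b) % n in s for a in s for b in s) and \
           all(r * a % n in s for a in s for r in range(n))

assert is_ideal(nonunits(8), 8)        # Z/8 is local, maximal ideal 2(Z/8)
assert not is_ideal(nonunits(12), 12)  # 2 + 3 = 5 is a unit in Z/12
```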

Proof of (2.116):
By the correspondence theorem (2.108) it is clear that mp is in fact a maximal ideal of Rp. Now suppose np = (R \ p)−1n i Rp is any maximal ideal of Rp. In particular np is a prime ideal and hence (by the correspondence theorem again) n ∩ (R \ p) = ∅. In other words that is n ⊆ p and hence np ⊆ mp.


But by the maximality of mp this is np = mp. Thus mp is the one and onlymaximal ideal of Rp - that is Rp is a local ring.

Now regard Rp → quotR/p : a/u 7→ (a + p)/(u + p). This is well-defined and surjective, since u + p ≠ 0 + p if and only if u ∉ p. And since R/p is an integral domain we have (a + p)/(u + p) = (0 + p)/(1 + p) if and only if a + p = 0 + p, which is equivalent to a ∈ p again. Thus the kernel of this epimorphism is just mp and hence it induces the isomorphism given.

□

Proof of (2.119):

(U) As R ≠ 0 we have 1 ≠ 0 and hence by property (1) we find ν(1) ∈ N. Thus by (2) we get ν(1) = ν(1 · 1) = ν(1) + ν(1) ∈ Z. And together this implies ν(1) = 0. Now consider any α ∈ R∗; then by (2) again 0 = ν(1) = ν(αα−1) = ν(α) + ν(α−1) ≥ ν(α) ≥ 0, hence ν(α) = 0.

(I) If we are given a, b ∈ R with ab = 0 then by (1) and (2) we have ∞ = ν(ab) = ν(a) + ν(b). Thus ν(a) and ν(b) may not both be finite and by (1) this means a = 0 or b = 0.

(P) First of all 0 ∈ [ν] as by (1) we have ν(0) = ∞ ≥ 1, and 1 ∉ [ν] as ν(1) = 0 < 1 by (U). If now a, b ∈ [ν] and c ∈ R then we compute

ν(a + b) ≥ min{ ν(a), ν(b) } ≥ 1
ν(ac) = ν(a) + ν(c) ≥ ν(a) ≥ 1

And hence a + b and ac ∈ [ν] again. Hence [ν] truly is an ideal of R and [ν] ≠ R as 1 ∉ [ν]. Now suppose ab ∈ [ν] for some a, b ∈ R, that is ν(ab) = ν(a) + ν(b) ≥ 1. Then it cannot be that ν(a) = 0 = ν(b) and hence a ∈ [ν] or b ∈ [ν], which means [ν] is prime.

(iii) It is clear that 1 | a for any a ∈ R and hence 0 ∈ { k ∈ N | pk | a }. Therefore a[p] ∈ N ∪ { ∞ } is well-defined.

• As any pk | 0 it is clear that ν(0) = ∞. Conversely suppose that ν(a) = ∞ for some a ∈ R. Then for any k ∈ N we may choose some ak ∈ R such that a = pkak. Then pk+1ak+1 = a = pkak and as R is an integral domain this implies pak+1 = ak, in particular ak+1 | ak. Going to ideals this yields an ascending chain

a0R ⊆ a1R ⊆ . . . ⊆ akR ⊆ . . .

As R is noetherian there is some s ∈ N such that asR = as+1R, i.e. there is some α ∈ R∗ such that as+1 = αas = αpas+1. If as+1 ≠ 0 this would mean p = α−1 (as R is an integral domain), which is absurd, since p is prime. Thus we have as+1 = 0 and hence a = ps+1as+1 = 0.

• If g = 0 then ν(fg) = ∞ = ν(f) + ν(g), thus we only need to consider 0 ≠ f, g ∈ R. Now write f = pka and g = plb where k = f[p] and l = g[p]; then by maximality we have p ∤ a and p ∤ b and hence p ∤ ab, since p is prime. Hence ν(fg) = ν(pk+lab) = k + l = ν(f) + ν(g).


• If g = 0 then ν(f + g) = ν(f) = min{ ν(f), ν(g) }, thus we only need to consider 0 ≠ f, g ∈ R. As above let us write f = pka and g = plb where k = f[p] and l = g[p]. Without loss of generality we may assume k ≤ l; then f + g = pk(a + pl−kb) and hence ν(f + g) = ν(pk(a + pl−kb)) = k + ν(a + pl−kb) ≥ k = min{ ν(f), ν(g) }. Thus by now ν is a valuation on R and it is normed, since ν(p) = 1.

• Continuing with the previous item we now prove the fifth property, i.e. the case where we even have k < l. If we now had p | a + pl−kb, then there would be some h ∈ R such that a = ph − pl−kb. But this would imply p | a which is absurd, by maximality of k. Hence ν(f + g) = k = min{ ν(f), ν(g) }.

(iv) ν 7→ [ν] is well-defined:
As ν is normed, we have [ν] ≠ 0. Hence [ν] is a non-zero prime ideal, and as R is a PID this even implies maximality.

• m 7→ νp is well-defined:If we have pR = m = qR then there is some α ∈ R∗ with αp = q andas (αp)k | a ⇐⇒ pk | a we then also get νq = ναp = νp.

• m 7→ νp 7→ [νp] = m:It is clear that [νp] = pR and as also m = pR by construction [νp] = m.

• ν 7→ [ν] 7→ νp = ν:
By construction we have [ν] = pR; thus if a ∈ R with p ∤ a then a ∉ pR = [ν] and hence ν(a) = 0. And as ν is normed, there is some q ∈ R such that ν(q) = 1. Hence q ∈ [ν] = pR, which means p | q. Therefore 1 ≤ ν(p) (as p ∈ pR = [ν]) and ν(p) ≤ ν(q) = 1 (as p | q) imply ν(p) = 1. If now f ∈ R is given arbitrarily then we write f = pka where p ∤ a; then ν(f) = kν(p) + ν(a) = k = νp(f). And hence ν = νp.

□
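The valuation a[p] of (2.119.(iii)) is the ordinary p-adic valuation on Z; the computational properties proved above can be checked directly (a sketch, with nu as our name for ν):

```python
def nu(p, a):
    # nu(p, a) = max{ k : p^k divides a }, with nu(p, 0) = infinity
    if a == 0:
        return float('inf')
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k

assert nu(2, 12) == 2
assert nu(2, 12 * 40) == nu(2, 12) + nu(2, 40)      # nu(fg) = nu(f) + nu(g)
assert nu(2, 12 + 40) >= min(nu(2, 12), nu(2, 40))  # nu(f+g) >= min{nu(f), nu(g)}
assert nu(2, 4 + 8) == min(nu(2, 4), nu(2, 8))      # equality when nu(f) < nu(g)
```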

Proof of (2.121):

(i) The properties (1), (2) and (3) of valued rings and discrete valuation rings are identical. Hence it suffices to check that R is non-zero. But by (N) there is some m ∈ R with ν(m) = 1 ≠ ∞ and by (1) this is m ≠ 0. In particular R ≠ 0.

(ii) We have already seen in (??.(i)) that [ν] is a prime ideal of R. Thus by(2.114) it suffices to check R\ [ν] ⊆ R∗. Thus consider any a ∈ R\ [ν],that is ν(a) = 0 = ν(1) and hence ν(1) ≤ ν(a) and ν(a) ≤ ν(1). By(4) this implies 1 | a and a | 1 which again is a ≈ 1 or in otherwords a ∈ R∗.

(iii) By (i) and (??.(i)) we know that R is an integral domain. Thus con-sider a, b ∈ R with a 6= 0, then we need to find some q, r ∈ R suchthat b = qa+ r and (r = 0 or ε(r) < ε(a)). To do this we distinguishthree cases: if b = 0, then we may choose q := 0 and r := 0. If b 6= 0and ν(b) < ν(a), then we may let q := 0 and r := b. Thus if b 6= 0 andν(a) ≤ ν(b), then by (4) we get a | b. That is there is some q ∈ Rsuch that b = qa. Hence we are done by letting r := 0.
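The case distinction of (iii) is completely effective. As an illustration (not from the book), take R = Z_(5), the integers localised at 5, where ν is the 5-adic valuation on Q; the helper names `nu` and `divide` are mine:

```python
from fractions import Fraction

P = 5  # work in Z_(5) = { a/b in Q : 5 does not divide b }, a DVR

def nu(x):
    """5-adic valuation on Q: exponent of 5 in the numerator minus
    the exponent of 5 in the denominator (infinity for 0)."""
    x = Fraction(x)
    if x == 0:
        return float("inf")
    n, d, k = x.numerator, x.denominator, 0
    while n % P == 0:
        n //= P
        k += 1
    while d % P == 0:
        d //= P
        k -= 1
    return k

def divide(b, a):
    """Division with remainder following the three cases of the proof:
    returns (q, r) with b = q*a + r and r = 0 or nu(r) < nu(a)."""
    a, b = Fraction(a), Fraction(b)
    if b == 0:
        return Fraction(0), Fraction(0)   # case b = 0
    if nu(b) < nu(a):
        return Fraction(0), b             # case nu(b) < nu(a)
    return b / a, Fraction(0)             # case nu(a) <= nu(b): then a | b

q, r = divide(Fraction(50), Fraction(3, 2))   # nu(50) = 2 >= nu(3/2) = 0
assert Fraction(50) == q * Fraction(3, 2) + r
assert nu(q) >= 0   # the quotient stays inside Z_(5)
```

In the third case the quotient b/a really lies in R, since ν(b/a) = ν(b) − ν(a) ≥ 0; this is the same argument as in (vi) below.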



(iv) By induction on k ∈ N it is clear that ν(m^k) = kν(m) = k. Thus for ν(a) = k = ν(m^k) we obtain a | m^k and m^k | a. That is a ≈ m^k are associates and hence there is some α ∈ R∗ such that a = αm^k. And the uniqueness of α is clear, as R is an integral domain.

(v) Let 𝔞 ⊴ R be any ideal of R; combining (iii) and (2.65.(iii)) we know that R is a PID. That is there is some a ∈ R such that 𝔞 = aR. In case a = 0 we have 𝔞 = 0; else using (iv) we may write a as a = αm^k for some α ∈ R∗. Thus we have 𝔞 = aR = αm^k R = m^k R as claimed.

(vi) As ν(0) = ∞ ≥ 0 and ν(1) = 0 ≥ 0 we have 0, 1 ∈ R. Also we have ν(−1) = ν((−1)(−1)) = 2ν(−1), such that ν(−1) = 0 and hence −1 ∈ R, too. If now a, b ∈ R then by (3) we get a + b ∈ R and by (2) also ab ∈ R. Altogether R is a subring of F. We will now prove that ν is a discrete valuation on R again. Properties (1), (2), (3) and (N) are inherited trivially. And for (4) we are given a, b ∈ R with ν(a) ≤ ν(b). In case b = 0 we have a · 0 = 0 = b and hence a | b trivially. Thus assume b ≠ 0; then we have 0 ≤ ν(a) ≤ ν(b) < ∞ and hence a ≠ 0 as well. Therefore we may let x := ba^{−1} ∈ F. Then by (2) we get ν(b) = ν(xa) = ν(x) + ν(a), such that ν(x) = ν(b) − ν(a) ≥ 0. This again means x ∈ R and hence a | b. It only remains to verify that F truly is the quotient field of R. Thereby the inclusion "⊇" is trivial. Hence consider any x ∈ F: if ν(x) ≥ 0 then x = x · 1^{−1} and we are done. Else we have ν(x^{−1}) = −ν(x) > 0 and hence x^{−1} ∈ R. Thus we likewise obtain x = 1 · (x^{−1})^{−1}.

□

Proof of (2.123):

• (a) =⇒ (c): If (R, ν) is a discrete valuation ring, then R already is a local ring (with maximal ideal [ν]) and a Euclidean ring, due to (2.121). But the latter implies that R is a PID, by (2.65.(iii)). And of course R cannot be a field, as there is some m ∈ R with ν(m) = 1 (and hence we have both m ≠ 0 and m ∉ R∗, as else ν(m) = 0).

• (c) =⇒ (d): As R is a PID it is a UFD according to (2.65.(ii)). Now consider two prime elements p, q ∈ R, that is the ideals pR and qR are non-zero prime ideals. Hence, as R is a PID, pR and qR already are maximal by (2.65.(i)). But as R also is local this means pR = qR.

• (d) =⇒ (b): Choose any 0 ≠ n ∈ R \ R∗, which is possible as R is not a field. As R is a UFD we may choose some prime factor m of n and let 𝔪 := mR. Now consider any 0 ≠ a ∈ R; as R is a UFD, a admits a primary decomposition a = α p₁ … p_k. But by assumption any prime factor pᵢ is associated to m, that is a admits the representation a = ω m^k for some ω ∈ R∗ and k ∈ N. If we had a ∉ R∗ then we necessarily have k ≥ 1 and hence a ∈ mR = 𝔪. Thus we have proved R \ R∗ ⊆ 𝔪. And as m ∉ R∗ we have 𝔪 ≠ R, such that R is a local ring with maximal ideal 𝔪, according to (2.114). And this again yields R \ 𝔪 = R∗ as claimed. Now suppose 0 ≠ a ∈ ⋂_k m^k R; as we have seen we may write a = ω m^k for some ω ∈ R∗ and k ∈ N. But by assumption on a we also have a = b m^{k+1} for some b ∈ R. Dividing by m^k (R is an integral domain) we get ω = bm and hence m ∈ R∗, a contradiction. Thus we also got ⋂_k m^k R = 0.

• (b) =⇒ (a): Let ν : R → N ∪ {∞} : a ↦ sup{ k ∈ N | m^k | a }; then we will prove that ν is a discrete valuation on R: (1) if a = 0 then for any k ∈ N we clearly have m^k | a and hence ν(0) = ∞. Conversely if ν(a) = ∞, then we have m^k | a for any k ∈ N and hence a ∈ ⋂_k m^k R = 0. (2) and (3) are easy by definition of ν; refer to the proof of (2.119.(ii)) for this. (4) Consider a and b ∈ R with ν(a) ≤ ν(b). If b = 0 then a | b is clear; otherwise we write a = α m^i and b = β m^j for i = ν(a) and j = ν(b). By maximality of i we get m ∤ α and hence α ∈ R \ mR = R∗. Thus we may define h := α^{−1} β m^{j−i} and thereby obtain ha = b, such that a | b. (N) Suppose m² | m; then there was some h ∈ R such that h m² = m and hence hm = 1, which is m ∈ R∗. But in this case we had R∗ = R \ mR = ∅, a contradiction. Thus we have ν(m) ≤ 1, which clearly is ν(m) = 1 (as m | m yields ν(m) ≥ 1).

• (c) =⇒ (f): Any PID is a noetherian UFD, and by (2.55) a UFD is a normal ring. And the maximal ideal 𝔪 is non-zero, as we would else find R∗ = R \ 𝔪 = R \ {0}, which would mean that R was a field. Thus let 𝔭 ⊴ R be any prime ideal of R; if 𝔭 ≠ 0 is non-zero, then 𝔭 already is maximal, due to (2.65.(i)). But in this case we necessarily have 𝔭 = 𝔪, as R was assumed to be local.

• (f) =⇒ (e): First of all 𝔪 is a finitely generated R-module, as R is a noetherian ring. Thus if we had 𝔪 = 𝔪² = 𝔪𝔪 then (as jac R = 𝔪) Nakayama's lemma (3.32) would imply 𝔪 = 0, a contradiction. Thus we have 𝔪² ⊂ 𝔪, that is there is some m ∈ 𝔪 with m ∉ 𝔪². In what follows we will prove 𝔪 = mR. (1) First note that m ≠ 0 (as else m ∈ 𝔪²), thus by assumption and (2.20.(i)) we get

√(mR) = ⋂ { 𝔭 ∈ spec R | mR ⊆ 𝔭 } = 𝔪

In particular 𝔪 ⊆ √(mR) and hence there is some k ∈ N such that 𝔪^k ⊆ mR, by (2.20.(vii)). And of course we may choose k minimal with this property (that is 𝔪^{k−1} ⊈ mR). Clearly k = 0 cannot occur (as m ∈ 𝔪 and hence m is not a unit of R). And if k = 1 we are done, as in this case we get 𝔪 ⊆ mR ⊆ 𝔪. Thus in the following we will assume k ≥ 2 and derive a contradiction. (2) By minimality of k there is some a ∈ 𝔪^{k−1} with a ∉ mR. Let a𝔪 := { ab | b ∈ 𝔪 }; as a ∈ 𝔪^{k−1} we have a𝔪 ⊆ 𝔪^{k−1}𝔪 = 𝔪^k ⊆ mR. Now let x := a/m ∈ E, where E := quot R is the quotient field of R. Then clearly x ∉ R (that is m ∤ a), as else a ∈ mR in contradiction to the choice of a. (3) Now it is easy to see that x𝔪 := { xb | b ∈ 𝔪 } is a subset x𝔪 ⊆ R of R: for any b ∈ 𝔪 we have xb = ab/m ∈ (1/m)mR = R. And in fact x𝔪 even is an ideal x𝔪 ⊴ R of R (0 ∈ x𝔪 is clear, and obviously x𝔪 is closed under +, − and multiplication with elements of R). Now by construction we even get x𝔪 ≠ R (as else there would be some b ∈ 𝔪 such that xb = 1; but this would imply m = ab ∈ a𝔪 ⊆ 𝔪^k ⊆ 𝔪² (as we assumed k ≥ 2), in contradiction to the choice of m). Thus x𝔪 is a proper ideal of R, and as R is a local ring with maximal ideal 𝔪 this implies x𝔪 ⊆ 𝔪. (4) Let us now choose generators of 𝔪, say



𝔪 = b₁R + · · · + bₙR. As x𝔪 ⊆ 𝔪 there are coefficients aᵢⱼ ∈ R (where j ∈ 1…n) such that for any i ∈ 1…n

x bᵢ = aᵢ₁ b₁ + · · · + aᵢₙ bₙ

Let us now define b := (b₁, …, bₙ) and the n × n matrix A := (aᵢⱼ) over R. Then the above equations can be reformulated as Ab = xb, that is b ∈ kn(x𝟙 − A); note that b ≠ 0, since 𝔪 ≠ 0. Thereby (x𝟙 − A) is an n × n matrix over the field E, and as it has a non-trivial kernel we have det(x𝟙 − A) = 0 ∈ E. Thus if f := det(t𝟙 − A) ∈ R[t] is the characteristic polynomial of A then we have f(x) = det(x𝟙 − A) = 0. But as R was assumed to be normal and as f is a normed polynomial over R, this implies x = a/m ∈ R, or in other words a ∈ mR, a contradiction. This contradiction can only be resolved in case k = 1, and then we have already seen 𝔪 = mR.

• (e) =⇒ (c): First of all R is not a field, as 𝔪 is a proper non-trivial ideal of R (that is 0 ≠ 𝔪 ≠ R). It remains to verify that R is a PID, where 𝔪 = mR was assumed to be a principal ideal. To do this we first let 𝔷 := ⋂_k 𝔪^k ⊴ R; as R is noetherian, 𝔷 is a finitely generated R-module. And further we have 𝔪𝔷 = 𝔷 ("⊆" is clear, and conversely if a ∈ 𝔷 then for any k ∈ N there is some h_k ∈ R such that a = h_k m^k; therefore a/m = h_{k+1} m^k for any k ∈ N and hence a/m ∈ 𝔷, which is a ∈ 𝔪𝔷 again). As 𝔪 = jac R the lemma of Nakayama (3.32) implies that 𝔷 = 0. We now define the following map on the ideals of R

ν : 𝔞 ↦ sup{ k ∈ N | 𝔞 ⊆ 𝔪^k } ∈ N ∪ {∞}

Obviously ν is well defined, as 𝔞 ⊆ R = 𝔪^0 for any 𝔞 ⊴ R. And if 𝔞 ≠ R, then 𝔞 is contained in some maximal ideal, which is 𝔞 ⊆ 𝔪, and hence ν(𝔞) ≥ 1. Now suppose ν(𝔞) = ∞; this means 𝔞 ⊆ ⋂_k 𝔪^k = 0 and hence 𝔞 = 0. Of course both 0 and R are principal ideals. Thus consider any ideal 𝔞 ⊴ R with 0 ≠ 𝔞 ≠ R. As we have just seen, this yields 1 ≤ k := ν(𝔞) ∈ N. And as 𝔞 ⊈ 𝔪^{k+1} we may choose some a ∈ 𝔞 with a ∉ 𝔪^{k+1}. As 𝔞 ⊆ 𝔪^k we in particular have a ∈ 𝔪^k, that is a = α m^k for some α ∈ R. If we had α ∈ 𝔪 then a ∈ 𝔪^{k+1} would yield a contradiction. Hence α ∈ R \ 𝔪 = R∗ is a unit. Therefore we found 𝔪^k = m^k R = α m^k R = aR ⊆ 𝔞 ⊆ 𝔪^k, and hence 𝔞 = m^k R is a principal ideal. Altogether R is a PID.

□

Proof of (2.126):

(i) By definition of a local ring it suffices to check R \ R∗ ⊴ R. As R ≠ 0 we have 0 ∈ R \ R∗. Now consider any a, b ∈ R \ R∗ and r ∈ R. Then ra ∈ R \ R∗, as else a^{−1} = r(ra)^{−1}, and in particular −a ∈ R \ R∗. It remains to show that a + b ∈ R \ R∗. Without loss of generality we may assume a | b (else we may interchange a and b). That is b = ah for some h ∈ R and hence a + b = a(1 + h). Thus we have a + b ∈ R \ R∗, as else a^{−1} = (1 + h)(a + b)^{−1}.

(ii) By assumption R is an integral domain. Thus regard any finitely generated ideal 𝔞 ⊴ R. That is there are a₁, …, a_k ∈ R (where we may choose k minimally) such that 𝔞 = a₁R + · · · + a_kR. We need to show that 𝔞 is a principal ideal, that is k = 1. Thus assume k ≠ 1; then 𝔞 ≠ a₁R (as k is minimal) and hence there is some b ∈ 𝔞 such that b ∉ a₁R. Now write b = a₁b₁ + · · · + a_kb_k and let d := a₂b₂ + · · · + a_kb_k. If we had a₁ | d then d ∈ a₁R and hence b = a₁b₁ + d ∈ a₁R, a contradiction. Thus we have d | a₁, as R is a valuation ring. But this means a₁ ∈ dR ⊆ a₂R + · · · + a_kR and hence 𝔞 = a₁R + · · · + a_kR = a₂R + · · · + a_kR, in contradiction to the minimality of k. This only leaves k = 1, which had to be shown.

(iii) (c) =⇒ (b) is true by (2.121.(iii)) and (b) =⇒ (a) is clear. Thus it remains to verify (a) =⇒ (c): as R is a valuation ring, it is a Bezout domain, by (ii). Thus if R also is noetherian it clearly is a PID. But R also is local by (i), and a local PID is a field or a DVR by (2.123).

□

Proof of (2.129):
It is clear that 𝔣 + 𝔤, 𝔣 ∩ 𝔤 and 𝔣𝔤 are R-submodules again. If now r𝔣 ⊆ R and s𝔤 ⊆ R then we can also check property (2) of fraction ideals

(rs)(𝔣 + 𝔤) = s(r𝔣) + r(s𝔤) ⊆ R
r(𝔣 ∩ 𝔤) ⊆ r𝔣 ⊆ R
(rs)(𝔣𝔤) = (r𝔣)(s𝔤) ⊆ R

Further 𝔣 : 𝔤 is an R-submodule of F since: if x, y ∈ 𝔣 : 𝔤 and a ∈ R then (ax + y)𝔤 ⊆ a𝔣 + 𝔣 = 𝔣. Thus it remains to prove that 𝔣 : 𝔤 truly is a fraction ideal of R. To do this let r𝔣 ⊆ R and s𝔤 ⊆ R. As 𝔤 ≠ 0 we may choose some 0 ≠ g ∈ 𝔤; then clearly sg ∈ 𝔤 ∩ R and sg ≠ 0, as R is an integral domain. But now (rsg)(𝔣 : 𝔤) ⊆ R, as for any x ∈ 𝔣 : 𝔤 we get (rsg)x = r(x sg) ∈ r𝔣 ⊆ R.

□

Proof of (2.131):
(b) =⇒ (a) is trivial, by choosing 𝔤 := R : 𝔣. Thus consider (a) =⇒ (b): that is we assume 𝔣𝔤 = R for some fraction ideal 𝔤 of R. Then by construction we have 𝔤 ⊆ { x ∈ F | x𝔣 ⊆ R } = R : 𝔣. Hence R = 𝔣𝔤 ⊆ 𝔣(R : 𝔣) ⊆ R (the latter inclusion has been shown in the remark above), which is R = 𝔣(R : 𝔣). It remains to verify the uniqueness of the inverse 𝔤, but this is simply due to the associativity of the multiplication. We start with 𝔣𝔤 = R = 𝔣(R : 𝔣). Multiplying (from the left) with 𝔤 yields 𝔤 = R𝔤 = (𝔣𝔤)𝔤 = (𝔤𝔣)𝔤 = 𝔤(𝔣𝔤) = 𝔤(𝔣(R : 𝔣)) = (𝔤𝔣)(R : 𝔣) = (𝔣𝔤)(R : 𝔣) = R(R : 𝔣) = R : 𝔣.

□

Proof of (2.132):
(a) =⇒ (b): by assumption (a) we've got 1 ∈ R = 𝔞𝔤. Hence there are some aᵢ ∈ 𝔞 and xᵢ ∈ 𝔤 such that 1 = a₁x₁ + · · · + aₙxₙ ∈ 𝔞𝔤. And as the xᵢ are contained in 𝔤 = 𝔞^{−1} = R : 𝔞 we have xᵢ𝔞 ⊆ R as claimed. (b) =⇒ (a): conversely suppose we are given such aᵢ and xᵢ; then by definition aᵢ ∈ 𝔞 and xᵢ ∈ R : 𝔞. And as a₁x₁ + · · · + aₙxₙ = 1 we also have R ⊆ 𝔞(R : 𝔞) ⊆ R and hence R = 𝔞(R : 𝔞). Thus we next concern ourselves with the implication (b) =⇒ (c): suppose we have chosen some aᵢ ∈ 𝔞 and xᵢ ∈ F as noted under (b); then we define two R-module homomorphisms

ψ : Rⁿ → 𝔞 : (r₁, …, rₙ) ↦ a₁r₁ + · · · + aₙrₙ

ψ′ : 𝔞 → Rⁿ : a ↦ (ax₁, …, axₙ)

Note that the latter is well-defined, since xᵢ𝔞 ⊆ R by assumption. But if now a ∈ 𝔞 is chosen arbitrarily then we get

ψψ′(a) = a a₁x₁ + · · · + a aₙxₙ = a

as a₁x₁ + · · · + aₙxₙ = 1. This is ψψ′ = 𝟙 on 𝔞. Now regard the following (obviously exact) chain of R-modules

0 ⟶ kn ψ ⟶ Rⁿ ⟶ 𝔞 ⟶ 0

where the first map is the inclusion and the second is ψ. As we have just seen, this chain splits, as ψψ′ = 𝟙, and hence 𝔞 is a direct summand of the free R-module Rⁿ (which in particular means that 𝔞 is projective) by virtue of

Rⁿ ≅ 𝔞 ⊕ kn ψ : r ↦ (ψ(r), r − ψ′ψ(r))

Conversely suppose (c), that is M = 𝔞 ⊕ P; then we need to prove (b). To do this we define the R-module homomorphisms ϱ : M ↠ 𝔞 : (a, p) ↦ a and ι : 𝔞 → M : a ↦ (a, 0). Then clearly ϱι = 𝟙. As M is free by assumption it has an R-basis { mᵢ | i ∈ I }. And hence any a ∈ 𝔞 has a representation in the form

a = ϱι(a) = ϱ( ∑_{i∈I} aᵢmᵢ ) = ∑_{i∈I} aᵢ ϱ(mᵢ)

Note that only finitely many of the aᵢ are non-zero, as these are the coefficients of the basis representation of ι(a). This representation allows us to define another R-module homomorphism

πᵢ : 𝔞 → R : a ↦ aᵢ

Now consider any two non-zero elements 0 ≠ a, b ∈ 𝔞; then we see that bπᵢ(a) = πᵢ(ba) = πᵢ(ab) = aπᵢ(b). And hence we may define an element xᵢ ∈ F independently of the choice of 0 ≠ a ∈ 𝔞 by letting

xᵢ := πᵢ(a) / a

Note that only finitely many aᵢ were non-zero and hence only finitely many xᵢ are non-zero. The one property is easy to see: if 0 ≠ a ∈ 𝔞 is any element then we get xᵢa = πᵢ(a) ∈ R and hence xᵢ𝔞 ⊆ R. The other property requires only slightly more effort: just compute

a = ∑_{i∈I} πᵢ(a) ϱ(mᵢ) = ∑_{i∈I} a xᵢ ϱ(mᵢ) = a ( ∑_{i∈I} xᵢ ϱ(mᵢ) )

Recall that only finitely many xᵢ were non-zero, and for these let us denote hᵢ := ϱ(mᵢ). As a was non-zero we may divide by a to see that 1 = ∑ᵢ xᵢhᵢ, which was all that remained to prove.

□

Proof of (2.133):

• (a) =⇒ (b): as 𝔞 = aR ≠ 0 is non-zero, a ≠ 0 is non-zero, and hence we may choose 𝔤 := (1/a)R. Then we clearly get 𝔞𝔤 = (aR)((1/a)R) = R.

• (b) =⇒ (a): as 𝔞 is invertible, it is a projective R-module. But R is a local ring and hence any projective R-module already is free. Now 𝔞 being free implies that 𝔞 is a principal ideal (if the basis contained more than one element, say b₁ and b₂, then b₂b₁ + (−b₁)b₂ = 0 would be a non-trivial linear combination).

□

Proof of (2.135):
We will now prove the equivalences in the definition of Dedekind domains. Among other useful statements we will thereby prove (2.137.(i)) already. Note that the order of proofs is somewhat unusual: (d) ⇐⇒ (e) followed by (e) =⇒ (a) =⇒ (b) =⇒ (c) =⇒ (b) =⇒ (d).

• (d) =⇒ (e): consider any prime ideal 𝔮 ⊴ R and let U := R \ 𝔮, that is R_𝔮 = U^{−1}R. Then R_𝔮 is a local ring by (2.116). And as R was assumed to be normal, so is R_𝔮, by (2.110). And by (2.108) the spectrum of R_𝔮 corresponds to { 𝔭 ∈ spec R | 𝔭 ⊆ 𝔮 }. Yet if 𝔭 ≠ 0, then 𝔭 already is maximal by assumption and hence 𝔭 = 𝔮. Thus we have { 𝔭 ∈ spec R | 𝔭 ⊆ 𝔮 } = { 0, 𝔮 }. Thereby spec R_𝔮 has precisely the prime ideals 0 (corresponding to 0 in R) and its maximal ideal (corresponding to 𝔮 in R). Altogether it satisfies condition (f) of DVRs.

• (e) =⇒ (d): by assumption R is a noetherian integral domain. Thus it remains to prove that it is normal and satisfies spec R = smax R ∪ {0}. By assumption any localisation R_𝔭 is a DVR, hence a PID, hence a UFD and hence normal by (2.55). And as R is an integral domain this implies that R itself is normal, too, by (2.111). Thus consider any prime ideal 0 ≠ 𝔭 ⊴ R. It remains to prove that 𝔭 is maximal. Thus we choose any maximal ideal 𝔪 with 𝔭 ⊆ 𝔪. By assumption R_𝔪 is a DVR, and hence has precisely the prime ideals 0 and 𝔪R_𝔪 (2.123). But as 0 ≠ 𝔭 ⊆ 𝔪, by (2.108) the localisation 𝔭R_𝔪 is a non-zero prime ideal of R_𝔪 and hence 𝔭R_𝔪 = 𝔪R_𝔪. Invoking the correspondence again we find 𝔭 = 𝔪, and hence 𝔭 truly is maximal.

• R DVR =⇒ (a): let 𝔪 be the maximal ideal of R and m a uniformizing parameter, that is 𝔪 = mR. (1) If now 0 ≠ 𝔣 is a non-zero fraction ideal, then 𝔞 := 𝔣 ∩ R ⊴ R is an ideal of R. And by definition there is some 0 ≠ r ∈ R such that r𝔣 ⊆ R. Recall that by (2.121.(iv)) in a DVR r ∈ R is uniquely represented as r = ϱm^k with ϱ ∈ R∗ and k = ν(r) ∈ N. Among those r ∈ R with r𝔣 ⊆ R we now choose one with minimal k = ν(r). (2) We first note that 𝔞 ≠ 0: just choose some 0 ≠ x ∈ 𝔣; then rx ∈ 𝔞, and as r, x ≠ 0 we also have rx ≠ 0. Thus by (2.121.(v)) 𝔞 is of the form 𝔞 = m^n R for some n ∈ N. (3) Now

𝔣 = m^{n−k} R

"⊆": if x ∈ 𝔣 then by (2) we have rx ∈ 𝔞 = m^n R, say rx = m^n p. Therefore x = m^n p/r = ϱ^{−1} p m^{n−k} ∈ m^{n−k} R. "⊇": if k = 0, then r = ϱ ∈ R∗ is a unit and hence 𝔣 = r𝔣 ⊆ R. Thus we get 𝔣 = 𝔞 = m^n R = m^{n−k} R. Now suppose k ≥ 1; as k has been chosen minimally, there is some x = a/b ∈ 𝔣 such that m^{k−1} x ∉ R. That is b ∤ m^{k−1} a, and hence ν(b) > ν(m^{k−1} a) = k − 1 + ν(a) by the properties of discrete valuations. However rx ∈ R and hence b | m^k a. This yields ν(b) ≤ ν(m^k a) = k + ν(a); altogether ν(b) = k + ν(a). Thus if we represent a = α m^i with α ∈ R∗ and i = ν(a), then b = β m^j with β ∈ R∗ and j = ν(b) = k + i. Therefore x = a/b = α β^{−1} m^{−k}, such that rx = α ϱ β^{−1} ∈ R∗. On the other hand rx ∈ 𝔞 = m^n R, and hence 𝔞 = R, that is n = 0. Thus m^{−k} = β α^{−1} x ∈ 𝔣, such that m^{n−k} R = m^{−k} R ⊆ 𝔣. This settles the proof of the equality (3). But now we have seen that 𝔣 = m^{n−k} R is a principal ideal; in particular it is invertible, with inverse 𝔣^{−1} = m^{k−n} R.

• (e) =⇒ (a): if R is a field, then 𝔣 = R is the one and only non-zero fraction ideal of R. And 𝔣 = R trivially is invertible, with inverse 𝔣^{−1} = R. Thus in the following we assume that R is not a field; then we choose any non-zero prime ideal 0 ≠ 𝔭 ⊴ R in R and let U := R \ 𝔭. That is U^{−1}R = R_𝔭, and by assumption R_𝔭 is a DVR. Thus consider a non-zero fraction ideal 0 ≠ 𝔣 of R; then 0 ≠ U^{−1}𝔣 is a fraction ideal of R_𝔭. But as we have just seen this implies that U^{−1}𝔣 is invertible (regarding R_𝔭). But by the remarks in section 2.11 this already yields that 𝔣 is invertible (regarding R).

• (a) =⇒ (b) is trivial, so we will now prove (b) =⇒ (c): by assumption (b) any non-zero ideal is invertible and hence finitely generated (by the remarks in section 2.11). And as 0 trivially is finitely generated, R is a noetherian ring. Now consider any non-trivial ideal 𝔞 ⊴ R, that is 𝔞 ∉ {0, R}, and let 𝔞₀ := 𝔞. As 𝔞 ≠ R we may choose some prime ideal 𝔭₁ ∈ spec R containing 𝔞 ⊆ 𝔭₁, by (2.4.(ii)). Now let

𝔞₁ := 𝔞₀(𝔭₁)^{−1} = 𝔞₀(R : 𝔭₁) ⊆ 𝔞₀(R : 𝔞₀) = R

Here the inclusion R : 𝔭₁ ⊆ R : 𝔞₀ holds because of 𝔞₀ ⊆ 𝔭₁, and 𝔞₀(R : 𝔞₀) = R is true because 𝔞₀ is invertible by assumption (b). Hence 𝔞₁ ⊴ R is an ideal of R and by construction

𝔞 = 𝔞₀ = 𝔭₁𝔞₁

We will now construct an ascending chain 𝔞 = 𝔞₀ ⊆ 𝔞₁ ⊆ … ⊆ 𝔞ₙ ⊆ … of ideals of R. Suppose we have already constructed 𝔞_k ⊴ R and the prime ideals 𝔭₁, …, 𝔭_k ∈ spec R such that 𝔞 = 𝔭₁ … 𝔭_k 𝔞_k. If 𝔞_k = R then we are done, as we have already decomposed 𝔞 = 𝔭₁ … 𝔭_k in this case. Thus assume 𝔞_k ≠ R. Then we may choose any prime ideal 𝔭_{k+1} containing 𝔞_k ⊆ 𝔭_{k+1} again. As for the case k = 1 above, we let 𝔞_{k+1} := 𝔞_k(𝔭_{k+1})^{−1} and find that 𝔞_{k+1} ⊴ R is an ideal of R. Then by construction we get

𝔞 = 𝔭₁ … 𝔭_k 𝔞_k = 𝔭₁ … 𝔭_k 𝔭_{k+1} 𝔞_{k+1}

As R is noetherian this chain stabilizes at some point 𝔞ₙ = 𝔞ₙ₊₁, and by construction this means 𝔞ₙ₊₁ = 𝔞ₙ = 𝔭ₙ₊₁𝔞ₙ₊₁. As 𝔞ₙ₊₁ is a finitely generated R-module the lemma of Dedekind implies that there is some p ∈ 𝔭ₙ₊₁ such that (1 − p)𝔞ₙ₊₁ = 0. But as R is an integral domain and 𝔞ₙ₊₁ ≠ 0 this means that p = 1, which is absurd, since 𝔭ₙ₊₁ ≠ R is prime. Thus the construction fails at this stage, which can only be if 𝔞ₙ = R. And as we have mentioned already this is 𝔞 = 𝔭₁ … 𝔭ₙ.
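In R = Z this recursion is ordinary prime factorisation: nZ ⊆ pZ exactly when p | n, and multiplying by (pZ)^{−1} divides out p. A small sketch of the descent (illustrative only; the function name `decompose` is mine, and Z is of course already a PID):

```python
def decompose(n):
    """Mirror the chain a = p1 ... pk a_k in Z: repeatedly choose a prime
    p with a_k Z contained in pZ (i.e. p | a_k), then a_{k+1} = a_k / p."""
    assert n > 1
    primes, a = [], n
    while a != 1:                 # a_k = R corresponds to a = 1
        p = 2
        while a % p != 0:         # smallest divisor > 1 of a, always prime
            p += 1
        primes.append(p)          # a = p1 ... pk * a_k
        a //= p                   # a_{k+1} := a_k * (pZ)^{-1}
    return primes

assert decompose(360) == [2, 2, 2, 3, 3, 5]   # 360Z = (2Z)^3 (3Z)^2 (5Z)
```

The chain of ideals 360Z ⊆ 180Z ⊆ 90Z ⊆ … ⊆ Z is ascending and stabilizes at R = Z, exactly as the noetherian argument above demands.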

• (b) =⇒ (2.137.(i)): the existence of the factorisation of non-trivial ideals into non-zero prime ideals has been shown in (b) =⇒ (c) already. Hence it only remains to show that the factorisation is unique up to permutations. Thus assume 𝔭₁ … 𝔭_m = 𝔮₁ … 𝔮ₙ where 0 ≠ 𝔭ᵢ and 𝔮ⱼ ⊴ R are non-zero prime ideals. Without loss of generality we assume m ≤ n and use induction on n. The case m = 1 = n is trivial, so we are only concerned with the induction step: by renumbering we may assume that 𝔮₁ is minimal among 𝔮₁, …, 𝔮ₙ. And as

𝔭₁ … 𝔭_m ⊆ ⋂_{i=1}^{m} 𝔭ᵢ ⊆ 𝔮₁

and 𝔮₁ is prime, (2.11.(ii)) implies that there is some i ∈ 1…m such that 𝔭ᵢ ⊆ 𝔮₁. Now conversely, as 𝔭ᵢ is prime and 𝔮₁ … 𝔮ₙ ⊆ 𝔭ᵢ, there is some j ∈ 1…n such that 𝔮ⱼ ⊆ 𝔭ᵢ ⊆ 𝔮₁. By minimality of 𝔮₁ this means 𝔮ⱼ = 𝔮₁ and hence 𝔭ᵢ = 𝔮₁. By renumbering we may assume i = 1. Then multiplying by 𝔭₁^{−1} = 𝔮₁^{−1} yields 𝔭₂ … 𝔭_m = 𝔮₂ … 𝔮ₙ, so we are done by the induction hypothesis.

• (c) =⇒ any invertible prime ideal is maximal: Thus consider some invertible prime ideal 𝔭 ⊴ R, in particular 𝔭 ≠ 0. We will show that for any u ∈ R \ 𝔭 we get 𝔭 + uR = R. Suppose we had 𝔭 + uR ≠ R; then by (c) we could decompose 𝔭 + uR = 𝔭₁ … 𝔭_m for some prime ideals 𝔭ᵢ ⊴ R. Now take to the quotient ring R/𝔭; then by (1.61)

0 ≠ (u + 𝔭)(R/𝔭) = (𝔭 + uR)/𝔭 = (𝔭₁/𝔭) … (𝔭_m/𝔭)

But (u + 𝔭)(R/𝔭), being a principal ideal, is invertible. In particular any of its factors 𝔭ᵢ/𝔭 is invertible, with inverse given to be

(𝔭ᵢ/𝔭)^{−1} = (1/(u + 𝔭))(R/𝔭) ∏_{j≠i} 𝔭ⱼ/𝔭



Now we open up a second line of argumentation: we decompose the ideal 𝔭 + u²R = 𝔮₁ … 𝔮ₙ and take to the quotient ring again

0 ≠ (u² + 𝔭)(R/𝔭) = (𝔭 + u²R)/𝔭 = (𝔮₁/𝔭) … (𝔮ₙ/𝔭)

As ((u + 𝔭)(R/𝔭))² = (u² + 𝔭)(R/𝔭) we have found two decompositions of (u² + 𝔭)(R/𝔭) into prime ideals; to be precise we found

(𝔭₁/𝔭)² … (𝔭_m/𝔭)² = (𝔮₁/𝔭) … (𝔮ₙ/𝔭)

R/𝔭 also satisfies the assumption (c) because of the correspondence theorem, and as any 𝔭ᵢ/𝔭 is invertible the decomposition is unique up to permutations (note that these truly are the only assumptions that were used in the proof of (iii)). Hence (by the uniqueness) we get 2m = n and (𝔭₁/𝔭, 𝔭₁/𝔭, …, 𝔭_m/𝔭, 𝔭_m/𝔭) ⟷ (𝔮₁/𝔭, …, 𝔮ₙ/𝔭). But by the correspondence theorem (1.61) again we can return to the original ring R, getting (𝔭₁, 𝔭₁, …, 𝔭_m, 𝔭_m) ⟷ (𝔮₁, …, 𝔮ₙ). Therefore

𝔭 + u²R = 𝔮₁ … 𝔮ₙ = 𝔭₁𝔭₁ … 𝔭_m𝔭_m = (𝔭 + uR)² = 𝔭² + u𝔭 + u²R

Hence 𝔭 ⊆ 𝔭 + u²R = 𝔭² + u(𝔭 + uR) ⊆ 𝔭² + uR. Thus any p ∈ 𝔭 has a representation as p = a + ub where a ∈ 𝔭² and b ∈ R. Thus ub = p − a ∈ 𝔭, such that b ∈ 𝔭 (as u ∉ 𝔭 and 𝔭 is prime). This is p = a + ub ∈ 𝔭² + u𝔭, and as p has been arbitrary, 𝔭 ⊆ 𝔭² + u𝔭 = 𝔭(𝔭 + uR). Multiplying with 𝔭^{−1} this is R ⊆ 𝔭 + uR and hence finally 𝔭 + uR = R.

• (c) =⇒ (b): we will first show that non-zero prime ideals 0 ≠ 𝔭 ⊴ R are invertible. To do this choose some 0 ≠ a ∈ 𝔭 and decompose aR = 𝔭₁ … 𝔭ₙ. As aR is invertible, with inverse (aR)^{−1} = (1/a)R, any 𝔭ᵢ is invertible, as its inverse is just

(𝔭ᵢ)^{−1} = ((1/a)R) ∏_{j≠i} 𝔭ⱼ

As 𝔭 is prime and 𝔭₁ … 𝔭ₙ = aR ⊆ 𝔭, there is some i ∈ 1…n such that 𝔭ᵢ ⊆ 𝔭. But 𝔭ᵢ is invertible and hence maximal (see above), such that 𝔭ᵢ = 𝔭. Thus 𝔭 is invertible. If now 0 ≠ 𝔞 ⊴ R is any non-zero ideal then we decompose 𝔞 = 𝔭₁ … 𝔭ₙ, and as any 𝔭ᵢ is now known to be invertible, 𝔞 is invertible with inverse 𝔞^{−1} = 𝔭₁^{−1} … 𝔭ₙ^{−1}.

• (b) ⇐⇒ (c) and (b) =⇒ spec R = smax R ∪ {0}: We have just established the equivalence (b) ⇐⇒ (c). And we have seen above that (because of (c)) any invertible prime ideal is maximal. But because of (b) any non-zero prime ideal already is invertible and hence maximal.

• (b) =⇒ (d): we have already seen, in (b) =⇒ (c), that R is noetherian and that spec R = smax R ∪ {0}. Thus it only remains to show that R is normal. If R is a field already then there is nothing to prove; else let F := quot R and regard x = a/b ∈ F. If x is integral over R then R[x] is an R-module of finite rank k + 1 := rank R[x]. Let now r := b^k; as any element of R[x] is of the form a_k x^k + · · · + a₁x + a₀ (for some aᵢ ∈ R) we clearly have r R[x] ⊆ R (in particular R[x] is a fraction ideal of R). And hence for any n ∈ N we have sₙ := r xⁿ ∈ R. For any prime ideal 𝔭 ⊴ R let us now denote by ν_𝔭(𝔞) the multiplicity of 𝔭 in the primary decomposition of 𝔞, i.e. we let

ν_𝔭(𝔭₁ … 𝔭_m) := #{ i ∈ 1…m | 𝔭 = 𝔭ᵢ } ∈ N

Then clearly ν_𝔭 turns the multiplication of ideals into an addition of multiplicities, and hence r aⁿ = sₙ bⁿ turns into

ν_𝔭(rR) + n ν_𝔭(aR) = ν_𝔭(sₙR) + n ν_𝔭(bR)

Hence for any n ∈ N and any prime ideal 𝔭 ⊴ R we get the estimate

ν_𝔭(rR) + n (ν_𝔭(aR) − ν_𝔭(bR)) ≥ ν_𝔭(sₙR) ≥ 0

By choosing n > ν_𝔭(rR) this implies that for any prime ideal 𝔭 we have ν_𝔭(bR) ≤ ν_𝔭(aR). Writing down decompositions of aR and bR we hence find that aR ⊆ bR. But this is b | a, or in other words x ∈ R, which had to be shown.

□

Proof of (2.139):
The proof of the equivalences in the definition (2.138) and the properties (2.139) of valuations of ideals in Dedekind domains is interwoven tightly. Thus we do not attempt to give separate proofs, but verify the properties and equivalences simultaneously.

• 𝔟 | 𝔞 ⇐⇒ 𝔞 ⊆ 𝔟: if 𝔞 = 𝔟𝔠, then 𝔞 ⊆ 𝔟 is clear. Conversely suppose we had 𝔞 ⊆ 𝔟. If additionally 𝔟 = 0 then 𝔞 = 0, too, and hence we may already take 𝔠 := R. Thus suppose 𝔟 ≠ 0 and let 𝔠 := 𝔟^{−1}𝔞, a fraction ideal of R. Then 𝔠 ⊆ 𝔟^{−1}𝔟 = R and hence 𝔠 = 𝔠 ∩ R ⊴ R. But on the other hand 𝔟𝔠 = 𝔟𝔟^{−1}𝔞 = 𝔞 is clear, such that 𝔟 | 𝔞.

(i) In case 𝔪 ∉ M we may pick up 𝔪₀ := 𝔪 and k(0) := 0, reducing this to the case 𝔪 ∈ M. And if 𝔪 ∈ M then by renumbering we may assume that 𝔪 = 𝔪₁. Thus regard 𝔞 := 𝔪₁^{k(1)} … 𝔪ₙ^{k(n)}; then 𝔞 ⊆ 𝔪^{k(1)} is clear and hence ν_𝔪(𝔞) ≥ k(1), by definition of ν_𝔪. Thus assume ν_𝔪(𝔞) > k(1); this means 𝔞 ⊆ 𝔪^{k(1)+1}, and by what we have just shown there is some 𝔠 ⊴ R such that 𝔞 = 𝔪^{k(1)+1}𝔠. Therefore

𝔪₂^{k(2)} … 𝔪ₙ^{k(n)} = 𝔪^{−k(1)}𝔞 = 𝔪𝔠 ⊆ 𝔪

But as 𝔪 is prime this means that there is some i ∈ 2…n such that 𝔪ᵢ^{k(i)} ⊆ 𝔪. Clearly k(i) ≠ 0 (as else R ⊆ 𝔪), and hence 𝔪ᵢ ⊆ 𝔪 by the same argument. But 𝔪ᵢ has been maximal and hence 𝔪ᵢ = 𝔪 = 𝔪₁, in contradiction to the 𝔪ᵢ being pairwise distinct.



(ii) Decompose 𝔞 = 𝔪₁^{k(1)} … 𝔪ₙ^{k(n)} and 𝔟 = 𝔪₁^{l(1)} … 𝔪ₙ^{l(n)} as in the formulation of (iii). Also, as in the proof of (i), we may assume that 𝔪 = 𝔪₁ (by picking up 𝔪₀ := 𝔪, k(0) := 0 and l(0) := 0 if necessary and renumbering). Then it is clear that 𝔞𝔟 = 𝔪₁^{k(1)+l(1)} … 𝔪ₙ^{k(n)+l(n)}. And using (i) the claim follows readily: ν_𝔪(𝔞𝔟) = k(1) + l(1) = ν_𝔪(𝔞) + ν_𝔪(𝔟).

• 𝔟 | 𝔞 ⇐⇒ ∀𝔪 : ν_𝔪(𝔟) ≤ ν_𝔪(𝔞): Clearly if 𝔞 = 𝔟𝔠 then by (ii) we get ν_𝔪(𝔞) = ν_𝔪(𝔟) + ν_𝔪(𝔠) ≥ ν_𝔪(𝔟). Conversely decompose 𝔞 = 𝔪₁^{k(1)} … 𝔪ₙ^{k(n)} and 𝔟 = 𝔪₁^{l(1)} … 𝔪ₙ^{l(n)} once more. By (i) that is k(i) = ν_{𝔪ᵢ}(𝔞) and l(i) = ν_{𝔪ᵢ}(𝔟). Thus by assumption we have l(i) ≤ k(i) and may hence define 𝔠 := 𝔪₁^{k(1)−l(1)} … 𝔪ₙ^{k(n)−l(n)} ⊴ R. Then by construction we immediately get 𝔟𝔠 = 𝔞.

(iii) First let m(𝔪) := min{ν_𝔪(𝔞), ν_𝔪(𝔟)} for any 𝔪 ∈ smax R. Note that for 𝔪 = 𝔪ᵢ we in particular get m(𝔪) = m(i), and if 𝔪 is not among the 𝔪ᵢ then m(𝔪) = 0. And therefore we get

𝔪₁^{m(1)} … 𝔪ₙ^{m(n)} = ∏_𝔪 𝔪^{m(𝔪)}

Now consider any ideal 𝔠 ⊴ R; then by the equivalences we have already proved for 𝔟 | 𝔞 it is evident that we obtain the following chain of equivalent statements

∀𝔪 : ν_𝔪(𝔠) ≤ ν_𝔪(𝔞 + 𝔟)
⇐⇒ 𝔞 + 𝔟 ⊆ 𝔠
⇐⇒ 𝔞 ⊆ 𝔠 and 𝔟 ⊆ 𝔠
⇐⇒ ∀𝔪 : ν_𝔪(𝔠) ≤ ν_𝔪(𝔞) and ν_𝔪(𝔠) ≤ ν_𝔪(𝔟)
⇐⇒ ∀𝔪 : ν_𝔪(𝔠) ≤ m(𝔪)

Thus for a fixed 𝔪 ∈ smax R we may regard 𝔠 := 𝔪^j. And by (i) this satisfies ν_𝔪(𝔠) = j; that is we may take ν_𝔪(𝔠) to be any j ∈ N of our liking. And therefore the above equivalence immediately yields ν_𝔪(𝔞 + 𝔟) = m(𝔪), and hence the claimed equality for 𝔞 + 𝔟.
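For R = Z the sum of two ideals is generated by the gcd, so (ii) and (iii) reduce to familiar facts about prime exponents: they add under multiplication, and the gcd takes the minimum. A quick numerical check (an illustration only; the helper name `nu` is mine):

```python
from math import gcd

def nu(p, n):
    """Multiplicity of the prime p in the non-zero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

a, b = 2**3 * 3**2 * 7, 2 * 3**5 * 11
for p in (2, 3, 5, 7, 11):
    assert nu(p, a * b) == nu(p, a) + nu(p, b)          # property (ii)
    assert nu(p, gcd(a, b)) == min(nu(p, a), nu(p, b))  # property (iii): aZ + bZ = gcd(a,b)Z
```

The second assertion is exactly ν_𝔪(𝔞 + 𝔟) = min{ν_𝔪(𝔞), ν_𝔪(𝔟)} read in Z, where the maximal ideals are the pZ.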

(iv) Consider any two non-zero ideals 𝔞, 𝔟 of R; then by (iii) 𝔞 and 𝔟 are coprime (i.e. 𝔞 + 𝔟 = R) if and only if for any 𝔪 ∈ smax R we get ν_𝔪(𝔞) = 0 or ν_𝔪(𝔟) = 0. In particular for any 𝔪 ≠ 𝔫 ∈ smax R and any k, l ∈ N we see that 𝔪^k and 𝔫^l are coprime. Thus the claim follows immediately from the original chinese remainder theorem (1.92).

(v) If 𝔞 = aR is a principal ideal, then we may take 𝔟 := R; then 𝔞 + 𝔟 = R and 𝔞𝔟 = 𝔞 = aR are satisfied trivially. Thus suppose 𝔞 is no principal ideal (in particular 𝔞 ≠ 0); then we may decompose 𝔞 = 𝔪₁^{k(1)} … 𝔪ₙ^{k(n)} as usual. Now for any i ∈ 1…n pick up some bᵢ ∈ R such that bᵢ ∈ 𝔪ᵢ^{k(i)} but bᵢ ∉ 𝔪ᵢ^{k(i)+1} (thereby 𝔪ᵢ^{k(i)} \ 𝔪ᵢ^{k(i)+1} ≠ ∅ is non-empty, as we else had R = 𝔪ᵢ by multiplication with 𝔪ᵢ^{−k(i)}). Hence by (iv) there is some b ∈ R that is mapped to (bᵢ + 𝔪ᵢ^{k(i)+1}) under the isomorphism of the chinese remainder theorem. That is for any i ∈ 1…n we have b + 𝔪ᵢ^{k(i)+1} = bᵢ + 𝔪ᵢ^{k(i)+1}. In particular we find b ∈ 𝔪ᵢ^{k(i)} and b ∉ 𝔪ᵢ^{k(i)+1} again. This is to say that ν_{𝔪ᵢ}(b) = k(i) for any i ∈ 1…n. That is if we decompose bR then there are some n ≤ N ∈ N, 𝔪ᵢ ∈ smax R and 1 ≤ k(i) ∈ N (where i ∈ n+1…N) such that

bR = ∏_𝔪 𝔪^{k(𝔪)} = ∏_{i=1}^{N} 𝔪ᵢ^{k(i)}

Note that n < N: if we had n = N then bR = 𝔞 already would be a principal ideal, which case has been dealt with already. Thus if we let 𝔟 := 𝔪_{n+1}^{k(n+1)} … 𝔪_N^{k(N)}, then it is evident that 𝔞𝔟 = bR is principal. But also by construction 𝔞 and 𝔟 satisfy ν_𝔪(𝔞) = 0 or ν_𝔪(𝔟) = 0 for any 𝔪 ∈ smax R. And as we have already argued in (iv) this is 𝔞 + 𝔟 = R.

□

Proof of (2.137):
Note that (i) has already been shown while proving the equivalences in the definition of Dedekind domains above. Thus it only remains to verify (ii) to (v), which we will do now:

(v) Let us denote by $\Phi$ the mapping (of course we still have to check the well-definedness) given by $\Phi(a + \mathfrak{m}^k) := a/1 + \mathfrak{m}^k R_{\mathfrak{m}}$. Now consider any $a, b \in R$ and $u, v \not\in \mathfrak{m}$; then in a first step we will prove

$$\frac{a}{u} + \mathfrak{m}^k R_{\mathfrak{m}} = \frac{b}{v} + \mathfrak{m}^k R_{\mathfrak{m}} \iff av - bu \in \mathfrak{m}^k$$

If $av - bu \in \mathfrak{m}^k$ then (as $uv \not\in \mathfrak{m}$) it is clear that $(a/u) - (b/v) = (av - bu)/uv \in \mathfrak{m}^k R_{\mathfrak{m}}$, and thereby also $a/u + \mathfrak{m}^k R_{\mathfrak{m}} = b/v + \mathfrak{m}^k R_{\mathfrak{m}}$. Conversely let us assume $(av - bu)/uv = (a/u) - (b/v) \in \mathfrak{m}^k R_{\mathfrak{m}}$. That is, there are some $m \in \mathfrak{m}^k$ and $w \not\in \mathfrak{m}$ such that $(av - bu)/uv = m/w$ and hence $w(av - bu) = muv \in \mathfrak{m}^k$. Therefore $w(av - bu)R \subseteq \mathfrak{m}^k$, such that by (2.138) and (2.139)

$$k = \nu_{\mathfrak{m}}(\mathfrak{m}^k) \le \nu_{\mathfrak{m}}(w(av - bu)) = \nu_{\mathfrak{m}}(w) + \nu_{\mathfrak{m}}(av - bu)$$

Yet $w \not\in \mathfrak{m}$ and hence $\nu_{\mathfrak{m}}(w) = 0$. Therefore we get $\nu_{\mathfrak{m}}(av - bu) \ge k$, and this translates into $av - bu \in \mathfrak{m}^k$, as claimed. And this immediately yields the equivalencies (taking $u = v = 1$): $a + \mathfrak{m}^k = b + \mathfrak{m}^k$ iff $a - b \in \mathfrak{m}^k$ iff $\Phi(a + \mathfrak{m}^k) = \Phi(b + \mathfrak{m}^k)$. And this has been the well-definedness and injectivity of $\Phi$. Thus it remains to prove the surjectivity of $\Phi$: we are given any $a/u + \mathfrak{m}^k R_{\mathfrak{m}}$ and need to find some $b \in R$ such that $a/u + \mathfrak{m}^k R_{\mathfrak{m}} = b/1 + \mathfrak{m}^k R_{\mathfrak{m}}$. Yet $u \not\in \mathfrak{m}$ and $\mathfrak{m}$ is a maximal ideal, that is $R/\mathfrak{m}$ is a field, and hence there is some $v \not\in \mathfrak{m}$ such that $v + \mathfrak{m} = (u + \mathfrak{m})^{-1}$. That is $1 - uv \in \mathfrak{m}$ and in particular $(1 - uv)^k \in \mathfrak{m}^k$. But by the binomial rule we may compute

$$(1 - uv)^k = (-1)^k (uv - 1)^k = (-1)^k \sum_{i=0}^{k} \binom{k}{i} (uv)^i (-1)^{k-i} = \sum_{i=0}^{k} (-1)^i \binom{k}{i} (uv)^i = 1 + u \sum_{i=1}^{k} (-1)^i \binom{k}{i} u^{i-1} v^i$$


That is, we get $(1 - uv)^k = 1 + uq$ for some adequately chosen $q \in R$. Then we define $b := -aq$; now an easy computation yields $a - bu = a(1 + uq) = a(1 - uv)^k \in \mathfrak{m}^k$. Thus by the above equivalency we find $\Phi(b + \mathfrak{m}^k) = a/u + \mathfrak{m}^k R_{\mathfrak{m}}$.
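The surjectivity construction can be traced numerically. The following sketch assumes the concrete data $R = \mathbb{Z}$, $\mathfrak{m} = 5\mathbb{Z}$, $k = 2$, $a = 7$, $u = 2$ (none of which appear in the original text) and follows the proof step by step.

```python
# Hypothetical data: R = Z, m = 5Z, k = 2, and a fraction a/u with u not in m.
p, k = 5, 2
a, u = 7, 2

v = pow(u, -1, p)                 # v + m = (u + m)^(-1) in the field R/m
assert (1 - u * v) % p == 0       # hence 1 - uv lies in m

q = ((1 - u * v) ** k - 1) // u   # (1 - uv)^k = 1 + uq by the binomial rule
b = -a * q                        # the preimage constructed in the proof

assert a - b * u == a * (1 - u * v) ** k
assert (a - b * u) % p ** k == 0  # a - bu in m^k, so Phi(b + m^k) = a/u + m^k R_m
```

Here $v = 3$, $(1 - uv)^2 = 25$, $q = 12$ and $b = -84$; the final assertion checks $a - bu = 175 \in 25\mathbb{Z} = \mathfrak{m}^2$.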

(ii) (1) Let us first assume that $\mathfrak{a} = \mathfrak{m}^k$ is a power of a maximal ideal $\mathfrak{m} \trianglelefteq_i R$. If $k = 0$, then $\mathfrak{a} = R$ such that $R/\mathfrak{a} = 0$, and we have nothing to prove. Thus assume $k \ge 1$; then by (v) $R/\mathfrak{a}$ is isomorphic to $R_{\mathfrak{m}}/\mathfrak{a}R_{\mathfrak{m}}$. But as $R$ is a Dedekind domain, $R_{\mathfrak{m}}$ is a DVR (by (2.135.(e))) and in particular a PID (by (2.123.(c))). Therefore the quotient $R_{\mathfrak{m}}/\mathfrak{a}R_{\mathfrak{m}}$ is a principal ring (see section 2.6). Thus the isomorphy shows that $R/\mathfrak{a}$ is a principal ring. (2) Now consider any $0 \ne \mathfrak{a} \trianglelefteq_i R$. We have already dealt with the case $\mathfrak{a} = R$ in (1) ($k = 0$). Thus suppose $\mathfrak{a} \ne R$ and decompose $\mathfrak{a} = \mathfrak{m}_1^{k(1)} \cdots \mathfrak{m}_n^{k(n)}$ (with $k(i) \ge 1$). Then by the chinese remainder theorem in (2.139)

$$R/\mathfrak{a} \;\cong_r\; R/\mathfrak{m}_1^{k(1)} \oplus \cdots \oplus R/\mathfrak{m}_n^{k(n)}$$

Now every $R/\mathfrak{m}_i^{k(i)}$ is a principal ring due to case (1). But the direct sum of principal rings is a principal ring again, due to the same remark in section 2.6. Thus the isomorphy shows that $R/\mathfrak{a}$ is a principal ring.

(iii) Consider any $\mathfrak{a} \trianglelefteq_i R$ and $0 \ne a \in \mathfrak{a}$; then by (ii) $R/aR$ is a principal ring and hence $\mathfrak{a}/aR$ is a principal ideal, say $\mathfrak{a}/aR = (b + aR)(R/aR)$ for some $b \in R$. As $b + aR \in \mathfrak{a}/aR$ it is clear that $b \in \mathfrak{a}$, in particular $aR + bR \subseteq \mathfrak{a}$. Now consider any $c \in \mathfrak{a}$, then $c + aR \in \mathfrak{a}/aR = (b + aR)(R/aR)$. That is, there is some $v \in R$ such that $c + aR = (b + aR)(v + aR) = bv + aR$. Hence $c - bv \in aR$, which means that $c - bv = au$ for some $u \in R$. And as $c = au + bv \in aR + bR$ has been arbitrary, this also yields $\mathfrak{a} \subseteq aR + bR$.
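As an illustration of (iii) (this example does not appear in the original text), consider the Dedekind domain $R = \mathbb{Z}[\sqrt{-5}]$, which contains non-principal ideals, so the two-generator bound of (iii) is sharp:

```latex
% Example (not from the text): in R = Z[sqrt(-5)] the prime ideal
%   p = 2R + (1 + sqrt(-5))R,   with   p^2 = 2R,
% is not principal, since the norm N(x + y*sqrt(-5)) = x^2 + 5y^2 never
% takes the value 2. Yet, as (iii) predicts, starting from a := 2 in p
% one finds a second generator b = 1 + sqrt(-5) such that
\[
  \mathfrak{p} \;=\; aR + bR \;=\; 2R + \bigl(1 + \sqrt{-5}\,\bigr)R
\]
% So every ideal of a Dedekind domain is generated by two elements, the
% first of which may even be prescribed arbitrarily within the ideal.
```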

(iv) Let us denote the set of all prime ideals that are not principal by $P$, formally $P := \{\, \mathfrak{p} \in \operatorname{spec} R \mid \nexists\, p \in R : \mathfrak{p} = pR \,\}$. (1) Clearly $n := \#P < \infty$, as $P \subseteq \operatorname{spec} R \setminus \{0\} = \operatorname{smax} R$ and the maximal spectrum of $R$ is finite by assumption. If we have $n = 0$ then any prime ideal is principal. Thus if $\mathfrak{a} \trianglelefteq_i R$ is any ideal, decompose $\mathfrak{a} = \mathfrak{p}_1 \ldots \mathfrak{p}_k$, where the $\mathfrak{p}_i \trianglelefteq_i R$ are prime (using (c)). Then $\mathfrak{p}_i = p_i R$ is principal and hence $\mathfrak{a} = aR$ is principal, too, by letting $a := p_1 \ldots p_k$. Thus it suffices to assume $n \ge 1$ and to derive a contradiction. (2) Let us denote $\mathfrak{b} := \mathfrak{p}_1 \ldots \mathfrak{p}_n$ and $\mathfrak{a} := \mathfrak{p}_1 \mathfrak{b} = \mathfrak{p}_1^2 \mathfrak{p}_2 \ldots \mathfrak{p}_n$. Then $\mathfrak{a} \ne \mathfrak{b}$, as else $R = \mathfrak{b}\,\mathfrak{b}^{-1} = \mathfrak{a}\,\mathfrak{b}^{-1} = \mathfrak{p}_1$, a contradiction (recall that $\mathfrak{b} \ne 0$, as $R$ is an integral domain, and hence we may apply (b)). (3) Next we claim that there is some $u \in \mathfrak{p}_1$ such that $u \not\in \mathfrak{p}_1^2$, $u \not\in \mathfrak{p}_i$ for any $i \in 2 \ldots n$ and $\mathfrak{p}_1 = \mathfrak{a} + uR$. In fact $\mathfrak{a} \ne 0$, as $R$ is an integral domain, and hence $R/\mathfrak{a}$ is a principal ring by (ii). And as $\mathfrak{p}_1/\mathfrak{a}$ is an ideal of $R/\mathfrak{a}$, there is some $u \in R$ such that

$$\mathfrak{p}_1/\mathfrak{a} = (u + \mathfrak{a})\bigl(R/\mathfrak{a}\bigr) = \bigl(\mathfrak{a} + uR\bigr)/\mathfrak{a}$$

By the correspondence theorem we find $\mathfrak{p}_1 = \mathfrak{a} + uR$, as $\mathfrak{a} \subseteq \mathfrak{p}_1$. Thus suppose $u \in \mathfrak{p}_1^2$; then $\mathfrak{p}_1 = \mathfrak{a} + uR \subseteq \mathfrak{p}_1^2 \subseteq \mathfrak{p}_1$ and hence $\mathfrak{p}_1 = \mathfrak{p}_1^2$. Dividing by $\mathfrak{p}_1$ we would find $R = \mathfrak{p}_1$, a contradiction. And if we


suppose $u \in \mathfrak{p}_i$, then likewise $\mathfrak{p}_1 = \mathfrak{a} + uR \subseteq \mathfrak{p}_i$. But as $\mathfrak{p}_1$ already is maximal, this would imply $\mathfrak{p}_1 = \mathfrak{p}_i$, a contradiction (to $i \ne 1$). (4) Now we prove that $uR = \mathfrak{p}_1 \mathfrak{q}_1 \ldots \mathfrak{q}_m$ for some prime ideals $\mathfrak{q}_j \not\in P$. First of all $uR$ admits a decomposition $uR = \mathfrak{q}_0 \ldots \mathfrak{q}_m$ according to (b). Then $\mathfrak{q}_0 \ldots \mathfrak{q}_m = uR \subseteq \mathfrak{p}_1$ and hence $\mathfrak{q}_j \subseteq \mathfrak{p}_1$ for some $j \in 0 \ldots m$, as $\mathfrak{p}_1$ is prime. Without loss of generality we may assume $j = 0$. Then $\mathfrak{q}_0 \subseteq \mathfrak{p}_1$, and this means $\mathfrak{q}_0 = \mathfrak{p}_1$, as $\mathfrak{q}_0$ is maximal (it is non-zero, prime). Thus we have established the decomposition $uR = \mathfrak{p}_1 \mathfrak{q}_1 \ldots \mathfrak{q}_m$; now assume $\mathfrak{q}_j \in P$ for some $j \in 1 \ldots m$. If we had $\mathfrak{q}_j = \mathfrak{p}_i$ for some $i \in 2 \ldots n$, then $u \in uR \subseteq \mathfrak{p}_i$, in contradiction to (3). And if we had $\mathfrak{q}_j = \mathfrak{p}_1$, then $u \in uR \subseteq \mathfrak{p}_1^2$, in contradiction to (3) again. (5) Thus the $\mathfrak{q}_j \not\in P$ are principal ideals, say $\mathfrak{q}_j = q_j R$. Then we let $q := q_1 \ldots q_m$ and thereby find $uR = \mathfrak{p}_1 (q_1 R) \ldots (q_m R) = \mathfrak{p}_1 (qR)$. Thus $uR \subseteq qR$, say $u = qv$ for some $v \in R$. Then dividing by $qR$ we find $\mathfrak{p}_1 = (u/q)R = vR$, such that $\mathfrak{p}_1$ is principal, contradicting $\mathfrak{p}_1 \in P$ at last.

□

Proof of (2.65):
It remains to prove statement (iv) of (2.65), as parts (i), (ii) and (iii) have already been proved on page 437. So by assumption every prime ideal $\mathfrak{p}$ of $R$ is generated by one element, $\mathfrak{p} = pR$. In particular any prime ideal $\mathfrak{p}$ is generated by finitely many elements. Thus $R$ is noetherian by property (d) in (2.27). Further $0 \ne \mathfrak{p} = pR$ implies that $p$ is prime by (2.48.(ii)) (and as $R$ is an integral domain this also means that $p$ is irreducible, due to (2.48.(v))). In particular any non-zero prime ideal $0 \ne \mathfrak{p} \trianglelefteq_i R$ contains a prime element $p \in \mathfrak{p}$ (namely $p$ with $\mathfrak{p} = pR$). And thus $R$ is a UFD according to property (e) in (2.50). And by (??) this implies that $R$ even is a normal domain. Now regard a non-zero prime ideal $0 \ne \mathfrak{p} = pR \trianglelefteq_i R$ again and suppose $\mathfrak{p} \subseteq \mathfrak{a} \trianglelefteq_i R$ for some ideal $\mathfrak{a}$ of $R$. If $\mathfrak{a} \ne R$ then $\mathfrak{a}$ is contained in some maximal ideal $\mathfrak{a} \subseteq \mathfrak{m} \trianglelefteq_i R$ of $R$. And as any maximal ideal $\mathfrak{m}$ is prime, by assumption there is some $m \in R$ such that $\mathfrak{m} = mR$. Now $pR = \mathfrak{p} \subseteq \mathfrak{a} \subseteq \mathfrak{m} = mR$ implies $m \mid p$. That is, there is some $q \in R$ such that $qm = p$. But as $p$ is irreducible, this implies $q \in R^*$ (as $m \in R^*$ would imply $\mathfrak{m} = R$). That is $p \approx m$ and hence $\mathfrak{p} = \mathfrak{m}$, such that $\mathfrak{p} = \mathfrak{a}$ as well. This means that $\mathfrak{p}$ already is a maximal ideal of $R$. Altogether we have proved that $R$ is a normal, noetherian domain in which any non-zero prime ideal is maximal. But this means that $R$ is a Dedekind domain according to (c) in (2.135). Now consider any ideal $\mathfrak{a} \trianglelefteq_i R$. Then due to (b) in (2.135) there are prime ideals $\mathfrak{p}_1, \ldots, \mathfrak{p}_n \trianglelefteq_i R$ such that $\mathfrak{a} = \mathfrak{p}_1 \ldots \mathfrak{p}_n$. By assumption there are $p_i \in R$ such that $\mathfrak{p}_i = p_i R$ and hence $\mathfrak{a} = (p_1 R) \ldots (p_n R) = (p_1 \ldots p_n)R$ is a principal ideal, too. As $\mathfrak{a}$ has been arbitrary, this finally means that $R$ is a PID.

□

Proof of (2.117):
"≤": as $R$ is noetherian we may find finitely many $m_1, \ldots, m_k \in \mathfrak{m}$ such that $\mathfrak{m} = Rm_1 + \cdots + Rm_k$. We now choose $k$ minimal with this property, that is $k = \operatorname{rank}_R(\mathfrak{m})$. Then it is clear that $\{\, m_i + \mathfrak{m}^2 \mid i \in 1 \ldots k \,\}$ is a generating set of $\mathfrak{m}/\mathfrak{m}^2$ (given any $n + \mathfrak{m}^2 \in \mathfrak{m}/\mathfrak{m}^2$ we may choose $a_1, \ldots, a_k \in R$ such that $n = a_1 m_1 + \cdots + a_k m_k$ and thereby $n + \mathfrak{m}^2 = a_1(m_1 + \mathfrak{m}^2) + \cdots + a_k(m_k + \mathfrak{m}^2)$). And thereby we have

$$\dim_E\bigl(\mathfrak{m}/\mathfrak{m}^2\bigr) \le \#\{\, m_i + \mathfrak{m}^2 \mid i \in 1 \ldots k \,\} \le k = \operatorname{rank}_R(\mathfrak{m})$$

"≥": choose any $E$-basis $\{\, m_i + \mathfrak{m}^2 \mid i \in 1 \ldots k \,\}$ of $\mathfrak{m}/\mathfrak{m}^2$; then we let $\mathfrak{a} := Rm_1 + \cdots + Rm_k$. Then by construction we have $\mathfrak{m} = \mathfrak{a} + \mathfrak{m}^2$ (it is clear that $\mathfrak{m}^2 \subseteq \mathfrak{m}$, and as any $m_i \in \mathfrak{m}$ we also have $\mathfrak{a} \subseteq \mathfrak{m}$, together $\mathfrak{a} + \mathfrak{m}^2 \subseteq \mathfrak{m}$. Conversely, if we are given any $n \in \mathfrak{m}$, then we may choose $a_i + \mathfrak{m} \in E$ such that $n + \mathfrak{m}^2 = \sum_i (a_i + \mathfrak{m})(m_i + \mathfrak{m}^2) = \bigl(\sum_i a_i m_i\bigr) + \mathfrak{m}^2$. Hence we have $n - \sum_i a_i m_i \in \mathfrak{m}^2$, which means $n \in \mathfrak{a} + \mathfrak{m}^2$). Now remark that $\operatorname{jac} R = \mathfrak{m}$, as $R$ is a local ring. Further we have $\mathfrak{m}^2 = \mathfrak{m}\,\mathfrak{m}$, and $\mathfrak{m}$ is a finitely generated $R$-module, as $R$ is noetherian. Thus by the lemma of Nakayama (3.32.(??)) we find $\mathfrak{m} = \mathfrak{a}$ and in particular $\operatorname{rank}_R(\mathfrak{m}) \le k = \dim_E(\mathfrak{m}/\mathfrak{m}^2)$.

□
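A minimal computational sketch of the "≤" direction, assuming the concrete case $R = \mathbb{Q}[x,y]$ localized at $\mathfrak{m} = (x, y)$ with residue field $E = R/\mathfrak{m} = \mathbb{Q}$ (this setting is not taken from the text): modulo $\mathfrak{m}^2$ each generator is reduced to its linear part, so $\dim_E(\mathfrak{m}/\mathfrak{m}^2)$ is the rank of the matrix of linear coefficients.

```python
# Hypothetical example: m = (x, y) in Q[x, y] localized at the origin.
# A (redundant) generating set of m, and the rank of its linear parts.
import sympy as sp

x, y = sp.symbols('x y')
gens = [x + x**2, y, x*y + y**3]     # third generator is superfluous

def linear_part(g):
    # image of g in m/m^2: the coefficients of x and y in g
    return [sp.diff(g, var).subs({x: 0, y: 0}) for var in (x, y)]

M = sp.Matrix([linear_part(g) for g in gens]).T
print(M.rank())   # prints 2 = dim_E(m/m^2) = rank_R(m)
assert M.rank() == 2
```

The redundant third generator contributes the zero column, so the rank of the matrix (here 2) equals the size of a minimal generating set of $\mathfrak{m}$, exactly as (2.117) asserts.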
