
CORE MODEL INDUCTION NOTES

Nam Trang

[email protected]

Department of Mathematics

University of North Texas, Denton, TX

Abstract

The notes contain a fairly complete proof of Steel’s theorem that PFA implies the Axiom of

Determinacy (AD) holds in L(R). The set-up is somewhat different (in various places); the goal

is to set up a framework that allows one to generalize the proof to other situations and possibly

go beyond L(R). We also include exercises that help guide the reader through various other

similar core model inductions.

1. Basic notations

Set up: the following hypotheses are considered

(i) κ is a singular strong limit cardinal and □κ fails. (CMI below κ, in some V^{Col(ω,γ)} for some γ < κ with cof(κ) < γ and γ^ω = γ.)

(ii) κ^ω = κ and for all α ∈ [κ^+, (2^κ)^+], □(α) fails (e.g. if PFA holds, one can take κ = ω2). (CMI in V^{Col(ω,κ)}.)

(iii) κ is measurable and □κ fails. (CMI in V^{Col(ω,<κ)}.)
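For quick reference, the three set-ups and the collapses in which the core model induction (CMI) is run can be summarized as follows; this merely restates (i)-(iii) above in LaTeX form.

\begin{align*}
\text{(i)}\quad & \kappa \text{ singular strong limit},\ \square_\kappa \text{ fails} && \text{CMI in } V^{\mathrm{Col}(\omega,\gamma)},\ \mathrm{cof}(\kappa)<\gamma<\kappa,\ \gamma^\omega=\gamma,\\
\text{(ii)}\quad & \kappa^\omega=\kappa,\ \square(\alpha) \text{ fails for all } \alpha\in[\kappa^+,(2^\kappa)^+] && \text{CMI in } V^{\mathrm{Col}(\omega,\kappa)},\\
\text{(iii)}\quad & \kappa \text{ measurable},\ \square_\kappa \text{ fails} && \text{CMI in } V^{\mathrm{Col}(\omega,<\kappa)}.
\end{align*}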

We want to get AD in L(R) from any of the hypotheses above. In these notes, we will work mainly

with the set-up of (i). We will include exercises that help the reader work through the core model

induction from the hypotheses of (ii) and (iii).

2. Stack of mice

Definition 2.1. Let A be a set of ordinals. An A-premouse M is countably iterable if whenever

N is countable and elementarily embeds into M, N is ω1 + 1-iterable.

Lp(A) =⋃{M : M is a countably iterable A-premouse such that M is sound and ρω(M) =

o(A)}.

Exercise 2.2. Show that if M, N are countably iterable, sound A-premice and both project to o(A), then either M ⊴ N or N ⊴ M.


Lemma 2.3. Suppose κ is as in (i). Let A ⊆ κ. Then cof(Lp(A)) < κ.

Proof. We may as well assume o(A) = κ. Use the Schimmerling-Zeman construction to see that Lp(A) ⊨ □κ. That □κ fails in V implies o(Lp(A)) < κ^+. Since κ is singular, cof(Lp(A)) < κ.
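Schematically, the proof of Lemma 2.3 is the following chain of implications; this is a restatement of the argument above, not an additional claim.

\[
\mathrm{Lp}(A)\models\square_\kappa \ \text{ and } \ V\models\neg\square_\kappa
\ \Longrightarrow\ o(\mathrm{Lp}(A))<\kappa^{+}
\ \Longrightarrow\ \mathrm{cof}\big(o(\mathrm{Lp}(A))\big)<\kappa
\quad (\text{since } \kappa \text{ is singular}).
\]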

Exercise 2.4. (a) Suppose κ is as in (ii) and A ⊆ κ. Then cof(Lp(A)) ≤ κ.

(b) Suppose κ is as in (iii) and A ⊆ κ. Then cof(Lp(A)) < κ.

Remark 2.5. The conclusions in Lemma 2.3 and Exercise 2.4 are actually what we need for our

proof. These are various forms of the failure of covering for Lp.

One can also define the stack Lp+(A) of M such that M is sound, ρω(M) = o(A), and M is ω1 + 1-iterable after collapsing A to ω (and other variations, like Lpκ(A), the stack of M such that M is sound, ρω(M) = o(A), and for any π : N → M such that |N| < ω1 in V [g] where g ⊆ Col(ω,< κ)

is V -generic, N is ω1 + 1-iterable in V [g]). It often comes up in CMI applications beyond L(R)

that we need to know the relationships between the different stacks.

Exercise 2.6. (i) Do the conclusions in Lemma 2.3 and Exercise 2.4 hold for Lp+(A)?

(ii) Suppose κ is as in (iii) and A ⊆ κ. How are the stacks Lp(A), Lp+(A), Lpκ(A) related?

3. Scales in L(R)

Again, we work in an appropriate V [g] where g is defined according to whether we are in case

(i), (ii), or (iii). The coarse mouse-capturing hypothesis (at α) W ∗α says roughly that given a set A ⊆ R, as soon as scales on A and ¬A appear in Jα(R), a coarse mouse term capturing A, with iteration strategy in Jα(R), appears. We need to analyze how scales appear in L(R) under

appropriate determinacy hypotheses.

Let LST+ = LST∪{R}. So LST+ is just the language of set theory augmented by an extra

constant symbol R. This symbol is intended to be interpreted as R.

Definition 3.1. An ordinal β is critical just in case there is some U ⊆ R such that U,R\U admit

scales in Jβ+1(R) but U admits no scales in Jβ(R).

Definition 3.2. We say that Jα(R) ≺R1 Jβ(R) if for any Σ1 formula ϕ(v) in LST+, for any real x,

Jα(R) ⊨ ϕ[x] ⇔ Jβ(R) ⊨ ϕ[x].

Definition 3.3. [α, β] is a Σ1 gap in L(R) if

1. Jα(R) ≺R1 Jβ(R),

2. for any γ < α, Jγ(R) ⊀R1 Jα(R),

3. for any γ > β, Jβ(R) ⊀R1 Jγ(R)


In a core model induction through L(R), it suffices to show W ∗β+1 holds for all critical β to obtain

W ∗α for all α. In the next section, we will see what this has to do with proving determinacy

in L(R). The scales analysis in [12] gives us the following. If β is critical then β + 1 is critical. If

β is a limit of critical ordinals, then β is critical iff Jβ(R) is not admissible. Suppose β is critical,

then the following are the possibilities:

(A) β = η + 1 for some η.

(B) β is limit of critical ordinals and either

(a) cof(β) = ω or

(b) cof(β) > ω but Jβ(R) is not admissible.

(C) α = sup{η : η is critical} < β and either

(a) [α, β] is a Σ1 gap, or

(b) β − 1 exists and [α, β − 1] is a Σ1 gap.

To prove W ∗β+1, we need to feed truth at the bottom of the Levy hierarchy over Jβ(R) into

mice. In cases (A), (B)(a), Σ1^{Jβ(R)} is a countable union of sets in Jβ(R); to capture Σ1^{Jβ(R)}, we just

need to put together countably many mice given by the inductive hypothesis.

Case (B)(b) requires a new mouse operator to capture Σ1^{Jβ(R)}. We will discuss the so-called

diagonal operator later.

Case (C)(a) is the weak gap case and (C)(b) is the strong gap case. We need the following facts

to deal with the gap cases.

Theorem 3.4 (Reflection Theorem, Martin, cf. [2]). Assume W ∗β , where Case (C) holds at β. Then

for any x, y ∈ R, if x ∈ ODγ(y) for some γ < β, then x ∈ ODγ(y) for some γ < α.

Theorem 3.5 (Scales Existence Theorem, [12]). Assume W ∗β and β is as in Case (C). Then

1. every set of reals in Jβ(R) admits a scale whose individual norms are in Jβ(R).

2. letting n be the least such that ρn(Jβ(R)) = R and U any boldface Σn^{Jβ(R)} set of reals, U = ⋃_k Uk where Uk ∈ Jβ(R) for each k.

Definition 3.6 (Self-justifying system). A self-justifying system (sjs) is a countable set A ⊆ ℘(R)

which is closed under complements and scales (i.e., every A ∈ A admits a scale all of whose associated prewellorderings belong to A).

Thus, if W ∗β holds and β is as in case (C), then for any set of reals A ∈ Jβ(R), there is a sjs A containing A. The set coding truth at the bottom of the Levy hierarchy over Jβ(R) which we feed into our mice will be Σ, an iteration strategy guided by a sjs A for a mouse M with a Woodin cardinal; M will be full with respect to mice with iteration strategy in Jα(R). The mice which witness W ∗β+1 will be M_n^{Σ,#} for all n. Here we need to relativize core model theory to Σ-mice.


4. Capturing

4.1. Capturing and determinacy

Definition 4.1. Let A ⊆ R, let (M,Σ) be a (countable) mouse pair, and let δ be a cardinal in M.

(a) (M,Σ) term captures A at δ if there is a term τ ∈ M^{Col(ω,δ)} such that whenever i : M → N is according to Σ, and g ⊆ Col(ω, i(δ)) is N-generic, then A ∩ N[g] = i(τ)^g.

(b) (M,Σ) Suslin captures A at δ if there is a pair of trees (T,U) ∈ M such that whenever i : M → N is according to Σ, and g ⊆ Col(ω, i(δ)) is N-generic, then A ∩ N[g] = p[i(T)]^{N[g]} = R^{N[g]} \ p[i(U)].

In the above, Σ is an ω1 + 1-iteration strategy for M. M need not be fine structural.

Without Σ, we say that M term (Suslin) understands A at δ.

Lemma 4.2. Suppose δ0 < δ1 are Woodin cardinals in M and A ⊆ R is Suslin captured by (M,Σ)

at δ1. Then A is determined.

Proof. Let (T,U) witness that A is Suslin captured at δ1. Then M ⊨ “p[T] = A is homogeneously Suslin”. (Brief proof: by Woodin, T, U are weakly κ-homogeneous in M for all κ < δ1. By Martin-Steel, A is κ-homogeneously Suslin in M for all κ < δ0.)

So M ⊨ “A is determined”. Say τ ∈ M is a winning strategy for player I. We claim that in V, τ is I's winning strategy. Suppose y is a play by II defeating τ in V. Let i : M → N come from a y-genericity iteration according to Σ. Let g ⊆ Col(ω, i(δ0)) be N-generic such that y ∈ N[g]. In N[g], τ(y) ∉ p[i(T)], so τ(y) ∈ p[i(U)]. By absoluteness of wellfoundedness, N ⊨ ∃y τ(y) ∈ p[i(U)]. Contradiction.

Lemma 4.3 (Neeman, [4, 5]). Suppose δ is a Woodin cardinal in M and A ⊆ R is Suslin captured

by (M,Σ) at δ. Then A is determined.

Lemma 4.4 (Neeman). Suppose δ is a Woodin cardinal in M and A ⊆ R is term captured by

(M,Σ) at δ. Then A is determined.

Proof sketch. The lemmata follow from [5, Lemmata 2.2, 2.5]. Exercise: Read the statements of

[5, Lemmata 2.2, 2.5] and relevant discussions to verify this.

Lemma 4.5. Suppose δ0 < δ1 are M -cardinals such that δ1 is a Woodin cardinal in M and

A ⊆ R× R is term captured by (M,Σ) at δ1. Then ∃RA,∀RA are term captured by (M,Σ) at δ0.

Proof. Let τ ∈ M be a Col(ω, δ0) × Col(ω, δ1)-term that captures A. Let σ be a Col(ω, δ0)-term in M defined as follows: for any p ∈ Col(ω, δ0) and any ρ ∈ H^M_{δ0^+},

(p, ρ) ∈ σ ⇔ ∃q ∈ Col(ω, δ1) such that (p, q) ⊩ ∃y (y, ρ^g) ∈ τ^{g×h},

where g and h name the Col(ω, δ0)- and Col(ω, δ1)-generics respectively.


σ is a term for ∃RA. For ∀RA, note that ∀RA = ¬∃R¬A and if M captures some set B by τ then

M captures ¬B by a term easily definable from τ .

Exercise: Show that the term σ works for ∃RA. Note that we need δ1 to be Woodin to verify that σ captures ∃RA: for any real x ∈ ∃RA, let y be such that (y, x) ∈ A; we need to iterate M by Σ to make y generic at δ1, and this is where the Woodinness of δ1 is used.

Example 4.6. M_n term captures Σ^1_{n+1} and Π^1_{n+1} sets at its bottom Woodin cardinal (why?). By the above lemmas, if M_n^#(x) exists for all reals x (and all n < ω), then all projective sets are determined.
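For orientation, the level-by-level pattern behind Example 4.6 is the classical theorem of Neeman and Woodin, stated here without proof:

\[
\forall x\in\mathbb{R}\ \big(M_n^{\#}(x) \text{ exists and is } \omega_1\text{-iterable}\big)
\ \Longrightarrow\ \boldsymbol{\Pi}^1_{n+1}\text{-determinacy},
\]

so if M_n^{#}(x) exists for all reals x and all n < ω, then all projective sets are determined.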

Exercise 4.7 (Hard). Suppose δ is Woodin in M and (M,Σ) Suslin captures A ⊆ R×R at δ. Then

for all η < δ, (M,Σ) Suslin understands ∀RA, ∃RA at η, i.e. there is a tree T for ∀RA (similarly

for ∃RA) in M such that p[T ] ∩M [g] = ∀RA ∩M [g] for g ⊆ Col(ω, η) M -generic.

The following lemmata will be used substantially in the core model induction arguments of the

last section of these notes.

Lemma 4.8 (Term condensation for sjs). Let A be a sjs and let N ∈ M be transitive models of a large fragment of ZFC. Suppose {τA : A ∈ A} ∈ M is such that for a comeager set C of Col(ω,N)-generics g over M, for each A ∈ A, τ_A^g = A ∩ M[g]. Then for any Σ1-elementary π : M∗ → M whose range contains all relevant objects, letting (N∗, σA) = π^{−1}(N, τA), whenever g ⊆ Col(ω,N∗) is M∗-generic, for all A ∈ A,

σ_A^g = A ∩ M∗[g].

Proof. Fix A ∈ A and let (ψn : n < ω) be a scale on A such that the corresponding prewellorderings (≤n : n < ω) are in A. Let τn ∈ M be the term for the prewellordering ≤n (i.e. for any G ∈ C, τ_n^G = ≤n ∩ M[G]). Let φn ∈ M be the term for the norm ψn. Let Un ∈ M be the term for the n-th level of the tree associated with these norms, i.e. for all G ∈ C,

U_n^G = {(x↾n, φ_0^G(x), . . . , φ_n^G(x)) : x ∈ M[G] ∩ A}.

It is easy to see that U_n^G ∈ M and is independent of the M-generic G (i.e. for any two M-generics G0, G1 and any ~a, ~a ∈ U_n^{G0} iff ~a ∈ U_n^{G1}). Exercise: Verify this.

Let U be the term for the tree whose n-th level is Un.

Claim 4.9. For any M-generic G, A ∩ M[G] ⊆ p[U^G] ⊆ A.

Proof. A ∩ M[G] ⊆ p[U^G] is obvious. Suppose (x, f) ∈ [U^G]. For each n < ω, let xn ∈ A ∩ M[G] be such that x↾n = xn↾n and (xn↾n, φ_0^G(xn), . . . , φ_{n−1}^G(xn)) ∈ U_n^G with φ_i^G(xn) = f(i) for all i < n. For any i, φ_i^G(xn) is eventually constant as n → ω. Since (ψi : i < ω) is a scale on A, x ∈ A. So p[U^G] ⊆ A.

By the claim and elementarity, over M∗, the following holds:

∅ ⊩_{Col(ω,N∗)} ∀x (x ∈ σA → ∀n ((x↾n, π^{−1}(φ0)(x), . . . , π^{−1}(φ_{n−1})(x)) ∈ π^{−1}(Un))).


Let U∗ be the term for a tree whose levels are the π^{−1}(Un). It is easy to see that for any M∗-generic G ⊆ Col(ω,N∗), σ_A^G ⊆ p[(U∗)^G] ⊆ A. Exercise: Verify this.

Run the argument above for ¬A ∈ A. As a result, we get σ_{¬A}^G ⊆ ¬A. This gives us the desired conclusion.

Lemma 4.10 (Term capturing for sets in gaps). Let α, β be such that [α, β] is a gap (α could be β). Suppose Jβ(R) ⊨ AD. Let z ∈ N ∩ R, where N is a ZFC− model. Let Γ = Σ1^{Jα(R)}. Let κ ∈ N be an N-cardinal such that CΓ((H_{κ^+})^N) ⊂ N. Let A ⊆ R be OD^{<β}_z. Then there is a term τ ∈ N^{Col(ω,κ)} such that for comeager many N-generic h ⊆ Col(ω, κ),

τ^h = A ∩ N[h].

Exercise: Read the proof of this (cf. [11, Lemma 3.7.5]).

4.2. Capturing hypotheses

Fix a cardinal µ and g ⊆ Col(ω, µ) (or g ⊆ Col(ω,< µ)) (in (i), µ is such that cof(κ) < µ < κ and µ^ω = µ; in (ii), µ = κ; in (iii), µ = κ and g ⊆ Col(ω,< µ)).

The following definitions occur in V [g].

Definition 4.11 ((k,A)-Woodin mouse). Let A ⊆ R. (M,Σ) is a (k,A)-Woodin mouse if

(a) M ⊨ ZFC and there are δ0 < · · · < δk such that M ⊨ “δi is Woodin” for all i ≤ k.

(b) There are trees (T,U) ∈ M such that M ⊨ “T, U are δ_k^+-absolutely complemented”.

(c) For any i : M → N according to Σ, p[i(T)] ⊆ A and p[i(U)] ⊆ R\A.

Exercise 4.12. Show that (M,Σ) above Suslin captures A at δk.

Definition 4.13 (W ∗α). Let A ⊆ R and suppose there are scales ~ϕ, ~ψ on A, ¬A such that the sequences of prewellorderings ϕ∗, ψ∗ are in Jα(R). Then for all k < ω and any x ∈ R, there is a pair (M,Σ) with x ∈ M such that

1. (M,Σ) is a (k,A)-Woodin mouse.

2. Σ ↾ HC ∈ Jα(R).

Theorem 4.14. W ∗α implies Jα(R) ⊨ AD. Furthermore, MC holds in Jα(R).

Proof. Suppose not. Let γ + 1 ≤ α be least such that Jγ+1(R) ⊨ ¬AD. So γ + 1 begins a Σ1-gap, as the statement “there is a non-determined set” is Σ1.

Suppose γ is not critical. Then [ᾱ, γ] is a strong gap (as in case (C)(b)), where ᾱ = sup{β : β < γ is critical}. Then by the Kechris-Woodin theorem, Jγ+1(R) ⊨ AD. Contradiction.

So γ is critical. Let U ∈ Jγ+1(R)\Jγ(R) and let (N,Σ) be a (1, U)-coarse Woodin mouse. So (N,Σ) Suslin captures U at δ1 (the second Woodin cardinal of N). This means U is determined.


To any Σ1 formula θ(v) in LST+, we associate formulae θk(v) for k ∈ ω, such that θk(v) is Σk, and for any γ and any real x,

Jγ+1(R) ⊨ θ[x] ⇔ ∃k < ω Jγ(R) ⊨ θk[x].

Definition 4.15. Suppose θ(v) is a Σ1 formula in LST+ and z is a real. A 〈θ, z〉-witness is a sound (ω, ω1, ω1 + 1)-iterable z-mouse M in which there are δ0 < · · · < δ9 and trees T, U such that M satisfies

(a) ZFC.

(b) δ0, . . . , δ9 are Woodin.

(c) T, U are δ_9^+-absolutely complemented.

(d) For some k, p[T] is the Σk+3-theory of Jγ(R) (in the language with names for each real), where γ is least such that Jγ(R) ⊨ θk[z].

Remark 4.16. (d) states that p[T] is a maximal consistent Σk+3-theory of a well-founded model of V = L(R) + θk[z] + ∀β (Jβ(R) ⊨ ¬θ[z]). This can be expressed by a formula of the form

∀^R x0 ∃^R x1 ψ(x0, x1, (p[T], p[U])), (4.1)

where ψ only involves number quantifiers.

Lemma 4.17. If there is a 〈θ, z〉-witness, then L(R) � θ[z].

Proof. We may assume we have a 〈θ, z〉-witness (M,Σ) such that Σ has the Dodd-Jensen property

for compositions of normal trees.

Claim 4.18. For any x ∈ R, let i0 : M → N0, i1 : M → N1 be iterations according to Σ such that x is generic over both N0, N1 at the images of δl for some l ≤ 9. Then x ∈ p[i0(T)] ⇔ x ∈ p[i1(T)].

Proof. Suppose x ∈ p[i0(T)] but x ∈ p[i1(U)]. Execute a simultaneous comparison between N0, N1

and at the same time an x-genericity iteration at the images of δl. This results in a pair of maps

j0 : N0 → P0, j1 : N1 → P1 such that

1. j0 ◦ i0 = j1 ◦ i1 and P0 = P1.

2. x ∈ P0[h] = P1[h] for h ⊆ Col(ω, j0 ◦ i0(δl)) P0-generic.

3. x ∈ p[j0 ◦ i0(T )] ∩ p[j1 ◦ i1(U)].

(3) clearly is a contradiction.

Let A = ⋃{p[i(T)]^{N[g]} : i : M → N is according to Σ and g ⊆ Col(ω, i(δ0)) in V is N-generic}.

Now use genericity iterations and the formula in (4.1) to show that A is the Σk+3-theory of a level of L(R) that satisfies θ[z], so indeed L(R) ⊨ θ[z].


In a core model induction, we need to show that the converse of Lemma 4.17 holds (for α a limit ordinal). The following is the more precise statement.

Definition 4.19 (Wα). Suppose θ is Σ1, z ∈ R and Jα(R) ⊨ θ[z]. Then there is a 〈θ, z〉-witness M with a strategy Σ such that Σ ↾ HC ∈ Jα(R).

The following uses the notions of gaps in L(R).

Lemma 4.20. Suppose α is a limit ordinal. W ∗α implies Wα.

Proof. Let β < α (using that α is a limit) be least such that Jβ+1(R) ⊨ θ[z]. So β ends a Σ1-gap (and β + 1 begins a new gap). Apply W ∗α to the set A = Σ3^{Jβ+2(R)}: let (M,Σ) be a (9, A)-Woodin mouse with z ∈ M, and let (T,U) be the trees that are part of the witness. Note that A, ¬A have scales in Jα(R) by the scales analysis.

Note that every ODβ+1(V^M_{δ9}) subset of V^M_{δ9} is in M. Exercise: Verify this. Note the following two facts: (i) the relation y ∈ ODβ+1(x) is Σ1^{Jβ+2(R)}, because it is equivalent to saying ∃M (y ∈ M ∧ Jβ+2(R) ⊨ “M is an ω1-iterable x-mouse”); (ii) being ODβ+1-full is Π1^{Jβ+2(R)}.

The above fact is recorded in p[T]. This implies that for club many π : H → V^M_λ with λ > δ9 and crt(π) = η < δ9, every b ∈ ODβ+1(V^M_η) ∩ ℘(V^M_η) belongs to H.

Let N = L[E, z] be the result of the full-background construction in M, and let η be a cardinal of N lying in the club above such that N|η is definable over M|η. So η is not Woodin in N. Let Q ⊴ N be the first level of N such that N|η ⊴ Q and Q ∉ ODβ+1(N|η). Q exists because η is Woodin in N with respect to all functions in ODβ+1(N|η).1 η is a cutpoint Woodin cardinal of Q.

Let g ⊆ Col(ω, η) be Q-generic and let x code g (we may assume z ≤T x). We can rearrange Q[g] into an x-mouse R. Let R∗ = ODβ+1(x) ∩ R. Note that the reals of R are exactly R∗; this is because R is the first x-mouse that does not have an iteration strategy in Jβ+2(R). There is a unique β∗ ∈ R and a Σ1 map

j : Jβ∗+2(R∗) → Jβ+2(R).

This comes from the fact that every set in Jβ+2(R) has a scale in Jβ+2(R) and every ODβ+1(x) relation in Jβ+2(R) has a uniformization in Jβ+2(R). If Γ = ODβ+1, then (R∗, B ∩ CΓ(x)) ≺Σ1 (R, B) for any B ∈ ODβ+1(x); the point is that every ODβ+1(x) set of reals has a member u such that {u} ∈ CΓ(x). Applying this to any B ∈ ODβ+1(x) coding a k-th reduct gives a Σ1-embedding into the k-th reduct, which in turn lifts to a Σk-embedding from Jβ∗+1(R∗) to Jβ+1(R).

Let k be such that Jβ(R) ⊨ θk[z] and let T∗ be the Σk+3-theory of Jβ(R). T∗, ¬T∗ are Σ1^{Jβ+1(R)}(z) and have scales which are Σ1^{Jβ+1(R)}(z). So there are ODβ+1(z) trees U0, U1 associated with these scales, definable over R from z. They are in Q by homogeneity. Let j(U0∗, U1∗) = (U0, U1). Then U0∗, U1∗ are OD(z) in Jβ∗+1(R∗). So U0∗, U1∗ ∈ Q by homogeneity. So Q easily gives a 〈θ, z〉-witness (as witnessed by U0∗, U1∗).

1If f ∈ ODβ+1(M|η) witnesses that η is not Woodin in M, then we can obtain a function g ∈ ODβ+1(V^N_η) witnessing that η is not Woodin in N; we use the fact that N|η is the η-th model of the L[E, z]-construction and N|η is definable over V^M_η. This contradicts the definition of η.


Corollary 4.21. Assume Wα holds. If x, y ∈ R and y is OD(x) in Jγ(R) for some γ < α, then there is an x-mouse M such that y ∈ M and Jα(R) ⊨ “M is iterable”.

Proof. Exercise.

5. Mouse operators, strategy mice

This section summarizes basic definitions and facts about mouse operators and strategy mice. The

notions are developed in detail in [16], in which we carefully explain why we define mouse operators and strategy mice in a certain way and explain errors and shortcomings of similar notions defined in the literature. The reader may ignore the content of this section on the first read. For those

interested in core model inductions beyond L(R), the content of this section may prove useful. For

those just interested in L(R), it is still useful to read this section to see the difficulty with defining

the correct notions of mouse operators and strategy mice and the related condensation properties.

These operators F and their condensation properties are defined in certain ways to ensure that

various backgrounded constructions relative to F can be shown to converge.

5.0.1. F-PREMICE

Definition 5.1. Let L0 be the language of set theory expanded by unary predicate symbols E, B, S, and constant symbols a, P. Let L−0 = L0\{E, B}.

Let a be transitive. Let % : a → rank(a) be the rank function. We write â = trancl({(a, %)}). Let P ∈ J1(â).

A J-structure over a (with parameter P) (for L0) is a structure M for L0 such that aM = a, (PM = P), and there is λ ∈ [1, Ord) such that |M| = J^{SM}_λ(â).

Here we also let l(M) denote λ, the length of M, and let aM denote a.

For α ∈ [1, λ] let Mα = J^{SM}_α(â). We say that M is acceptable iff for each α < λ and τ < o(Mα), if

P(τ^{<ω} × â^{<ω}) ∩ Mα ≠ P(τ^{<ω} × â^{<ω}) ∩ Mα+1,

then there is a surjection τ^{<ω} × â^{<ω} → Mα in Mα+1.

A J-structure (for L0) is a J-structure over a, for some a.

As all J-structures we consider will be for L0, we will omit the phrase “for L0”. We also often omit the phrase “with parameter P”. Note that if M is a J-structure over a then |M| is transitive and rud-closed, a ∈ M and Ord ∩ M = rank(M). This last point is because we construct from â instead of a.

F-premice will be J -structures of the following form.

Definition 5.2. A J -model over a (with parameter P) is an acceptable J -structure over a

(with parameter P), of the form

M = (M ;E,B, S, a,P)


where EM = E, etc, and letting λ = l(M), the following hold.

1. M is amenable.

2. S = 〈Sξ | ξ ∈ [1, λ)〉 is a sequence of J -models over a (with parameter P).

3. For each ξ ∈ [1, λ), S^{Sξ} = S ↾ ξ and Mξ = |Sξ|.

4. Suppose E ≠ ∅. Then B = ∅ and there is an extender F over M which is â × γ-complete for all γ < crit(F) and such that the premouse axioms [17, Definition 2.2.1] hold for (M, F), and E codes F ∪ {G} where: (i) F ⊆ M is the amenable code for F (as in [15]); and (ii) if F is not type 2 then G = ∅, and otherwise G is the “longest” non-type Z proper segment of F in M.2

Our notion of a “J -model over a” is a bit different from the notion of “model with parameter

a” in [11] or [17, Definition 2.1.1] in that we build into our notion some fine structure and we do

not have the predicate l used in [17, Definition 2.1.1]. Note that with notation as above, if λ is a

successor ordinal then M = J(S^M_{λ−1}), and otherwise, M = ⋃_{α<λ} |Sα|. The predicate B will be used

to code extra information (like a (partial) branch of a tree in M).

Definition 5.3. Let M be a J-model over a (with parameter P). Let E^M denote the predicate E of M, etc. Let λ = l(M), S^M_0 = a, S^M_λ = M, and M|ξ = S^M_ξ for all ξ ≤ λ. An (initial) segment of M is just a structure of the form M|ξ for some ξ ∈ [1, λ]. We write P E M iff P is a segment of M, and P / M iff P E M and P ≠ M. Let M||ξ be the structure having the same universe and predicates as M|ξ, except that E^{M||ξ} = ∅. We say that M is E-active iff E^M ≠ ∅, and B-active iff B^M ≠ ∅. Active means either E-active or B-active; E-passive means not E-active; B-passive means not B-active; and passive means not active.

Given a J -model M1 over b and a J -model M2 over M1, we write M2 ↓ b for the J -model

M over b, such that M is “M1 M2”. That is, |M| = |M2|, aM = b, EM = EM2, BM = BM2,

and P /M iff P EM1 or there is Q /M2 such that P = Q ↓ b, when such an M exists. Existence

depends on whether the J -structure M is acceptable.

In the following, the variable i should be interpreted as follows. When i = 0, we ignore history,

and so P is treated as a coarse object when determining F(0,P). When i = 1 we respect the

history (given it exists).

Definition 5.4. An operator F with domain D is a function with domain D, such that for

some cone C = CF , possibly self-wellordered (sword)3, D is the set of pairs (i,X) such that either:

• i = 0 and X ∈ C, or

2We use G explicitly, instead of the code γM used for G in [3, Section 2], because G does not depend on which (ifthere is any) wellorder of M we use. This ensures that certain pure mouse operators are forgetful.

3C is a cone if there are a cardinal κ and a transitive set a ∈ Hκ such that C is the set of b ∈ Hκ such thata ∈ L1(b); a is called the base of the cone. A set a is self-wellordered if there is a well-ordering of a in L1(a). A setC is a self-wellordered cone if C is the restriction of a cone C′ to its own self-wellordered elements


• i = 1 and X is a J -model over X1 ∈ C,

and for each (i,X) ∈ D, F(i,X) is a J -model over X such that for each P E F(i,X), P is fully

sound. (Note that P is a J -model over X, so soundness is in this sense.)

Let F , D be as above. We say F is forgetful iff F(0, X) = F(1, X) whenever (0, X), (1, X) ∈ D,

and whenever X is a J -model over X1, and X1 is a J -model over X2 ∈ C, we have F(1, X) =

F(1, X ↓ X2). Otherwise we say F is historical. Even when F is historical, we often just write

F(X) instead of F(i,X) when the nature of F is clear from the context. We say F is basic iff for

all (i,X) ∈ D and P E F(i,X), we have EP = ∅. We say F is projecting iff for all (i,X) ∈ D,

we have ρ_ω(F(i,X)) = X.

Here are some illustrations. Strategy operators (to be explained in more detail later) are basic,

and as usually defined, projecting and historical. Suppose we have an iteration strategy Σ and we

want to build a J -model N (over some a) that codes a fragment of Σ via its predicate B. We feed

Σ into N by always providing b = Σ(T ), for the <_N-least tree T for which this information is required. So given a reasonably closed level P E N , the choice of which tree T should be processed

next will usually depend on the information regarding Σ already encoded in P (its history). Using

an operator F to build N , then F(i,P) will be a structure extending P and over which b = Σ(T )

is encoded. The variable i should be interpreted as follows. When i = 1, we respect the history of

P when selecting T . When i = 0 we ignore history when selecting T . The operator F(X) = X#

is forgetful and projecting, and not basic; here F(X) = F(0, X).

Definition 5.5. For any P and any ordinal α ≥ 1, the operator J^m_α( · ;P) is defined as follows.4 For X such that P ∈ J1(X), let J^m_α(X;P) be the J-model M over X, with parameter P, such that |M| = Jα(X) and for each β ∈ [1, α], M|β is passive. Clearly J^m_α( · ;P) is basic and forgetful. If P = ∅ or we wish to suppress P, we just write J^m_α( · ).

Definition 5.6 (Potential F-premouse, CF ). Let F be an operator with domain D of self-wellordered

sets. Let b ∈ CF , so there is a well-ordering of b in L1[b]. A potential F-premouse over b is an

acceptable J -model M over b such that there is an ordinal ι > 0 and an increasing, closed sequence

〈ζα〉α≤ι of ordinals such that for each α ≤ ι, we have:

1. 0 = ζ0 ≤ ζα ≤ ζι = l(M) (so M|ζ0 = b and M|ζι =M).

2. If 1 < ι then M|ζ1 = F(0, b).

3. If 1 = ι then M E F(0, b).

4. If 1 < α+ 1 < ι then M|ζα+1 = F(1,M|ζα) ↓ b.

5. If 1 < α+ 1 = ι, then M E F(1,M|ζα) ↓ b.

6. Suppose α is a limit. Then M|ζα is B-passive, and if E-active, then crit(EM|ζα) > rank(b).

4The “m” is for “model”.


We say that M is (F-)whole iff ι is a limit or else ι = α + 1 and M = F(M|ζα) ↓ b.

A (potential) F-premouse is a (potential) F-premouse over b, for some b.

Definition 5.7. Let F be an operator and b ∈ CF . Let N be a whole F-premouse over b. A

potential continuing F-premouse over N is a J -model M over N such that M ↓ b is a

potential F-premouse over b. (Therefore N is a whole strong cutpoint of M.)

We say that M (as above) is whole iff M ↓ b is whole.

A (potential) continuing F-premouse is a (potential) continuing F-premouse over b, for

some b.

Definition 5.8. LpF (a) denotes the stack of all countably F-iterable F-premice M over a such

that M is fully sound and projects to a.5

Let N be a whole F-premouse over b, for b ∈ CF . Then LpF+(N ) denotes the stack of all

countably F-iterable (above o(N )) continuing F-premice M over N such that M ↓ b is fully sound

and projects to N .

We say that F is uniformly Σ1 iff there are Σ1 formulas ϕ1 and ϕ2 in L−0 such that whenever

M is a (continuing) F-premouse, then the set of whole proper segments of M is defined over Mby ϕ1 (ϕ2). For such an operator F , let ϕFwh denote the least such ϕ1.

Definition 5.9 (Mouse operator). Let Y be a projecting, uniformly Σ1 operator. A Y -mouse

operator F with domain D is an operator with domain D such that for each (0, X) ∈ D, F(0, X) / LpY (X), and for each (1, X) ∈ D, F(1, X) / LpY+(X).6 (So any Y -mouse operator is an operator.)

A Y -mouse operator F is called first order if there are formulas ϕ1 and ϕ2 in the language of Y -premice such that F(0, X) (respectively F(1, X)) is the first M / LpY (X) (respectively LpY+(X)) satisfying ϕ1 (ϕ2).

A mouse operator is a Jm1 -mouse operator.

We can then define F-solidity, the LF [E]-construction etc. as usual (see [16] for more details).

We now define the kind of condensation that mouse operators need to satisfy to ensure that the LF [E]-construction converges.

Definition 5.10. Let M1, M2 be k-sound J-models over a1, a2 and let π : M1 → M2. Then π is (weakly, nearly) k-good iff π ↾ a1 = id, π(a1) = a2, and π is a (weak, near) k-embedding (as in [3]).

[3]).

Definition 5.11. Given a J -model N over a, and M / N such that M is fully sound, the M-

drop-down sequence of N is the sequence of pairs 〈(Qn,mn)〉n<k of maximal length such that

Q0 =M and m0 = ω and for each n+ 1 < k:

1. M /Qn+1 E N and Qn E Qn+1,

2. every proper segment of Qn+1 is fully sound,

5Countable substructures of M are (ω, ω1 + 1)-F-iterable, i.e. all iterates are F-premice. See [16, Section 2] formore details on F-iterability.

6This restricts the usual notion defined in [11].


3. ρmn(Qn) is an a-cardinal of Qn+1,

4. 0 < mn+1 < ω,

5. Qn+1 is (mn+1 − 1)-sound,

6. ρmn+1(Qn+1) < ρmn(Qn) ≤ ρmn+1−1(Qn+1).

Definition 5.12. Let F be an operator and let C be some class of E-active F-premice. Let b

be transitive. A (C-certified) LF [E, b]-construction is a sequence 〈Nα〉α≤λ with the following

properties. We omit the phrase “over b”.

We have N0 = b and N1 = F(0, b).

Let α ∈ (0, λ]. Then Nα is an F-premouse, and if α is a limit then Nα is the lim inf of the Nβ for β < α. Now suppose that α < λ. Then either:

• Nα is passive and is a limit of whole proper segments and Nα+1 = (Nα, G) for some extender

G (with Nα+1 ∈ C); or

• Nα is ω-F-solid. Let Mα = Cω(Nα). Let M be the largest whole segment of Mα. So either

Mα = M or Mα ↓ M E F1(M). Let N E F1(M) be least such that either N = F1(M)

or for some k + 1 < ω, (N ↓ b, k + 1) is on the Mα-drop-down sequence of N ↓ b. Then

Nα+1 = N ↓ b. (Note Mα /Nα+1.)

Definition 5.13. Let Y be an operator. We say that Y condenses coarsely iff for all i ∈ {0, 1} and all (i, X̄), (i, X) ∈ dom(Y ), and all J-models M̄+ over X̄, if π : M̄+ → Yi(X) is fully elementary and fixes the parameters in the definition of Y , then

1. if i = 0 then M̄+ E Y0(X̄); and

2. if i = 1 and X̄ is a sound whole Y -premouse, then M̄+ E Y1(X̄).

Definition 5.14. Let Y be a projecting, uniformly Σ1 operator. We say that Y condenses finely

iff Y condenses coarsely and we have the following. Let k < ω. Let M∗ be a Y -premouse over a,

with a largest whole proper segment M, such that M+ =M∗ ↓ M is sound and ρk+1(M+) =M.

Let P∗, a,P,P+ be likewise. Let N be a sound whole Y -premouse over a. Let G ⊆ Col(ω,P ∪ N )

be V -generic. Let N+, π, σ ∈ V [G], with N+ a sound J -model over N such that N ∗ = N+ ↓ a is

defined (i.e. acceptable). Suppose π : N ∗ →M∗ is such that π(N ) =M and either:

1. M∗ is k-sound and N ∗ = Ck+1(M∗); or

2. (N ∗, k + 1) is in the N -dropdown sequence of N ∗, and likewise (P∗, k + 1),P, and either:

(a) π is k-good, or

(b) π is fully elementary, or

(c) π is a weak k-embedding, σ : P∗ → N ∗ is k-good, σ(P) = N and π ◦ σ ∈ V is a near

k-embedding.


Then N+ E Y1(N ).

We say that Y almost condenses finely iff N+ E Y1(N ) whenever the hypotheses above hold

with N+, π, σ ∈ V .

In fact, the two notions above are equivalent.

Lemma 5.15. Let Y be an operator on a cone with base in HC. Suppose that Y almost condenses

finely. Then Y condenses finely.

We end this section with the following lemma (proved in Section 2 of [16]), which states that

the LF [E]-construction (relative to some class of background extenders) runs smoothly for a certain

class of operators. In the following, if (N , G) ∈ C, then G is backgrounded as in [3] or as in [13]

(we additionally demand that the structure N in [13, Definition 1.1] is closed under F).

Lemma 5.16. Let F be a projecting, uniformly Σ1 operator which condenses finely. Suppose F is

defined on a cone with bases in HC. Let C = 〈Nα〉α≤λ be the (C-certified) LF [E, b]-construction

for b ∈ CF . Then (a) Nλ is 0-F-solid (i.e., is an F-premouse).

Now suppose that Nλ is k-F-solid.

Suppose that for a club of countable elementary π : M → Ck(Nλ), there is an F-putative,

(k, ω1, ω1 + 1)-iteration strategy Σ for M, such that every tree T via Σ is (π,C)-realizable.7

Then (b) Nλ is (k + 1)-F-solid.

Lemma 5.17. Let Y,F be uniformly Σ1 operators defined on a cone over some Hκ, with bases in

HC.8 Suppose that Y condenses finely. Suppose that F is a whole continuing Y -mouse operator.

Then F condenses finely.

The following lemma gives a stronger condensation property than fine condensation in certain

circumstances. So if F satisfies the hypothesis of Lemma 5.18 (particularly, if F is one of the oper-

ators constructed in our core model induction) then the LF [E]-construction converges by Lemma

5.16.

Lemma 5.18. Let Y,F be uniformly Σ1 operators with bases in HC. Suppose that Y condenses

finely. Suppose that F is a whole continuing Y -mouse operator. Then (a) F condenses finely.

Moreover, (b) let M be an F-whole F-premouse. Let π : N → M be fully elementary with

aN ∈ CF . Then N is an F-whole F-premouse. So regarding F , the conclusion of 5.13 may be

modified by replacing “E” with “=”.

Remark 5.19. In the context of the core model induction of this paper (and elsewhere), we often

construct mouse operators F defined over some Hκ with base a ∉ HC. So given an F-premouse N , π : N∗ → N elementary, and N∗ countable, N∗ may not be an F-premouse. We have to make

some changes for the theory above to work for these F . For instance, in Lemma 5.16, with the

notation as there, we can modify the hypothesis of the lemma in one of two ways:

7See [16, Section 2] for a precise definition of (π,C)-realizability. Roughly speaking this means that models alongthe tree T are embedded into the Nα’s.

8We also say “operator over Hκ with bases in HC” for short.


1. We can either require that a ∈ M, |M| = |a|, and the (π,C)-realizable strategy Σ is (k, |a|^+, |a|^+ + 1)-iterable.

2. We can still require M is countable but the strategy Σ is a (k, ω1, ω1 + 1)-Fπ-strategy, where

Fπ is the π-pullback operator of F .9

5.0.2. STRATEGY PREMICE

We now proceed to defining Σ-premice, for an iteration strategy Σ. We first define the operator to

be used to feed in Σ.

Definition 5.20 (B(a, T , b), b^N). Let a, P be transitive, with P ∈ J1(a). Let λ > 0 and let T be an iteration tree10 on P, of length ωλ, with T ↾ β ∈ a for all β ≤ ωλ. Let b ⊆ ωλ. We define N = B(a, T , b) recursively on lh(T ), as the J-model N over a, with parameter P,11 such that:

1. l(N) = λ,

2. for each γ ∈ (0, λ), N|γ = B(a, T ↾ ωγ, [0, ωγ]_T ),

3. B^N is the set of ordinals o(a) + γ such that γ ∈ b,

4. E^N = ∅.

We also write bN = b.

It is easy to see that every initial segment of N is sound, so N is acceptable and is indeed a

J -model (not just a J -structure).

In the context of a Σ-premouse M for an iteration strategy Σ, if T is the <_M-least tree for which M lacks instruction regarding Σ(T ), then M will already have been instructed regarding Σ(T ↾ α) for all α < lh(T ). Therefore if lh(T ) > ω then B(M, T ,Σ(T )) codes redundant information (the

branches already in T ) before coding Σ(T ). This redundancy seems to allow one to prove slightly

stronger condensation properties, given that Σ has nice condensation properties (see Lemma 5.27).

It also simplifies the definition.

Definition 5.21. Let Σ be a partial iteration strategy. Let C be a class of iteration trees, closed

under initial segment. We say that (Σ, C) is suitably condensing iff for every T ∈ C such that

T is via Σ and lh(T ) = λ + 1 for some limit λ, either (i) Σ has hull condensation with respect to

T , or (ii) b^T does not drop and Σ has branch condensation with respect to T , that is, any hull U⌢c of T⌢b is according to Σ.

When C is the class of all iteration trees according to Σ, we simply omit it from our notation.

9For instance, if F corresponds to a strategy Σ, then Fπ corresponds to Σπ, the π-pullback of Σ. If F is a first-order mouse operator defined by (ϕ, a), then Fπ is defined by (ϕ, π^{−1}(a)).

10We formally take an iteration tree to include the entire sequence 〈M^T_α〉_{α<lh(T)} of models. So it is Σ0(T ,P) to assert that “T is an iteration tree on P”.

11P = M^T_0 is determined by T .


Definition 5.22. Let ϕ be an L0-formula. Let P be transitive. Let M be a J -model (over some

a), with parameter P. Let T ∈ M. We say that ϕ selects T for M, and write T = T^M_ϕ, iff

(a) T is the unique x ∈ M such that M ⊨ ϕ(x),

(b) T is an iteration tree on P of limit length,

(c) for every N / M, we have N ⊭ ϕ(T ), and

(d) for every limit λ < lh(T ), there is N / M such that N ⊨ ϕ(T ↾ λ).

One instance of φ(P, T ) is, in the case a is self-wellordered, the formula “T is the least tree

on P that doesn’t have a cofinal branch”, where least is computed with respect to the canonical

well-order of the model.

Definition 5.23 (Potential P-strategy-premouse, ΣM). Let ϕ ∈ L0. Let P, a be transitive with

P ∈ J1(a). A potential P-strategy-premouse (over a, of type ϕ) is a J -model M over a,

with parameter P, such that the B operator is used to feed in an iteration strategy for trees on P,

using the sequence of trees naturally determined by SM and selection by ϕ. We let ΣM denote the

partial strategy coded by the predicates BM|η, for η ≤ l(M).

In more detail, there is an increasing, closed sequence of ordinals 〈ηα〉α≤ι with the following

properties. We will also define Σ^{M|η} for all η ∈ [1, l(M)] and T_η = T^M_η for all η ∈ [1, l(M)).

1. 1 = η0 and M|1 = Jm1 (a;P) and ΣM|1 = ∅.

2. l(M) = ηι, so M|ηι =M.

3. Given η ≤ l(M) such that B^{M|η} = ∅, we set Σ^{M|η} = ⋃_{η′<η} Σ^{M|η′}.

Let η ∈ [1, l(M)]. Suppose there is γ ∈ [1, η] and T ∈ M|γ such that T = T^{M|γ}_ϕ, and T is via Σ^{M|η}, but no proper extension of T is via Σ^{M|η}. Taking γ minimal such, let T_η = T^{M|γ}_ϕ. Otherwise

let Tη = ∅.

4. Let α+ 1 ≤ ι. Suppose Tηα = ∅. Then ηα+1 = ηα + 1 and M|ηα+1 = Jm1 (M|ηα;P) ↓ a.

5. Let α + 1 ≤ ι. Suppose T = T_{ηα} ≠ ∅. Let ωλ = lh(T ). Then for some b ⊆ ωλ and

S = B(M|ηα, T , b), we have:

(a) M|ηα+1 E S.

(b) If α+ 1 < ι then M|ηα+1 = S.

(c) If S EM then b is a T -cofinal branch.12

(d) For η ∈ [ηα, l(M)] such that η < l(S), Σ^{M|η} = Σ^{M|ηα}.

(e) If S E M then Σ^S = Σ^{M|ηα} ∪ {(T , b^S)}.

12We allow M^T_b to be illfounded, but then T⌢b is not an iteration tree, so is not continued by Σ^M.


6. For each limit α ≤ ι, BM|ηα = ∅.

Definition 5.24 (Whole). Let M be a potential P-strategy-premouse of type ϕ. We say M is ϕ-whole (or just whole if ϕ is fixed) iff for every η < l(M), if T_η ≠ ∅ and T_η ≠ T_{η′} for all η′ < η, then for some b, B(M|η, T_η, b) E M.13

Definition 5.25 (Potential Σ-premouse). Let Σ be a (partial) iteration strategy for a transitive

structure P. A potential Σ-premouse (over a, of type ϕ) is a potential P-strategy premouse

M (over a, of type ϕ) such that ΣM ⊆ Σ.14

Definition 5.26. Let R,M be J -structures for L0, a = aR and b = aM. Suppose that a, b code

P,Q respectively. Let π : R →M (or

π : o(R) ∪ P ∪ {P} → o(M) ∪Q ∪ {Q}

respectively). Then π is a (P,Q)-weak 0-embedding (resp., (P,Q)-very weak 0-embedding) iff

π(P) = Q and, with respect to the language L0, π is Σ0-elementary, and there is an X ⊆ R (resp., X ⊆ o(R)) such that X is cofinal in ∈^R and π is Σ1-elementary on parameters in X ∪ P ∪ {P}.

If also P = Q and π ↾ P ∪ {P} = id, then we just say that π is a P-weak 0-embedding (resp.,

P-very weak 0-embedding).

Note that, for (P,Q)-weak 0-embeddings, we can in fact take X ⊆ o(R). The following lemma

is again proved in [16, Section 3].

Lemma 5.27. Let M be a P-strategy premouse over a, of type ϕ, where ϕ is Σ1. Let R be a

J -structure for L0 and a′ = aR, and let P ′ be a transitive structure coded by a′.

1. Suppose π : R →M is a partial map such that π(P ′) = P and either:

(a) π is a (P ′,P)-weak 0-embedding, or

(b) π is a (P ′,P)-very weak 0-embedding, and if E^R ≠ ∅ then item 4 of 5.2 holds for E^R.

Then R is a P ′-strategy premouse of type ϕ. Moreover, if π ↾ {P ′} ∪ P ′ = id and if M is a

Σ-premouse, where (Σ, dom(ΣM)) is suitably condensing, then R is also a Σ-premouse.

2. Suppose π : M → R is such that π(a,P) = (a′,P ′) and either

(a) π is Σ2-elementary; or

(b) π is cofinal and Σ1-elementary, and BM = ∅.

Then R is a P ′-strategy premouse of type ϕ, and R is whole iff M is whole.

13ϕ-whole depends on ϕ as the definition of Tη does.14If M is a model all of whose proper segments are potential Σ-premice, and the rules for potential P-strategy

premice require that BM code a T -cofinal branch, but Σ(T ) is not defined, then M is not a potential Σ-premouse,whatever its predicates are.


3. Suppose B^M ≠ ∅. Let T = T^M_η where η < l(M) is largest such that M|η is whole. Let b = b^M and ωγ = ⋃ b. So M E B(M|η, T , b). Suppose there is π : M → R such that π(P) = P ′ and π is cofinal and Σ1-elementary. Let ωγ′ = sup π“ωγ.

(a) R is a P ′-strategy premouse of type ϕ iff we have either (i) ωγ′ = lh(π(T )), or (ii) ωγ′ < lh(π(T )) and b^R = [0, ωγ′]_{π(T )}.

(b) If either b^M ∈ M or π is continuous at lh(T ) then R is a P ′-strategy premouse of type ϕ.

Remark 5.28. The preceding lemma left open the possibility that R fails to be a P-strategy pre-

mouse under certain circumstances (because BR should be coding a branch that has in fact already

been coded at some proper segment of R, but codes some other branch instead). In the main cir-

cumstance we are interested in, this does not arise, for a couple of reasons. Suppose that Σ is an

iteration strategy for P with hull condensation, M is a Σ-premouse, and Λ is a Σ-strategy for M.

Suppose π : M → R is a degree 0 iteration embedding, B^M ≠ ∅, and π is discontinuous at lh(T ). Then [16, Section 3] shows that b^M ∈ M. (It is not relevant whether π itself is via Λ.) It

then follows from 3b of Lemma 5.27 that R is a Σ-mouse.

The other reason is that, supposing π : M → R is via Λ (so π ↾ P ∪ {P} = id), then trivially,

BR must code branches according to Σ. We can obtain such a Λ given that we can realize iterates

of M back into a fixed Σ-premouse (with P-weak 0-embeddings as realization maps).

Definition 5.29. Let P be transitive and Σ a partial iteration strategy for P. Let ϕ ∈ L0. Let

F = FΣ,ϕ be the operator such that:

1. F0(a) = Jm1 (a;P), for all transitive a such that P ∈ J1(a);

2. Let M be a sound branch-whole Σ-premouse of type ϕ. Let λ = l(M) and with notation as in

5.23, let T = Tλ. If T = ∅ then F1(M) = Jm1 (M;P). If T 6= ∅ then F1(M) = B(M, T , b)

where b = Σ(T ).

We say that F is a strategy operator.

Lemma 5.30. Let P be countable and transitive. Let ϕ be a formula of L0. Let Σ be a partial

strategy for P. Let Dϕ be the class of iteration trees T on P such that for some J -model M,

with parameter P, we have T = T^M_ϕ. Suppose that (Σ, Dϕ) is suitably condensing. Then F_{Σ,ϕ} is

uniformly Σ1, projecting, and condenses finely.

Definition 5.31. Let a be transitive and let F be an operator. We say that M_1^{F,#}(a) exists iff there is a (0, |a|, |a| + 1)-F-iterable, non-1-small F-premouse over a. We write M_1^{F,#}(a) for the least such sound structure. For Σ, P, a, ϕ as in 5.29, we write M_1^{Σ,ϕ,#}(a) for M_1^{F_{Σ,ϕ},#}(a).

Let L+0 be the language L0 ∪ {≺, Σ}, where ≺ is the binary relation defined by “a is self-

wellordered, with ordering ≺a, and ≺ is the canonical wellorder of the universe extending ≺a”,

and Σ is the partial function defined “P is a transitive structure and the universe is a potential


P-strategy premouse over a and Σ is the associated partial putative iteration strategy for P”. Let

ϕall(T ) be the L0-formula “T is the ≺-least limit length iteration tree U on P such that U is via Σ,

but no proper extension of U is via Σ”. Then for Σ, P, a as in 5.29, we sometimes write M_1^{Σ,#}(a) for M_1^{F_{Σ,ϕall},#}(a).

Let κ be a cardinal and suppose that M = M_1^{F,#}(a) exists and is (0, κ^+ + 1)-iterable. We write Λ_M for the unique (0, κ^+ + 1)-iteration strategy for M (given that κ is fixed).

Definition 5.32. We say that (F ,Σ, ϕ,D, a,P) is suitable iff a is transitive and M_1^{F,#}(a) exists,

where either

1. F is a projecting, uniformly Σ1 operator, CF is the (possibly swo’d) cone above a, D is the

set of pairs (i,X) ∈ dom(F) such that either i = 0 or X is a sound whole F-premouse, and

Σ = ϕ = 0, or

2. P,Σ, ϕ,Dϕ are as in 5.30, F = FΣ,ϕ, Dϕ ⊆ D, D is a class of limit length iteration trees on

P, via Σ, Σ(T ) is defined for all T ∈ D, (Σ, D) is suitably condensing and P ∈ J1(a).

We write GF for the function with domain CF , such that for all x ∈ CF : in case (2), GF (x) = Σ(x); and in case (1), GF (0, x) = F(0, x) and GF (1, x) is the least R E F1(x) ↓ a_x such that either R = F1(x) ↓ a_x or R is unsound.

Lemma 5.33. Let F be as in 5.32 and M = M_1^{F,#}. Then Λ_M has branch condensation and hull

condensation.

5.0.3. G-ORGANIZED F-PREMICE

Now we give an outline of the general treatment of [16] on F-premice over an arbitrary set; following

the terminology of [16], we will call these g-organized F-premice and Θ-g-organized F-premice. For

(Θ)-g-organized F-premice to be useful, we need to assume that the following absoluteness property

holds of the operator F . We then show that if F is the operator for a nice enough iteration strategy,

then it does hold. We write M for M_1^{F,#} and fix F, Σ, ϕ, C, a, P as in the previous subsection. In the following, δ^M denotes the Woodin cardinal of M. Again, the reader should see [16] for proofs of

lemmas stated here.

We need to work with the g-organized hierarchies to ensure various S-constructions succeed.

This in turn is important in extending operators to generic extensions (cf. Lemma 6.5). We need

to work with the Θ-g-organized hierarchies (a slight variation of the g-organized hierarchies), in

particular, we need to work in the hierarchy of Θ-g-organized mice over R to ensure the scales

analysis goes through as in Lp(R).

Definition 5.34. Let (F ,Σ, ϕ, C, a,P) be suitable. We say that M_1^{F,#}(a) generically interprets F15 iff, writing M = M_1^{F,#}(a), there are formulas Φ, Ψ in L0 such that there is some γ > δ^M such

15In [16], this notion is called F determines itself on generic extensions. In this paper, “determines itself on genericextensions” will have a different meaning, as defined later.


that M|γ ⊨ Φ and for any non-dropping ΣM-iterate N of M, via a countable iteration tree T , any N-cardinal δ, any γ ∈ Ord such that N|γ ⊨ Φ & “δ is Woodin”, and any g which is set-generic over N|γ (with g ∈ V ), then (N|γ)[g] is closed under GF , and GF ↾ (N|γ)[g] is defined over (N|γ)[g] by Ψ. We say such a pair (Φ,Ψ) generically determines (F ,Σ, ϕ, C, a) (or just F).

We say an operator F is nice iff for some Σ, ϕ, C, a, P, (F ,Σ, ϕ, C, a,P) is suitable and M_1^{F,#}

generically interprets F .

Let P ∈ HC, let Σ be an iteration strategy for P and let C be the class of all limit length trees

via Σ. Suppose M_1^{Σ,#}(P) exists and (Σ, C) is suitably condensing. We say that M_1^{Σ,#}(P) generically interprets Σ iff some (Φ,Ψ) generically determines (F_{Σ,ϕall}, Σ, ϕall, C, P). (Note then that the

latter is suitable.)

Lemma 5.35. Let N , δ, etc, be as in 5.34, except that we allow T to have uncountable length, and

allow g to be in a set-generic extension of V . Then (N|γ)[g] is closed under GF and letting G′ be

the interpretation of Ψ over (N|γ)[g], G′ ↾ C = GF ↾ (N|γ)[g].

We fix a nice F , M, ΛM = Λ, (Φ,Ψ) for the rest of the section. We define MΣ1 from M in the

standard way.

See [16, Section 4] for a proof that if Σ is a strategy (of a hod mouse, a suitable mouse) with

branch condensation and is fullness preserving with respect to mice in some sufficiently closed,

determined pointclass Γ, or if Σ is the unique strategy of a sound Y-mouse for some mouse operator Y that is projecting, uniformly Σ1, condenses finely, and is such that M_1^{Y,#} generically interprets Y, then M_1^{F,#} generically interprets F.

Now we are ready to define g-organized F-premice.

Definition 5.36 (Sargsyan, [6]). Let M be a transitive structure. Let G be the name for the generic

G ⊆ Col(ω,M) and let xG be the canonical name for the real coding {(n,m) | G(n) ∈ G(m)}, where

we identify G with ⋃G. The tree T_M for making M generically generic is the iteration tree T on M of maximal length such that:

1. T is via Λ and is everywhere non-dropping.

2. T ↾ (o(M)+1) is the tree given by linearly iterating the first total measure of M and its images.

3. Suppose lh(T ) ≥ o(M)+2 and let α+1 ∈ (o(M), lh(T )). Let δ = δ(M^T_α) and let B = B(M^T_α) be the extender algebra of M^T_α at δ. Then E^T_α is the extender E with least index in M^T_α such that for some condition p ∈ Col(ω,M), p ⊩ “There is a B-axiom induced by E which fails for xG”.

Assuming that M is sufficiently iterable, T_M exists and has successor length.

Sargsyan noticed that one can feed in F into a structure N indirectly, by feeding in the branches

for TM, for various M E N . The operator gF , defined below, and used in building g-organized

F-premice, feeds in branches for such TM. We will also ensure that being such a structure is


first-order - other than wellfoundedness and the correctness of the branches - by allowing sufficient

spacing between these branches.

In the following, we let N^T denote the last model of the tree T .

Definition 5.37. Let Φ be a formula. Given a successor length, nowhere dropping tree T on M, let PΦ(T ) be the least P E N^T such that for some cardinal δ′ of N^T , we have δ′ < o(P ) and P ⊨ Φ + “δ′ is Woodin”. Let λ = λΦ(T ) be least such that PΦ(T ) E M^T_λ. Then δ′ is a cardinal of M^T_λ. Let IΦ = IΦ(T ) be the set of limit ordinals ≤ λ.

We can now define the operator used for g-organization:

Definition 5.38 (gF). We define the forgetful operator gF , for F such that M_1^{F,#} generically interprets F as witnessed by a pair (Φ,Ψ). Let b be a transitive structure with M ∈ J1(b).

We define M = gF(b), a J-model over b, with parameter M, as follows.

For each α ≤ l(M), E^{M|α} = ∅. Let α0 be the least α such that Jα(b) ⊨ ZF. Then M|α0 = J^m_{α0}(b;M).

Let T = T_{M|α0}. We use the notation PΦ = PΦ(T ), λ = λΦ(T ), etc., as in 5.37. The predicates B^{M|γ} for α0 < γ ≤ l(M) will be used to feed in branches for T ↾ (λ + 1), and therefore PΦ itself, into M. Let 〈ξα〉α<ι enumerate IΦ ∪ {0}.

There is a closed, increasing sequence of ordinals 〈ηα〉α≤ι and an increasing sequence of ordinals

〈γα〉α≤ι such that:

1. η1 = γ0 = η0 = α0.

2. For each α < ι, ηα ≤ γα ≤ ηα+1, and if α > 0 then γα < ηα+1.

3. γι = l(M), so M =M|γι.

4. Let α ∈ (0, ι). Then γα is the least ordinal of the form ηα + τ such that T ↾ ξα ∈ Jτ(M|ηα) and if α > α0 then δ(T ↾ ξα) < τ. (We explain below why such τ exists.) And M|γα = J^m_τ(M|ηα;M) ↓ b.

5. Let α ∈ (0, ι). Then M|ηα+1 = B(M|γα, T ↾ ξα, Λ(T ↾ ξα)) ↓ b.

6. Let α < ι be a limit. Then M|ηα is passive.

7. γι is the least ordinal of the form ηι + τ such that T ↾ (λ+1) ∈ J_{ηι+τ}(M|ηι) and τ > o(M^T_λ); with this τ, M = J^m_τ(M|ηι;M) ↓ b and furthermore, gF(b) is acceptable and every strict segment of gF(b) is sound.

Remark 5.39. It’s not hard to see (cf. [16]) M EM = gF(b), the sequences 〈M|ηα〉α≤ι∩M and

〈M|γα〉α≤ι ∩ M and 〈T � α〉α≤λ+1 ∩ M are ΣM1 in L−0 , uniformly in b and M.

Definition 5.40. Let b be transitive with M ∈ J1(b). A potential g-organized F-premouse

over b is a potential gF-premouse over b, with parameter M.


Lemma 5.41. There is a formula ϕg in L0, such that for any transitive b with M ∈ J1(b), and

any J -structure M over b, M is a potential g-organized F-premouse over b iff M is a potential

ΛM-premouse over b, of type ϕg.

Lemma 5.42. gF is projecting, uniformly Σ1, basic, and condenses finely.

Definition 5.43. Let M be a g-organized F-premouse over b. We say M is F-closed iff M is a

limit of gF-whole proper segments.

Because M_1^{F,#} generically interprets F , F-closure ensures closure under GF :

Lemma 5.44. Let M be an F-closed g-organized F-premouse over b. Then M is closed under

GF . In fact, for any set-generic extension M[g] of M, with g ∈ V , M[g] is closed under GF and GF ↾ M[g] is definable over M[g], via a formula in L−0 , uniformly in M, g.

The analysis of scales in LpgF (R) runs into a problem (see [16, Remark 6.8] for an explanation).

Therefore we will analyze scales in a slightly different hierarchy.

Definition 5.45. Let X ⊆ R. We say that X is self-scaled iff there are scales on X and R\X which are analytical (i.e., Σ^1_n for some n < ω) in X.

Definition 5.46. Let b be transitive with M ∈ J1(b).

Then GF(b) denotes the least N E gF(b) such that either N = gF(b) or J1(N) ⊨ “Θ does not exist”. (Therefore J^m_1(b;M) E GF(b).)

We say that M is a potential Θ-g-organized F-premouse over X iff M ∈ HC^M and for some X ⊆ HC^M, M is a potential GF-premouse over (HC^M, X) with parameter M and M ⊨ “X is self-scaled”. We write X^M = X.

In our application to core model induction, we will be most interested in the cases that either

X = ∅ or X = F ↾ HC^M. Clearly Θ-g-organized F-premousehood is not first order. Certain

aspects of the definition, however, are:

Definition 5.47. Let “I am a Θ-g-organized premouse over X” be the L0 formula ψ such that for all J-structures M and X ∈ M we have M ⊨ ψ(X) iff (i) X ⊆ HC^M; (ii) M is a J-model over (HC^M, X); (iii) M|1 ⊨ “X is self-scaled”; (iv) every proper segment of M is sound; and (v) for every N E M:

1. if N ⊨ “Θ exists” then N ↓ (N|Θ^N) is a P^N-strategy premouse of type ϕg;

2. if N ⊨ “Θ does not exist” then N is passive.

Lemma 5.48. Let M be a J-structure and X ∈ M. Then the following are equivalent: (i) M is a Θ-g-organized F-premouse over X; (ii) M ⊨ “I am a Θ-g-organized premouse over X” and P^M = M and Σ^M ⊆ Λ_M; (iii) M|1 is a Θ-g-organized premouse over X and every proper segment of M is sound and for every N E M,


1. if N ⊨ “Θ exists” then N ↓ (N|Θ^N) is a g-organized F-premouse;

2. if N ⊨ “Θ does not exist” then N is passive.

Lemma 5.49. GF is basic and condenses finely.

Definition 5.50. Suppose F is a nice operator corresponding to an iteration strategy and X ⊆ R is self-scaled. We define Lp^{GF}(R, X) as the stack of all Θ-g-organized F-mice N over (H_{ω1}, X) (with parameter M). We also take “(Θ-g-organized) F-premouse over R” to in fact mean “over H_{ω1}”.

Remark 5.51. It is not hard to see that for any such X as in Definition 5.50, ℘(R) ∩ Lp^{gF}(R, X) = ℘(R) ∩ Lp^{GF}(R, X). Suppose M is an initial segment of the first hierarchy and M is E-active. Note that M ⊨ “Θ exists” and M|Θ is F-closed. By induction below M|Θ^M, M|Θ^M can be rearranged into an initial segment N′ of the second hierarchy. Above Θ^M, we simply copy the E- and B-sequences from M over to obtain an N E Lp^{GF}(R, X) extending N′.

In core model induction applications, we often have a pair (P,Σ) where P is a hod premouse

and Σ is P’s strategy with branch condensation and is fullness preserving (relative to mice in

some pointclass) or P is a sound (hybrid) premouse projecting to some countable set a and Σ is

the unique (normal) ω1 + 1-strategy for P. Let F be the operator corresponding to Σ (using the

formula ϕ_all) and suppose M_1^{F,♯} exists. [16, Lemma 4.8] shows that F condenses finely and M_1^{F,♯} generically interprets F. Also, the core model induction will give us that F ↾ R is self-scaled.^16 Thus, we can define Lp^{GF}(R, F ↾ R) as above (assuming sufficient iterability of M_1^{F,♯}). A core model induction is then used to prove that Lp^{GF}(R, F ↾ R) ⊨ AD^+. What is needed to prove this is the scales analysis of Lp^{GF}(R, F ↾ R) from optimal hypotheses (similar to those used by Steel; see [14] and [10]).^17 This is carried out in [16]; we will not go into details here, but we note that for the scales analysis to go through under optimal hypotheses, we need to work with the Θ-g-organized hierarchy instead of the g-organized hierarchy.

5.0.4. CORE MODEL INDUCTION OPERATORS

Suppose F is a nice operator and Γ is an inductive-like pointclass that is determined. Let M = M_1^{F,♯}. Lp^{gF}(x) is defined as in the previous section. We write Lp^{gF,Γ}(x) for the stack of gF-premice M over x such that every countable, transitive M* embeddable into M has an ω_1-gF-iteration strategy in Γ.

Definition 5.52. Let t ∈ HC with M ∈ J1(t). Let 1 ≤ k < ω. A premouse N over t is F-Γ-k-

suitable (or just k-suitable if Γ and F are clear from the context) iff there is a strictly increasing

sequence 〈δi〉i<k such that

1. ∀δ ∈ N, N ⊨ "δ is Woodin" if and only if ∃i < k (δ = δ_i).
2. o(N) = sup_{i<ω} (δ_{k−1}^{+i})^N.
3. If N|η is a gF-whole strong cutpoint of N then N|(η^+)^N = Lp^{gF,Γ}(N|η).^18
4. Let ξ < o(N), where N ⊨ "ξ is not Woodin". Then C_Γ(N|ξ) ⊨ "ξ is not Woodin".

We write δ^N_i = δ_i; also let δ^N_{−1} = 0 and δ^N_k = o(N).

^16 We abuse notation here, and will continue to do so in the future. Technically, we should write F ↾ HC.
^17 Suppose P = M_1^♯ and Σ is P's unique iteration strategy. Let F be the operator corresponding to Σ. Suppose Lp^{GF}(R, F ↾ R) ⊨ AD^+ + MC. Then in fact Lp^{GF}(R) ∩ ℘(R) = Lp(R) ∩ ℘(R). This is because in L(Lp^{GF}(R, F ↾ R)), L(℘(R)) ⊨ AD^+ + Θ = θ_0 + MC, and hence by [7], in L(Lp^{GF}(R, F ↾ R)), ℘(R) ⊆ Lp(R). Therefore, even though the hierarchies Lp(R) and Lp^{GF}(R, F ↾ R) are different, as far as sets of reals are concerned, we don't lose any information by analyzing the scales pattern in Lp^{GF}(R, F ↾ R) instead of that in Lp(R).

Definition 5.53 (relativizes well). Let F be a Y-mouse operator for some operator Y. We say that F relativizes well if there is a formula φ(x, y, z) such that for any a, b ∈ dom(F) such that a ∈ L_1(b) and a, b have the same cardinality, whenever N is a transitive model of ZFC^- such that N is closed under Y and F(b) ∈ N, then F(a) ∈ N and F(a) is the unique x ∈ N such that N ⊨ φ[x, a, F(b)].

Definition 5.54 (determines itself on generic extensions). Suppose F is a Y-mouse operator for some operator Y. We say that F determines itself on generic extensions if there is a formula φ(x, y, z) and a parameter a such that for almost all transitive structures N of ZFC^- such that ω_1 ⊂ N, N contains a and is closed under F, for any generic extension N[g] of N in V, F ∩ N[g] ∈ N[g] and is definable over N[g] via (φ, a); i.e., for any x ∈ N[g] ∩ dom(F), F(x) = b if and only if b is the unique c ∈ N[g] such that N[g] ⊨ φ[x, c, a].^19

The following definition gives examples of "nice model operators". This is not a standard definition and is given here for convenience more than anything. These are the kind of model operators that the core model induction in these notes deals with. We by no means claim that these operators are all the useful model operators that one might consider. Recall we fixed a V-generic G ⊆ Col(ω, κ). These operators are obtained from the W^*_α, W_α hypotheses formulated for Lp^{GF}(R, F ↾ R)|α; these are straightforward generalizations of the W^*_α, W_α hypotheses for J_α(R).

Definition 5.55 (Core model induction operators). Suppose (P,Σ) is a hod pair below κ; assume furthermore that Σ is a (λ^+, λ^+)-strategy. Let F = F_{Σ,ϕ_all} (note that F, gF are basic, projecting, uniformly Σ_1, and condense finely). Assume F ↾ R is self-scaled. We say J is a Σ core model induction operator, or just a Σ-cmi operator, if in V[G] one of the following holds:
1. J is a projecting, uniformly Σ_1, first order F-mouse operator (or gF-mouse operator) defined on a cone of (H_{ω_1})^{V[G]} above some a ∈ (H_{ω_1})^{V[G]}. Furthermore, J relativizes well.
2. For some α ∈ OR such that α ends either a weak or a strong gap in the sense of [14] and [16], letting M = Lp^{GF}(R, F ↾ R)||α and Γ = (Σ_1)^M, M ⊨ AD^+ + MC(Σ).^20 For some transitive b ∈ H^{V[G]}_{ω_1} and some g-organized F-premouse Q over b, J = F_Λ, where Λ

is an (ω_1, ω_1)-iteration strategy for a 1-suitable (or more fully, F-Γ-1-suitable) Q which is Γ-fullness preserving, has branch condensation, and is guided by some self-justifying-system (sjs) A⃗ = (A_i : i < ω) such that A⃗ ∈ OD^M_{b,Σ,x} for some real x and A⃗ seals the gap that ends at α.^21

^18 Literally we should write "N|(η^+)^N = Lp^Γ(N|η) ↓ t", but we will be lax about this from now on.
^19 By "almost all", we mean for all such N with the properties listed above such that N satisfies some additional property. In practice, this additional property is: N is closed under M_1^{F,♯}.
^20 MC(Σ) stands for Mouse Capturing relative to Σ, which says that for x, y ∈ R, x is OD(Σ, y) (or equivalently x is OD(F, y)) iff x is in some gF-mouse over y. SMC is the statement that for every hod pair (P,Σ) such that Σ is fullness preserving and has branch condensation, MC(Σ) holds.

6. Lifting Operators

We assume the hypothesis of (i). Suppose (P*,Σ) is a hod pair below κ such that Σ is a (κ^+, κ^+)-strategy in V[G] and Σ ↾ V ∈ V (or (P*,Σ) = (∅, ∅)). We fix a V-generic G ⊆ Col(ω, γ), where γ < κ is such that γ^ω = γ and cof(o(Lp^Σ(A))) < γ for some A ⊆ κ coding H_κ. Suppose J is a Σ-cmi-operator. We assume J is defined on a cone in H^{V[G]}_{κ^+} above some x ∈ H^V_{κ^+} and J ↾ V ∈ V.^22
Let F = F_{Σ,ϕ_all}. Again, we fix A ⊆ κ coding V_κ with cof(o(Lp^F(A))) < γ < κ, where γ is countably closed, and a V-generic G ⊆ Col(ω, γ).

We may assume A codes P*. We set

Lp^Σ_1(A) = Lp^F(A).

Suppose Lp^Σ_α(A) has been defined for α < κ^+; then

Lp^Σ_{α+1}(A) = Lp^{F,+}(Lp^Σ_α(A)),^23

and for limit ξ < κ^+,

Lp^Σ_ξ(A) = ⋃_{α<ξ} Lp^Σ_α(A).

Lemma 6.1. Let A be as above. Then Lp^Σ_{κ^+}(A) ⊨ "λ^+ exists". Furthermore, cof((λ^+)^{Lp^Σ_{κ^+}(A)}) < κ.

Proof. Suppose not. This easily implies that we can construct over Lp^Σ_{κ^+}(A) a □_κ-sequence.^24 This contradicts ¬□_λ in V. Now the cofinality clause follows from the fact that κ is singular.

Let S be the set of X ≺ H_{κ^{++}} such that γ ⊆ X, |X| = γ, X^ω ⊆ X, X is cofinal in the ordinal height of Lp^Σ(A), and J, (P* ∪ {P*}, Σ) ∈ X.^25 So S is stationary. We let π_X : M_X → H_{κ^{++}} be the uncollapse map and λ_X be the critical point of π_X; note that λ_X > γ. We now prove some lemmas about "lifting" operators.

Lemma 6.2 (Full hulls). Suppose B* ⊆ κ. Suppose X ∈ S is such that B* ∈ X and X is cofinal in Lp^Σ(B*). Let π_X(B) = B*. Then Lp^Σ(B) ⊆ M_X.

^21 This implies that A⃗ is Wadge cofinal in Env(Γ), where Γ = Σ^M_1. Note that Env(Γ) = ℘(R)^M if α ends a weak gap and Env(Γ) = ℘(R)^{Lp^Σ(R)|(α+1)} if α ends a strong gap.
^22 We note the specific requirement that the cone over which J is defined is above some x ∈ V. These are the Σ-cmi-operators that we will propagate in our core model induction. We will not deal with all Σ-cmi-operators.
^23 Lp^{F,+}(Lp^Σ_α(A)) is defined similarly to Lp^F but here we stack continuing, F-sound F-premice.
^24 Squares hold in Lp^Σ_{κ^+}(A) because Σ has hull and branch condensation.
^25 This means (P*, Σ ↾ V) ∈ X and Σ ∈ X[G], but we will abuse notation here.


Proof. We just prove the first clause. Suppose not. Then let M ⊴ Lp^Σ(B) be the least counterexample. Let E be the (λ_X, κ)-extender derived from π_X. Let N = Ult(M, E). Then any countable transitive N* embeddable into N (via σ) is embeddable into M (via τ) such that i_E ∘ τ = σ, by countable completeness of E. So N* is ω_1 + 1 Σ^σ-iterable because M ⊴ Lp^Σ(B), σ^{-1}(P*) = τ^{-1}(P*), and σ ↾ σ^{-1}(P*) = τ ↾ σ^{-1}(P*). So N ⊴ Lp^Σ(B*). But since π_X is cofinal in Lp^Σ(B*), N ∉ Lp^Σ(B*). Contradiction.
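The factoring used in the countable completeness step can be displayed as follows; this is only a restatement of the maps σ, τ, i_E from the proof above, not an additional claim.

% The copy maps in the proof of Lemma 6.2: a countable hull N* of the
% ultrapower N = Ult(M, E) factors through M itself.
\[
\mathcal N^* \xrightarrow{\ \tau\ } \mathcal M \xrightarrow{\ i_E\ } \mathcal N = \mathrm{Ult}(\mathcal M, E),
\qquad i_E \circ \tau = \sigma .
\]
% Since M is an initial segment of Lp^Sigma(B), it is countably Sigma-iterable;
% pulling its strategy back under tau witnesses the iterability of N*, which is
% what places N inside Lp^Sigma(B*).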

Lemma 6.3 (Lifting in V).
1. If H is defined by (ψ, a) on H^{V[G]}_{ω_1} (as in clause 1 of 5.55) with a ∈ V and H ↾ V ∈ V, then H can be extended to an operator H^+ defined by (ψ, a) on H_{κ^+}. Furthermore, H^+ relativizes well.
2. If (Q, F) and Γ are as in clause 2 of Definition 5.55, where F plays the role of Λ there, with (Q, F ↾ V) ∈ V, then F can be extended to a (κ^+, κ^+)-strategy that has branch condensation. Furthermore, there is a unique such extension.

Proof. To prove 1), first let A* be a bounded subset of κ (in the cone above a) and let X ∈ S be such that A, A* ∈ X and X is cofinal in Lp^Σ(A*). Let π_X(A_X, A*_X) = (A, A*). We assume H is an F-mouse operator. By Lemma 6.2, H(A_X) ∈ M_X, and since H relativizes well, H(A*_X) ∈ M_X (we use the fact that A* ∈ L_1[A]). Hence we can define H^+(A*) = π_X(H(A*_X)) (as the first level M ⊴ Lp^Σ(A*) that satisfies ψ[A*, a]). This defines H^+ on all bounded subsets of κ.
Now let A* be a bounded subset of κ^+. Let X ≺ H_{κ^{++}} be such that X is countably closed, |X| < κ, and X is cofinal in Lp^Σ(A*). Let π_X : M_X → X be the uncollapse map. By Lemma 6.2, Lp^Σ(A*_X) ⊆ M_X, where A*_X = π_X^{-1}(A*). So H(A*_X) ∈ M_X and we can define H^+(A*) to be π_X(H(A*_X)). It's easy to see then that H^+ also relativizes well (Exercise: verify this).

We first prove the "uniqueness" clause of 2). Suppose F_1 and F_2 are two extensions of F and let T be according to both F_1 and F_2. Let b_1 = F_1(T) and b_2 = F_2(T). If b_1 ≠ b_2 then cof(lh(T)) = ω. So letting T* be a hull of T such that |T*| ≤ γ and letting π : T* → T be the hull embedding, then b_1 ∪ b_2 ⊆ rng(π). Then π^{-1}[b_1] = F(T*) ≠ π^{-1}[b_2] = F(T*). Contradiction.

To show existence, let F_{γ^+} = F. Inductively, for each limit ordinal ξ with γ^+ ≤ ξ < κ, we define a strategy F_ξ extending F_α for α < ξ, where F_ξ acts on trees of length ξ. For X ≺ Y ≺ H_{κ^{++}}, let π_{X,Y} = π_Y^{-1} ∘ π_X. Let T be a tree of length ξ such that for all limit ξ* < ξ, T ↾ ξ* is according to F_{ξ*}. We want to define F_ξ(T).
For X ∈ S such that X is cofinal in Lp^Σ_+(A), let (T_X, ξ_X) = π_X^{-1}(T, ξ) and b_X = F(T_X). Let c_X be the downward closure of π_X[b_X] and c_{X,Y} be the downward closure of π_{X,Y}[b_X].

Claim: For every γ < ξ, either ∀∗X ∈ S (γ ∈ c_X) or ∀∗X ∈ S (γ ∉ c_X).^26

^26 This means that there is a club set C such that the stated alternative holds for all X ∈ C ∩ S.

Proof. The proof is similar to that of Lemma 2.5 in [9], so we only sketch it here. Suppose for contradiction that there is some γ such that there are stationarily many X ∈ S with γ ∈ c_X and there are stationarily many Y ∈ S with γ ∉ c_Y. Suppose first cof(ξ) ∈ [ω_1, γ]. Note that crt(π_X), crt(π_Y) > γ. It is easy then to see that π_X[b_X] is cofinal in ξ and π_Y[b_Y] is cofinal in ξ. Hence c_X = c_Y. Contradiction.

Now suppose cof(ξ) = ω. Fix a surjection f : θ ↠ ξ (where θ = |ξ|). For ∀∗X ∈ S we have (f, ξ) ∈ X, so let (f_X, ξ_X, θ_X) = π_X^{-1}(f, ξ, θ). For each such X, let α_X be least such that f_X[α_X] ∩ b_X is cofinal in ξ_X. By Fodor's lemma,

∃α ∃U (U is stationary ∧ ∀X ∈ U α_X = α).

By symmetry and by thinning out U, we may assume

X ∈ U ⇒ π_X^{-1}(γ) ∈ b_X.

Fix Y ∈ S such that γ ∉ c_Y and α < λ_Y. Since U is stationary, there is some X ∈ U such that Y ≺ X, which implies that

π_{Y,X}[f_Y[α]] = f_X[α]

is cofinal in b_X, and hence T_Y ⌢ π_{Y,X}^{-1}[b_X] is a hull of T_X ⌢ b_X. Since F condenses well, π_{Y,X}^{-1}[b_X] = b_Y. This contradicts the fact that π_X^{-1}(γ) ∈ b_X but π_Y^{-1}(γ) ∉ b_Y.

Finally, suppose cof(ξ) ≥ γ^+. The case where ∀∗X T_X is maximal is proved exactly as in Lemma 1.25 of [9] (Exercise: Please read this argument in [9]). The main point is the following: for any X ≺ Y in the above club, if sup π_{X,Y}[lh(T_X)] = λ < lh(T_Y), then T_Y = (T_Y|λ) ⌢ U, where c_{X,Y} = [0, λ]_{T_Y} and U is a tree on M^{T_Y}_λ. In other words, b_Y = [0, λ]_{T_Y} ⌢ c, where c is a cofinal branch of U. So c_{X,Y} ⊆ b_Y. In the case λ = lh(T_Y), we get c_{X,Y} = b_Y.

Suppose now that T_X is short and is according to F. Note that lh(T_X) has uncountable cofinality (in V). We claim that ∀∗X ∈ S, b_X = F(T_X) ∈ M_X. Given the claim, we get that for any two such X ≺ Y satisfying the claim, π_{X,Y}(b_X) is cofinal in T_Y and hence π_{X,Y}(b_X) = b_Y. This gives that c_{X,Y} is an initial segment of b_Y, which is what we want to prove.

It remains to see that ∀∗X ∈ S, b_X = F(T_X) ∈ M_X. Q(T_X) is the least Q ⊴ Lp^{Σ,Γ}_+(M(T_X)) that defines the failure of Woodinness of δ(T_X). Since δ(T) has uncountable cofinality (in V and in V[G]), by a standard interpolation argument, whenever M_0, M_1 ∈ Lp^{Σ,Γ}_+(M(T_X)), we have either M_0 ⊴ M_1 or M_1 ⊴ M_0. So the "leastness" of Q(T_X) is justified in this case. By the same proof as that of Lemma 6.2 and the facts that X is cofinal in Lp^Σ_+(A), M(T) is coded into A, and Lp^{Σ,Γ}_+(M(T_X)) ⊴ Lp^Σ_+(M(T)), we get Q(T_X) ∈ M_X.
Now F(T_X) = b_X is the unique branch b such that Q(b, T_X) exists and is a Σ-premouse over M(T_X), and hence Q(b, T_X) is Q(T_X). The uniqueness of b_X follows from a standard comparison argument. By an absoluteness argument and the fact that Q(T_X) ∈ M_X, b_X ∈ M_X. We're done.

Letting C be the club as in the claim, we can just define

γ ∈ F_ξ(T) ⇔ ∀X ∈ C ∩ S (γ ∈ c_X).

For κ ≤ ξ < κ^+ and T of length ξ according to F_ξ, we define F_{ξ+1}(T) in a similar fashion as above, except now we let S be the collection of hulls X ≺ H_{κ^{++}} such that |X| < κ, X is countably closed, and X is cofinal in Lp(M(T)).


It’s easy to verify that with this definition, the unique extension of F to a (κ+, κ+) strategy has

branch condensation (Exercise: Verify this). This completes the proof sketch of the lemma.

Remark 6.4. (i) In the proof of the claim above, we do not use our hypothesis in the cases where

ω ≤ cof(ξ) ≤ γ, but we seem to need our hypothesis in the case cof(ξ) > γ.

(ii) If we assume instead full PFA, so that in particular □(α) fails for every cardinal α ≥ ω_3, then we can show the following: for any (N, Σ) such that |N| ≤ ω_2 and Σ is an (ω_3, ω_3)-iteration strategy for N with branch condensation, Σ can be extended uniquely to a strategy Σ^+ acting on all stacks of normal trees in V. Let T be according to Σ^+ and of limit length. We define Σ^+(T) as follows: if cof(lh(T)) < ω_3, we define Σ^+(T) by the procedure in the proof of the Claim in Lemma 6.3. Otherwise, observe that C⃗ = {[0, λ]_T : λ < lh(T)} is a coherent sequence, and by ¬□(lh(T)) we get a thread D. This in turn gives us a cofinal (necessarily unique and wellfounded) branch b of T. Let Σ^+(T) be this b.
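To make the appeal to ¬□(lh(T)) explicit, the coherence being used can be displayed as follows, in the style of [8]; this is only an unpacking of the remark above, not an additional claim.

% Coherence of the sequence of tree branches, and how a thread yields a branch.
\[
\vec C = \langle\, C_\lambda : \lambda < \mathrm{lh}(\mathcal T)\ \text{limit} \,\rangle,
\qquad C_\lambda = [0,\lambda]_{\mathcal T}\cap\lambda ,
\]
\[
\text{coherence:}\quad \lambda \ \text{a limit point of}\ C_{\lambda'} \ \Longrightarrow\ C_{\lambda'}\cap\lambda = C_\lambda .
\]
% A thread is a club D contained in lh(T) with D \cap \lambda = C_\lambda at every
% limit point \lambda of D; by \neg\square(lh(T)) such a thread exists, and it
% generates the desired cofinal branch b of T.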

The following sometimes comes up during CMI beyond L(R), but we do not need it for proving

AD holds in L(R).

Lemma 6.5 (Extending to generic extensions).
1. If H is defined by (ψ, a) on H^{V[G]}_{ω_1} as in clause 1 of 5.55 with a ∈ H^V_{γ^+}, then H can be extended to a first order mouse operator H^+ defined by (ψ, a) on H^{V[G]}_{κ^+}. Furthermore, H^+ relativizes well, and if H determines itself on generic extensions then so does H^+.
2. If (Q, Λ) and Γ are as in clause 2 of Definition 5.55, where F plays the role of J there, and (Q, Λ ↾ V) ∈ V, then Λ can be extended to a unique (κ^+, κ^+)-strategy that has branch condensation in V[G].

Proof. For (1), let b ∈ H^{V[G]}_{κ^+} and let τ ∈ H^V_{κ^+} be a nice Col(ω, γ)-name for b (Col(ω, γ) is γ^+-cc, so such a name exists since κ is singular strong limit).^27 Assume H is a Σ-mouse operator (the other case is proved similarly). Let X ∈ S be such that P* ∪ {P*}, Σ, b, τ, H(τ) ∈ X[G]; here we use Lemma 6.3 to get that H(τ) is defined. Let (b_X, τ_X) = π_X^{-1}(b, τ). Then π_X^{-1}(H(τ)) = H(τ_X) ∈ M_X by condensation of H. Since H relativizes well, H(b_X) ∈ M_X[G]. This means we can define H^+(b) to be π_X(H(b_X)). We need to see that H^+(b) is countably Σ-iterable in V[G]. So let π : N → H^+(b) with N countable transitive in V[G] and π(b*) = b. Let X ⊂ Y ∈ S be such that ran(π) ⊆ ran(π_Y); then H(π_Y^{-1}(b)) ∈ M_Y[G] and there is an embedding from N into H(π_Y^{-1}(b)), so N has an (ω_1, ω_1 + 1)-Σ-iteration strategy. The definition doesn't depend on the choice of X, and it's easy to see that H^+ satisfies the conclusion (Exercise: Verify this).

For (2), let M ∈ H^{V[G]}_{κ^+} be transitive and let τ ∈ H^V_{κ^+} be a Col(ω, γ)-term for M. We define the extension Λ^+ of Λ as follows (it's easy to see that there is at most one such extension). We work in N = L^{Λ*}_{κ^+}[B, M], where B ⊆ κ codes tr.cl.(τ) and a well-ordering of tr.cl.(τ), and where Λ* is the unique (κ^+, κ^+)-Λ-strategy for M_1^{Λ,♯} in V; Λ* exists by Lemma 6.3.

^27 In particular, a nice Col(ω, γ)-name for a real of V[G] can be considered a subset of γ, and hence such a name is an element of H^V_{γ^+}.
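For the reader's benefit, here is the routine counting behind the footnote, for a real viewed as a subset of ω; the display is ours and only records standard facts about nice names.

% A nice Col(omega,gamma)-name for a real is built from countably many antichains.
\[
\dot x \;=\; \bigcup_{n<\omega}\,\{\,(p,\check n) : p\in A_n\,\},
\qquad A_n \subseteq \mathrm{Col}(\omega,\gamma)\ \text{an antichain}.
\]
% Col(omega,gamma) has size gamma and is gamma^+-cc, so |A_n| <= gamma for each n; hence
\[
|\mathrm{tr.cl.}(\dot x)| \;\le\; \omega\cdot\gamma \;=\; \gamma \;<\;\gamma^+,
\qquad\text{so}\qquad \dot x \in H^V_{\gamma^+} .
\]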


Let T_{tr.cl.(τ)} be according to Λ* and be defined as in Definition 5.36. Note that (κ^+)^N < (κ^+)^V by ¬□_κ and the fact that □_κ holds in N, so T_{tr.cl.(τ)} ∈ N and has length less than o(N) = κ^+. Let R be the last model of T_{tr.cl.(τ)} and note that, by the construction of T_{tr.cl.(τ)}, M is generic over R. Let U ∈ M be a tree according to Λ^+ of limit length; then set Λ^+(U) = b, where b is given by (the proof of) [16, Lemma 4.8], by interpreting Λ over generic extensions of R. By a simple reflection argument, it's easy to see that Λ^+(U) doesn't depend on M. Exercise: Verify this.
This completes the construction of Λ^+. It's easy to see that Λ^+ has branch condensation.

Exercise 6.6. Formulate and prove analogs of the above lemmata for situations (ii) and (iii).

7. The J ↦ M_1^{J,♯} step

Let J be a Σ-cmi-operator for some Σ such that J is defined on a cone above x ∈ H^V_{γ^+}. We now proceed to construct the operator M_1^{J,♯} : x ↦ M_1^{J,♯}(x). The domain of M_1^{J,♯} will be the same as the domain of J. We write M_0^{J,♯}(x) for the least E-active, sound J-mouse over x.

Lemma 7.1. For every A bounded in κ^+, M_0^{J,♯}(A) exists.

Proof. By Lemma 6.3, it's enough to show that if B is a bounded subset of γ^+, then M_0^{J,♯}(B) exists. Suppose not. Let M = L^J[B] (by L^J[B], we mean L^J_{κ^+}[B], and so M has ordinal height κ^+). Note that κ^+ > (κ^+)^M because □_κ holds in M but fails in V.^28 On the other hand, by the Jensen covering theorem, (κ^+)^M = κ^+. Contradiction.

Lemma 7.2. Suppose A is a bounded subset of κ^+. Then M_1^{J,♯}(A) exists and is (κ^+, κ^+)-iterable.

Proof. It suffices to show M_1^{J,♯}(a) exists for a a bounded subset of γ^+ (with a coding x). Fix such an a and suppose not. Then the Jensen-Steel core model (cf. [1]) K^J(a) exists.^{29,30} Let ξ = (κ^+)^{K^J(a)}. Since κ^+ = o(K^J(a)) > γ^+ is a limit of cardinals in K^J(a), and since □_κ holds in K^J(a) but ¬□_κ holds in V, we get ξ < κ^+. Weak covering (cf. [1, Theorem 1.1 (5)]) gives us

cof(ξ) ≥ |ξ| ≥ κ.  (7.1)

(7.1) in fact implies cof(ξ) = κ. But κ is singular, so this is impossible.
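The last two sentences can be spelled out as the following small computation; this is only the arithmetic implicit in the proof above.

% Why (7.1) pins down cof(xi) = kappa, and why that is impossible.
\[
\xi < \kappa^+ \;\Longrightarrow\; |\xi| \le \kappa, \qquad
\mathrm{cof}(\xi) \le |\xi| ,
\]
\[
\text{so together with (7.1):}\quad
\kappa \;\le\; |\xi| \;\le\; \mathrm{cof}(\xi) \;\le\; |\xi| \;\le\; \kappa,
\quad\text{hence}\quad \mathrm{cof}(\xi) = \kappa .
\]
% But cof(xi) is always a regular cardinal, while kappa is singular: contradiction.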

Lemmas 6.3, 6.5, and 7.2 allow us to extend M_1^{J,♯} to H^{V[G]}_{κ^+}.

Exercise 7.3. Formulate and prove the analogs of the above lemmata for κ in (ii) and (iii).

Exercise 7.4. Given J as above, let J_0 = J and J_{n+1} = M_1^{J_n,♯} for all n. Suppose J_n is defined for all n. Show that the operators M_n^{J,♯} are defined for all n.

^28 In fact, cof((κ^+)^M) < κ because κ is singular.
^29 By our assumption and the fact that J condenses finely, K^{c,J}(a) (constructed up to λ^+) converges and is (λ^+, λ^+)-iterable. See Lemma 5.16. We can then use the K^J-existence dichotomy [11, Theorem 3.1.9] to conclude that K^J(a) exists.
^30 One can also work with the stable core models as done in [11].


8. The induction

Again, let κ, etc., be the objects associated with (i). We will prove W^*_{α+1} assuming W^*_α, for critical ordinals α.

8.1. The successor and countable cofinality cases

We assume W^*_α for α as in case (A) (α = η + 1 for η critical), or case (B)(a) (cof(α) = ω). In particular, we know J_α(R) ⊨ AD and that for all β ≤ α, W^*_β holds.

We first do the proof below for α satisfying (B)(a). Let α = sup_n α_n, where each α_n < α begins a gap. Let Γ = Σ_1^{J_α(R)} and Γ_n = Σ_1^{J_{α_n}(R)}. It is easy to see that:
• C_Γ = ⋃_n C_{Γ_n}.
• For any real z and any set A ∈ Γ(z), there is a sequence ⟨A_n : n < ω⟩ such that A_n ∈ Γ_n(z) and A = ⋃_n A_n (see the computation displayed after this list).
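A minimal verification of the second bullet, with θ standing for a Σ_1 formula defining A over J_α(R) from z (the name θ is ours):

% Decomposing a Sigma_1-over-J_alpha(R) set along the alpha_n's.
\[
A \;=\; \{\, y\in\mathbb R : J_\alpha(\mathbb R)\models\theta[y,z] \,\}
  \;=\; \bigcup_{n<\omega} A_n,
\qquad
A_n \;:=\; \{\, y\in\mathbb R : J_{\alpha_n}(\mathbb R)\models\theta[y,z] \,\}\in\Gamma_n(z),
\]
% using that alpha = sup_n alpha_n, that a Sigma_1 fact true in J_alpha(R) reflects to
% some J_beta(R) with beta < alpha, and that Sigma_1 facts persist upward.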

For each n, we have the sequence ⟨J^m_n : m < ω⟩ witnessing W^*_{α_n+1}. Furthermore, we take J = J^0_n to be a fine-structural mouse operator, and J^m_n is the operator M_m^{J,♯}. The domain of these operators is the cone in H^{V[G]}_{κ^+} above some set A_n ∈ H^V_{γ^+}.

Definition 8.1. For any A ∈ H^V_{γ^+} coding ⊕A_n (i.e., ⊕A_n ∈ L_1[A]), let J^0_α(A) be the least M ⊴ Lp(A) such that for some γ ≤ o(M), M|γ is a ZFC^−-model closed under J^m_n for all m, n. We then define J^n_α(A) as M_n^{J^0_α,♯}(A), if this can be done.

By the previous section, J^n_α is defined on the cone above A over H^{V[G]}_{κ^+}, and each J^n_α relativizes well and determines itself on generic extensions.

Lemma 8.2. If J^n_α is defined on the cone above A over H^{V[G]}_{κ^+} for each n, then W^*_{α+1} holds.

Proof. Fix l < ω. In V[G], let U be a set of reals in J_{α+1}(R). Let ϕ be a Σ_k-formula and z ∈ R be such that y ∈ U ⇔ J_α(R) ⊨ ϕ[y, z].
Let ρ ∈ H^V_{γ^+} be such that ρ^G = z. Set

P = J^{k+l+2}_α(A, ρ).

Let Σ be the canonical strategy of P. We may also assume W is a universal Γ(z)-set and W = ⋃_n W_n, where W_n is a universal Γ_n(z)-set. Note that P[G] is closed under C_{Γ_n} for each n by the closure of P (this is why we close P under the J^m_n's).
Let δ_0 < · · · < δ_{k+l+1} be the Woodin cardinals of P (and hence of P[G]). For each n, there is a term τ_n ∈ P[G]^{Col(ω,δ_{k+l+1})} for the universal Γ_n-set W_n witnessing that W_n is term captured at δ_{k+l+1} by P[G]. This follows from Lemma 4.10. We can then define a term τ ∈ P[G]^{Col(ω,δ_{k+l+1})}, using the τ_n's, for W. Using Lemma 4.5, we can define a term σ ∈ P[G]^{Col(ω,δ_{l+1})} for U witnessing that U is term captured at δ_{l+1} by (P, Σ).
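One natural way to assemble τ from the τ_n's (a sketch; the notes do not spell out this step, and the particular choice below is ours, assuming, as the appeal to Lemma 4.10 suggests, that the sequence ⟨τ_n : n < ω⟩ is definable over P[G] and hence a member of it):

% A term for W = union of the W_n, assembled from the capturing terms tau_n.
\[
\tau \;:=\; \bigcup_{n<\omega}\tau_n ,
\]
\[
\tau^{h} \;=\; \bigcup_{n<\omega}\tau_n^{\,h}
        \;=\; \bigcup_{n<\omega}\bigl(W_n\cap P[G][h]\bigr)
        \;=\; W\cap P[G][h]
\quad\text{for }h\subseteq \mathrm{Col}(\omega,\delta_{k+l+1})\text{ generic over }P[G],
\]
% using W = union of the W_n and the capturing property of each tau_n.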


Exercise: Using σ, verify that we can build trees S, T witnessing that U is Suslin captured at δ_{l+1} by (P, Σ). Basically, S builds tuples (y, Q, h, π) such that y ∈ R, π : Q → P[G]|γ is elementary, where γ is a fixed limit ordinal > δ_{l+1}, P[G]|γ is closed under the C_{Γ_n}'s, h is Q-generic for Col(ω, π^{-1}(δ_{k+l+1})), and y ∈ π^{-1}(σ)^h. T does a similar thing but for the term that captures the complement of U.

The case (A) is handled similarly. Let Γ_k = Σ_k(J_η(R)) and let Γ = Σ_1(J_α(R)). Note that Γ ⊆ ⋃_k Γ_k. This fact is used to show that if M is closed under C_Γ, then for any real x ∈ M, any M-cardinal κ, and any Γ(x)-set of reals A, there is a term τ ∈ M^{Col(ω,κ)} that captures A. We can then run the proof of the above lemma.

8.2. The uncountable cofinality cases: inadmissible gap

Suppose α is a limit ordinal of uncountable cofinality and J_α(R) is inadmissible, as witnessed by φ(v_0, v_1) and x ∈ R; that is, φ is Σ_1 and

∀y ∃β < α  J_β(R) ⊨ φ[x, y],

and letting β(x, y) be the least such β for y, we have α = sup{β(x, y) : y ∈ R}.
Let G ⊆ Col(ω, γ) be V-generic as before and τ^G = x, where τ ∈ H^V_{γ^+}. Let p ∈ G force all relevant statements. Let A ∈ H^V_{γ^+} be transitive, self-wellordered, and code τ in some simple way (i.e., τ ∈ L_1[A] and a well-ordering of A is in L_1[A]). Call such an A suitable. For any A-premouse M and any generic g × h ⊆ Col(ω, γ) × Col(ω, A), M[g][h] can be construed as a z-mouse for a real z = z(g, h) obtained from g, h, A, and coding g, h, A, in some simple fashion.

There is a term σ_A for a real, defined from A in M, such that whenever g × h is generic as above, then

(σ_A^{g×h})_0 = τ^g,

and

{(σ_A^{g×h})_i : i > 0} = {ρ^{g×h} : ρ ∈ L_1[A] ∧ ρ^{g×h} ∈ R}.

For n < ω, let

ϕ^*_n(v) ≡ ∃γ (J_γ(R) ⊨ ∀i ∈ ω (i > 0 ⇒ φ((v)_0, (v)_i)) ∧ (γ + ωn) exists).  (8.1)

Let ψ be the sentence in the language of A-premice such that for any A-mouse M: M ⊨ ψ iff whenever g × h is M-generic as above and p ∈ g, for any n there is γ < o(M) such that M[z(g, h)]|γ is a ⟨ϕ^*_n, σ_A^{g×h}⟩-witness.

Definition 8.3. For any suitable A, let J_0(A) be the least M ⊴ Lp(A) satisfying ψ, if it exists. Otherwise, J_0(A) is undefined.

Lemma 8.4. For any suitable A, J0(A) is defined. Furthermore, J0(A) is OD(A) in Jγ(R) for

some γ < α.


Proof. In V[G], using W_α and cof(α) > ω, we have a mouse N over (A, G) such that whenever H ⊆ Col(ω, A) is N-generic, N[H] (as a mouse over z(G × H)) is a ⟨ϕ^*_n, σ_A^{G×H}⟩-witness for all n, and furthermore N (and N[H]) has an iteration strategy in J_α(R). Exercise: Think through this.
Let P be the structure constructed over A from the extender sequence of N (this is the so-called P(S)-construction, cf. [14]). One can show P is an iterable A-mouse such that P[G] = N. Hence P ∈ V and P ⊴ Lp(A).
We need to verify P ⊨ ψ. This tells us J_0(A) is defined. Exercise: Verify this.

Remark 8.5. Let A be suitable, let G, H be as above, and let M = J_0(A). For each m, let β_m be the least β such that J_β(R) ⊨ ϕ^*_m(σ_A^{G×H}), and let γ_m be the least γ such that M|γ[z(G, H)] is a ⟨ϕ^*_m, σ_A^{G×H}⟩-witness; then M|γ_m has an iteration strategy in J_{β_m+k}(R) for some k < ω. This uses the definition of ϕ^*_m.

Lemma 8.6. J0 relativizes well.

Proof. Let A, B be suitable with A ∈ L_1[B]. Let N be any transitive ZFC^− model such that J_0(B) ∈ N. We will show how, in N, one can compute J_0(A) from J_0(B).
Let G × H be J_0(A)-generic for Col(ω, γ) × Col(ω, A). Let γ_m be the least γ such that J_0(A)|γ[z(G, H)] is a ⟨ϕ^*_m, σ_A^{G×H}⟩-witness. Then o(J_0(A)) = sup_m γ_m. It suffices to show we can compute J_0(A)|γ_m from J_0(B) in N for each m. Fix such an m.
Let β_m be the least β such that J_β(R) ⊨ ϕ^*_m(σ_A^{G×H}). By Remark 8.5, J_0(A)|γ_m has an iteration strategy Σ in J_{β_m+k}(R) for some k < ω. In fact, Σ is OD(A) there.
Let K be such that G × K is Col(ω, γ) × Col(ω, B)-generic for J_0(B). Assume also that H is coded into K. If β* is least such that J_{β*}(R) ⊨ ϕ^*_m(σ_B^{G×K}), then β_m ≤ β* (Why?).
We then have that there is an initial segment of J_0(B) that can compute the theory of J_{β_m+k}(R). This gives J_0(A)|γ_m ∈ J_0(B). This computation is uniform in the parameters A, J_0(B).

Let J = J_0 and, for n > 0, let J_n be the M_n^{J,♯}-operator. By Sections 6 and 7, J_n is defined on a cone above τ in H^{V[G]}_{κ^+}.

Lemma 8.7. Suppose J_n is defined on a cone above τ in H^{V[G]}_{κ^+}. Then W^*_{α+1} holds in L(R) (in V[G]).

Proof. Let U be a set of reals in J_{α+1}(R) and let k < ω. We want to construct a (k, U)-coarse Woodin mouse. Suppose U is Σ_n-definable over J_α(R) from real parameter z. Let ρ be such that ρ^G = z. Let P = J_{k+n+3}(TC(τ, ρ)). We show that P[G] is the desired witness.
In the following, we assume n = 1 (and leave the general case n > 1 for the reader to fill in). Let ψ be a Σ_1 formula defining U (from z). Let Σ be the canonical strategy for P (and P[G]). Let δ_0 < · · · < δ_{k+3} be the Woodin cardinals of P.

Claim 8.8. There is a term U̇ ∈ P[G]^{Col(ω,δ_{k+2})} that witnesses that (P[G], Σ) captures U at δ_{k+2}.


Proof. Let A = P|δ_{k+3}. Then P = J_0(A). Whenever H is Col(ω, A)-generic over P[G], then for each m there is some γ such that P[z(G, H)]|γ is a ⟨ϕ^*_m, σ_A^{G×H}⟩-witness. So P[z(G, H)]|γ has a tree for Th(H, γ, k), where we let Th(H, γ, k) be the Σ_{k+3}-theory of the least level of L(R^{P[z(G,H)]|γ}) that satisfies θ_k[z] (here ϕ^*_m plays the role of θ).

We define U̇ as follows. For any h ⊆ Col(ω, δ_{k+2}) that is P[G]-generic and any real y ∈ P[G, h]: y ∈ U̇^h iff for any l ⊆ Col(ω, δ_{k+3}) that is P[G, h]-generic, there is some γ such that

∀σ ∈ L_1[P[G, h]|δ_{k+3}]  (φ(x, σ^l) ∈ Th(h × l, γ, k))

and

ψ(y, z) ∈ Th(h × l, γ, k).

Exercise: Verify that U̇^h ⊆ U. (The main point is that Th(h × l, γ, k) gives the theory of the first initial segment of L(R) at which φ[x, σ^l] is verified to be true for all σ ∈ L_1[P[G][h]|δ_{k+3}].)

For the converse, let y ∈ U ∩ P[G][h]. Let ξ < α be least such that J_ξ(R) ⊨ ψ[y, z]. Pick u ∈ R such that β(u, x) ≥ ξ. Let i : P[G, h] → Q[G, h] be a u-genericity iteration at δ_{k+3}. Let l ⊆ Col(ω, i(δ_{k+3})) be Q[G, h]-generic such that u ∈ Q[G, h, l]. Let H be such that Q[G, h, l] = Q[G, H], and let Q[z(G, H)]|γ' be a ⟨ϕ^*_m, σ_A^{G×H}⟩-witness. As there is a canonical term σ for u in L_1[Q[G, h]|i(δ_{k+3})], φ[x, u] ∈ Th(H, γ', k). By the choice of γ', Th(H, γ', k) is the Σ_{k+3}-theory of a level J_{ξ'}(R), restricted to real parameters in L_1[Q[G, H]|i(δ_{k+3})], with ξ' ≥ β(x, u) ≥ ξ. This gives y ∈ i(U̇)^l. This gives y ∈ U̇^l for any l ⊆ Col(ω, δ_{k+3}) that is P[G, h]-generic.

Exercise: Now complete the proof by building absolutely complementing trees T, U from U̇.

8.3. The uncountable cofinality cases: admissible gap

Outline:

Assume α, β are as in case (C)(a) or (C)(b). Using Theorem 3.4, we may assume W^*_β holds. Let Γ = Σ_1^{J_α(R)}. Let n be least such that ρ_n(J_β(R)) = R, and let U be a universal Σ_n^{J_β(R)} set. Using Theorem 3.5, there is a sjs A = {A_i : i < ω} such that U = ⋃_i A_{2i}.

We want to construct a pair (P, Σ) such that
• P is Γ-suitable (cf. Definition 5.52). Let δ = δ^P be the Woodin cardinal of P.
• For each i, there is a term τ_i ∈ P^{Col(ω,δ)} witnessing that (P, Σ) term captures A_i at δ. More precisely, whenever j : P → Q is according to Σ and g ⊆ Col(ω, j(δ)) is generic, A_i ∩ Q[g] = j(τ_i)^g. We will let τ^Q_i be the term for A_i whenever Q is an iterate of P.
• Furthermore, Σ is "guided" by A; in other words, whenever T is according to Σ and b = Σ(T), then either T is short and b is the unique branch such that Q(b, T) ⊴ Lp^Γ(M(T)), or T is maximal and b is the unique branch c such that i^T_c(τ^P_i) = τ^Q_i (where Q = M^T_c).


We then construct the mouse operators J_n = M_n^{Σ,♯} : x ↦ M_n^{Σ,♯}(x). These operators will be defined on a cone above P in H^{V[G]}_{κ^+}. Collectively, they witness W^*_{β+1}.

Term capturing and strategies with fullness preservation:

Let Γ = Σ_1^{J_α(R)}. For A a transitive, self-wellordered set in H^V_{γ^+}, we write Lp^Γ(A) for the collection of A-mice M such that ρ(M) = o(A) and M is ω_1-iterable in J_α(R). We define Γ-1-suitable (or just Γ-suitable) just as in Definition 5.52. Since Γ is fixed throughout this section, if P is Γ-suitable, we simply say P is suitable. We say a tree T on a suitable P is based on δ^P if all extenders used in T are taken from the extender sequence of P|δ^P or its images along the tree.
Recall from Lemma 4.10 that if P is a suitable premouse over some real z and B ∈ OD_{<β}(z), then for each ξ ≥ δ^P there is a (canonical) term τ^P_{B,ξ} ∈ P^{Col(ω,ξ)} such that whenever g ⊆ Col(ω, ξ) is P-generic, B ∩ P[g] = (τ^P_{B,ξ})^g.

Theorem 8.9 (Term-relation condensation, Woodin). Let z be a real and let P be a suitable premouse over z. Let A be a self-justifying system containing a universal Γ-set and such that each B ∈ A is in OD_{<β}(z). Suppose π : Q → P is Σ_1-elementary and

∀B ∈ A ∀ξ ≥ δ^P  (τ^P_{B,ξ} ∈ rng(π)).

Then the following hold.
(i) Q is suitable and for all B ∈ A, π^{-1}(τ^P_{B,ξ}) = τ^Q_{B,ξ*}, where π(ξ*) = ξ.
(ii) rng(π) is cofinal in δ^P.
(iii) If δ^P ⊆ rng(π), then P = Q and π is the identity.

Proof. For part (i), the second clause follows from Lemma 4.10 and the fact that for any P-cardinal ξ ≥ δ^P, C_Γ(H^P_{ξ^+}) ⊆ P. For the first part, note that being Γ-full is a Γ-dual fact. Exercise: Complete the details here.
For part (ii), let λ = sup(rng(π) ∩ δ^P). Let R be the transitive collapse of the set of x such that x is the unique a with a = ψ^{P|ξ}(η⃗, τ⃗), where η⃗ ∈ λ^{<ω} and τ⃗ is a finite subset of {τ^{P|ξ}_{B,ξ*} : B ∈ A ∧ ξ* < ξ}. Let σ : R → P be the uncollapse map. Exercise: Using the regularity of δ^P, verify that σ ↾ λ is the identity and σ(λ) = δ^P.
From part (i), R is suitable. From the exercise, R|λ = P|λ and σ(λ) = δ^P, so R ⊨ "λ is Woodin". But R is suitable, so Lp^Γ(R|λ) = Lp^Γ(P|λ) ⊨ "λ is Woodin". This is a contradiction.
Part (iii) follows from the proof of part (ii).

Definition 8.10. Let P be suitable and let T be a normal tree of length < γ^+ based on δ^P. T is short if for all limit ξ ≤ lh(T), Lp^Γ(M(T ↾ ξ)) ⊨ "δ(T ↾ ξ) is not Woodin". Otherwise, we say T is maximal.

Definition 8.11. Let P be suitable and let Σ be a γ^+-iteration strategy for P in V[G]. Σ is Γ-fullness preserving (or just fullness preserving) if whenever a normal tree T is based on δ^P and is according to Σ with last model Q, then


1. either T does not drop in model and Q is suitable,
2. or the P-to-Q branch drops (in model) and J_α(R) ⊨ "Q is ω_1-iterable".

Remark 8.12. Lp^Γ can "track" a Γ-fullness preserving strategy Σ in the sense that if T is according to Σ and is short, then Lp^Γ(M(T)) identifies the unique branch b for T such that Q(b, T) ⊴ Lp^Γ(M(T)); otherwise, if T is maximal, even though Lp^Γ cannot identify Σ(T), it can tell the final model M^T_{Σ(T)}, namely Lp^Γ_ω(M(T)).

Definition 8.13. A strategy Σ condenses well (or has hull condensation) if whenever T is according

to Σ and S is a hull of T , then S is according to Σ.

Definition 8.14 (A-iterability). Let G(ω, n, ω_1) be the usual iteration game where the players play out a linear stack of n normal iteration trees. Let A ∈ OD_{<β}(z). We say a suitable z-premouse P is (weakly) A-iterable if for all n there is a fullness preserving strategy Σ for II in G(ω, n, ω_1) such that whenever i : P → Q is according to Σ, then i(τ^P_{A,ξ}) = τ^Q_{A,i(ξ)} for all ξ ≥ δ^P.

Exercise 8.15. Assume W^*_β. Show that for any z ∈ HC^{V[G]}, there is a suitable z-premouse that is ω_1-iterable with respect to short trees.
Hint: Assume not. Let z be a witness. Use reflection to reflect the failure of the conclusion to some ξ < α and use W^*_α plus the proof of Lemma 4.20 to get a contradiction.

Theorem 8.16 (Woodin). Suppose Γ is an inductive-like pointclass and the boldface class ∆_Γ is determined. Suppose mouse capturing holds for Γ. Let A be in the boldface envelope Env(Γ) and let z ∈ R. Then there is an A-iterable z-mouse.

Strategies that condense well and are guided by a sjs:

We say that P is weakly A-iterable where A is a countable collection of sets of reals if for any

finite F ⊂ A, P is weakly ⊕F -iterable.

Theorem 8.17 (Woodin). Let A be a countable collection of OD_{<β}(z) sets of reals, where z ∈ HC^{V[G]}. Then there is a suitable, weakly A-iterable z-premouse.

Proof. For each finite F ⊆ A, let Σ_F be a fullness preserving strategy for some z-suitable P_F witnessing that P_F is weakly ⊕F-iterable (cf. Theorem 8.16).

Exercise 8.18. Show that the simultaneous coiteration of the pairs {(P_F, Σ_F) : F ∈ [A]^{<ω}} terminates successfully at some countable ordinal. Furthermore, letting R_F be the last model of the comparison tree U_F according to Σ_F, the branch P_F-to-R_F does not drop.

Let i_F : P_F → R_F be the iteration map according to Σ_F. From the exercise, it is easy to see that for any F, F' as above, R_F = R_{F'}. Let R = R_F. Then it is clear that R is weakly A-iterable.

Definition 8.19. Let A be a collection of sets in OD_{<β}(z). Let P be a suitable z-premouse and let Σ be an ω_1-strategy for P. Σ is said to be guided by A if Σ is fullness preserving and whenever T is countable of limit length and according to Σ, letting b = Σ(T), then


(a) if T is short, then Q(b, T) exists and Q(b, T) ⊴ Lp^Γ(M(T)), or
(b) if T is maximal, then b is the unique branch c such that i^T_c(τ^P_{A,ξ}) = τ^{M^T_c}_{A, i^T_c(ξ)}, for all A ∈ A and all cardinals ξ ≥ δ^P.

Theorem 8.20 (Woodin). Let A, z be as in Definition 8.19. Suppose further that A is a sjs

containing a universal Γ-set. Then there is a z-suitable premouse P and a unique fullness preserving

strategy Σ guided by A; furthermore, Σ condenses well.

Proof. Let A = {A_i : i < ω}. Let P be as in the conclusion of Theorem 8.17. Let Σ_n be a fullness preserving strategy for P that witnesses P is weakly ⊕_{i≤n}A_i-iterable. Let T be a normal tree of limit length according to all the Σ_n's. Let b_n = Σ_n(T), and let i_n : P → M(T)^+ = M^T_{b_n} be the iteration embedding. For k < ω, let ν_k be the k-th cardinal of M(T)^+ that is ≥ δ(T), and set

M_k = M(T)^+|ν_k,

τ_{j,k} = τ^{M(T)^+}_{A_j, ν_k},

and let γ_k be the supremum of the ξ such that ξ is definable over M_k from points of the form τ_{i,j} for i, j < k.

Exercise 8.21. Verify that: (i) the γ_k's are cofinal in δ(T); (ii) for any k ≤ l, if E is an extender of length ≤ γ_k, then E is used in b_k iff E is used in b_l.

Define now:

ξ ∈ b ⇔ ∃k ∀l ≥ k (ξ ∈ b_l).  (8.2)

Claim 8.22. b is cofinal in lh(T).

Proof. Suppose not, and let η = ⋃b < lh(T). Fix k such that lh(E^T_η) < γ_k. All extenders used in b have length < lh(E^T_η), so by the Exercise,

b ⊆ b_l for all l ≥ k.

By closure of the branches, η ∈ b (Why?). Let F be the extender applied to M^T_η along b_k. So we have crt(F) < lh(E^T_η). Furthermore, lh(F) > γ_k since F is not used in b. But this implies rng(i_k) is not cofinal in γ_k. Contradiction.

Let T_k = Th^{M_k}(δ(T) ∪ {τ_{i,j} : i, j < k}) and let S_k be the corresponding theory defined over P from the corresponding terms. Then

i^T_{b_k}(S_k) = T_k  (8.3)

as i^T_{b_k} moves the terms τ_{i,j} correctly for all i, j < k.

Claim 8.23. i^T_b(S_k) = T_k for all k.


Proof. Fix k and regard S_k as a subset of δ^P. Since i_b is cofinal in δ(T), it is enough to see that i^T_b(S_k) ∩ lh(E) = T_k ∩ lh(E) for any extender E used along b. Fix such an E; let l ≥ k be such that E is used in b_l. By (8.3), i^T_{b_l}(S_k) ∩ lh(E) = T_k ∩ lh(E). Since E is used in both b and b_l, we are done.

By Theorem 8.9(iii), P is coded by the S_k's and M(T)^+ is coded by the i^T_b(S_k)'s (= T_k's); more precisely, P is pointwise Σ_0-definable from ordinals below δ^P and the terms τ^P_{i,j}, and similarly for M(T)^+. M^T_b is also coded by the i^T_b(S_k)'s, so

M^T_b = M(T)^+.

Set Σ(T) = b.

Finally, from the above argument we see that:

• Σ(T ) = b moves all the terms τPi,j ’s correctly.

• Σ is fullness preserving.

• For any hull (S, c) of (T , b), c = Σ(S). This follows from Theorem 8.9.

Going back to V - Boolean comparisons:

Let A, z, P, Σ be as in Theorem 8.20. Let τ be a term such that τ^G = z; we take τ to be an element of H^V_{γ^+}. By considering all possible finite variants of G and comparing the suitable mice associated with them, we will produce a suitable τ-mouse N that does not depend on G. N will be in V and will have a fullness preserving strategy Λ that condenses well with Λ ↾ V ∈ V.

We now describe the method of Boolean comparison that produces N. This is due to Woodin. First, let p_0 ∈ G be a condition that forces all relevant facts about τ. For each p ≤ p_0, let G_p = p ∪ (G ↾ (ω \ dom(p))). Then it is easy to see that G_p is V-generic and V[G] = V[G_p]. Let A_p be the corresponding sjs of OD_{<β}(τ^{G_p}) sets. Let z_p = (τ, G_p) (so z = z_{p_0}). Let B = ⋃_{p≤p_0} A_p. Let Ḃ be the symmetric term for B, so that

∀p ≤ p_0, Ḃ^{G_p} = B.

Let Ṅ_p, Σ̇_p be terms such that p forces the statement: "Σ̇_p is an A-guided, fullness preserving strategy for the (τ, Ġ)-suitable mouse Ṅ_p such that Σ̇_p condenses well".

Let N_p = Ṅ_p^{G_p} and Σ_p = Σ̇_p^{G_p}. Now we simultaneously compare the N_p's using the strategies Σ_p. This comparison makes sense since z_p and z_q are Turing equivalent, so a z_p-mouse can be regarded as a z_q-mouse. The comparison is successful. Let i_p : N_p → N^p_∞ be the corresponding iteration embedding. i_p exists (Why?). N^p_∞ and N^q_∞ are mice over different sets, but they have the same extender sequence E_∞. So we abuse notation and write N_∞. N_∞ is weakly A-iterable and hence by Theorem 8.20 has a unique A-guided strategy Λ that is fullness preserving and condenses well.


The key point is: the comparison above depends only on the set {N_p : p ≤ p_0}, not on any enumeration of the set. Therefore, there are symmetric terms Ė_∞, Λ̇ for E_∞ and Λ. By the S-construction, we can construct a mouse N over τ such that o(N) = o(N_∞) and for any ξ < o(N), N|ξ[G] = N_∞|ξ. One can check that N ∈ V, N is a suitable τ-mouse, and Λ induces a γ^+-strategy Ψ for N such that Ψ is fullness preserving and condenses well.

Getting W^*_{β+1}: Let N, Ψ be as in the previous section. Note that Ψ, being guided by A, is not in J_β(R) but is in J_{β+1}(R). Exercise: Verify this.
We then use the results in Sections 5, 6, and 7 to show that the operators M_n^{Σ,♯} exist for all n and are defined on a cone in H^{V[G]}_{κ^+} above τ.

Given this, we obtain W^*_{β+1} as before. Let U be Σ_n^{J_β(R)}-definable from a real parameter z, and let k < ω. We may take z to code (N, G); let P = M_{k+n}^{Σ,♯}(z). This is the desired (k, U)-witness. The proof is more or less as before. We just mention one point: suppose U = ⊕_{n<ω} A_n with each A_n ∈ A (Exercise: Verify the general case).

Claim 8.24. Let Λ be the canonical strategy of P, defined on all trees in H^{V[G]}_{κ^+}. For any ξ ∈ P, there is a term U̇ ∈ P^{Col(ω,ξ)} such that whenever i : P → Q is according to Λ and l ∈ V[G] is Q-generic for Col(ω, ξ), then i(U̇)^l = U ∩ Q[l].

Proof. U̇ consists of pairs (p, τ) such that p forces that, when j : N → M is the P|ξ-genericity iteration given by Ψ ∩ P (this is definable over P), τ belongs to j(τ^N_{A_i}) for some i.

References

[1] Ronald Jensen and John Steel. K without the measurable. The Journal of Symbolic Logic, 78(3):708–

734, 2013.

[2] Donald A. Martin. The largest countable this, that, and the other. In Cabal Seminar 79–81, pages 97–106. Springer, 1983.

[3] William J. Mitchell and John R. Steel. Fine structure and iteration trees, volume 3 of Lecture Notes in

Logic. Springer-Verlag, Berlin, 1994.

[4] Itay Neeman. Optimal proofs of determinacy. Bulletin of Symbolic Logic, 1(3):327–339, 1995.

[5] Itay Neeman. Optimal proofs of determinacy II. Journal of Mathematical Logic, 2(02):227–258, 2002.

[6] G. Sargsyan. Hod mice and the mouse set conjecture, volume 236 of Memoirs of the American Mathe-

matical Society. American Mathematical Society, 2014.

[7] G. Sargsyan and J.R. Steel. The mouse set conjecture for sets of reals, available at author’s website.

2014.

[8] Ernest Schimmerling. Coherent sequences and threads. Advances in Mathematics, 216(1):89–117, 2007.

[9] J. R. Steel. PFA implies AD^{L(R)}. J. Symbolic Logic, 70(4):1255–1296, 2005.

[10] J. R. Steel. Scales in K(R) at the end of a weak gap. J. Symbolic Logic, 73(2):369–390, 2008.


[11] J. R. Steel and R. D. Schindler. The core model induction; available at https://ivv5hpp.uni-muenster.de/u/rds/.

[12] John R. Steel. Scales in L(R). In Cabal seminar 79–81, volume 1019 of Lecture Notes in Math., pages

107–156. Springer, Berlin, 1983.

[13] John R. Steel. The core model iterability problem, volume 8 of Lecture Notes in Logic. Springer-Verlag,

Berlin, 1996.

[14] John R. Steel. Scales in K(R). In Games, scales, and Suslin cardinals. The Cabal Seminar. Vol. I,

volume 31 of Lect. Notes Log., pages 176–208. Assoc. Symbol. Logic, Chicago, IL, 2008.

[15] John R. Steel. An outline of inner model theory. Handbook of set theory, pages 1595–1684, 2010.

[16] Nam Trang and Farmer Schlutzenberg. Scales in hybrid mice over R. Available at

math.unt.edu/∼ntrang/.

[17] Trevor Miles Wilson. Contributions to Descriptive Inner Model Theory. PhD thesis, University of

California, 2012. Available at author’s website.
