
19 The Structure of Languages Under Parameterized Reducibilities

19.1 Some Tools

In this rather technical chapter, we will develop techniques which will enable us to gain some insight into the structure of the recursive languages under the various reducibilities. The results of this chapter have a strongly recursion-theoretic flavor and require some rather sophisticated tools from classical recursion theory. We will assume that the reader is noddingly familiar with the basic notions of classical recursion theory, such as Kleene's Arithmetical Hierarchy and the like, but not with priority arguments.

Briefly, we remind the reader that a language is called recursively enumerable if it is empty or the range of a recursive function, and the language is called recursive if both it and its complement are recursively enumerable. Of course, recursive languages correspond to decidable problems, and recursively enumerable ones correspond to "semidecidable" problems such as the halting problem. After Kleene, we know that a language $L$ is recursively enumerable iff it is $\Sigma_1$; that is, there is a recursive relation $R$ such that for all $x$,

$x \in L$ iff $\exists y\, R(x, y)$.

A language is called $\Pi_1$ if it is the complement of a $\Sigma_1$ language, and, hence, it is of the form $x \in L$ iff $\forall y\, R(x, y)$. More generally, we can define languages to be $\Sigma_n$ (resp. $\Pi_n$) iff there is a recursive relation $R$ and $n$ alternating quantifiers $Q_1, \ldots, Q_n$ with $Q_1$ existential (universal) such that

$x \in L$ iff $Q_1 y_1 Q_2 y_2 \cdots Q_n y_n\, R(x, y_1, \ldots, y_n)$.



[FIGURE 19.1. The Arithmetical Hierarchy: the classes $\Sigma_n$, $\Pi_n$, and $\Delta_n$, with inclusions running from left to right.]

Finally, a language is called $\Delta_n$ iff it is both $\Sigma_n$ and $\Pi_n$. All of this leads to Kleene's arithmetical hierarchy in Figure 19.1.

Here, inclusions are from left to right, and via relativizations of the halting problem, all of the inclusions are proper. This is all connected with computation via the following. Let $\{\Phi_e : e \in \mathbb{N}\}$ list all Turing procedures. For any set $A$, we can define the jump of $A$ to be $A' = \{e : \Phi_e(A; e)\downarrow\}$. The halting problem is coded by $\emptyset'$. The proof of the unsolvability of the halting problem shows that for any set $A$, $A <_T A'$. The "spine" of the arithmetical hierarchy is provided by the languages $\emptyset'$, $\emptyset''\ [= (\emptyset')']$, $\emptyset'''$, ....

The degree of a language is the equivalence class of the language under Turing reducibility. Hence, the degree of $A$ would be $\{B : B \equiv_T A\}$. We denote the degree of the recursive languages by $\mathbf{0}$, the degree of $\emptyset'$ by $\mathbf{0}'$, etc. The principal fact we need is the following:

Lemma 19.1 A language $L$ is $\Delta_{n+1}$ iff $L \leq_T \emptyset^{(n)}$.

Thus, there is a natural connection between the syntactic description of a language and its computational complexity. For our purposes, we will also need the following.

Lemma 19.2 (Shoenfield Limit Lemma) A language $L$ is $\Delta_2$ iff there is a $\{0, 1\}$-valued recursive function $f$ such that: (i) for all $x$, $\lim_s f(x, s) =_{def} g(x)$ exists (i.e., $|\{s : f(x, s) \neq f(x, s+1)\}|$ is finite) and, (ii) $g(x) = L(x)$, where we identify languages with their characteristic functions [meaning that $g(x) = 1$ iff $x \in L$].

Proof. The proof is straightforward but instructive. Suppose $L$ is $\Delta_2$. Thus, there is a procedure $\Psi$ reducing $L$ to $\emptyset'$; that is, for all $x$, $x \in L$ iff $\Psi(\emptyset'; x) = 1$. We approximate $\Psi(\emptyset'; x)$. Let $\emptyset'_s = \{e : e \leq s \wedge \Phi_{e,s}(\emptyset; e)\downarrow\}$. Define $f(x, s) = 1$ if $\Psi_s(\emptyset'_s; x)\downarrow = 1$, and $f(x, s) = 0$ otherwise. Then, clearly, $f$ so construed does the job, since the amount of information used in a computation is finite. Specifically, we know that for all $x$, $\Psi(\emptyset'; x) = L(x)$. Now there is a largest element of $\emptyset'$ queried in the $\Psi(\emptyset'; x)$ computation. Call this $u(x)$. At some stage $s$, for all $t \geq s$ and all $u' \leq u(x)$, $u' \in \emptyset'$ iff $u' \in \emptyset'_t$.

Furthermore, for some $s_1 \geq s$, we know that

$\Psi_{s_1}(\emptyset'_{s_1}; x)\downarrow = \Psi(\emptyset'; x)$.

But then for all $t \geq s_1$, it must be that

$\Psi_t(\emptyset'_t; x) = L(x) = \Psi(\emptyset'; x)$.

Note that as $f$ changes whenever $\Psi$ does, we also know that $f(x, t) = L(x)$ for all $t \geq s_1$. For the reverse direction, suppose $f$ satisfying (i) and (ii) exists. We define a procedure $\Gamma$ and a recursively enumerable language $M$ so that $\Gamma(M) = L$. Since $M$ is recursively enumerable, it will then follow that $M \leq_T \emptyset'$ and hence $L \in \Delta_2$.

We define $M$ and $\Gamma$ in stages.
Stage 0. Define $M_0 = \emptyset$ and $\Gamma_0(M_0; x) = f(x, 0)$, and define $\gamma(x, 0) = \langle x, 0 \rangle$ for all $x \geq 0$.
Stage s+1. Define $\gamma(y, s+1) = \langle s+1, s+1 \rangle$ for all $y \geq s+1$. Now we consider only $x \leq s$. If $f(x, s) = f(x, s+1)$, change nothing for $x$; that is, set $\Gamma(M_{s+1}; x) = \Gamma(M_s; x)$ via the same computation, and keep $\gamma(x, s+1) = \gamma(x, s)$. If $f(x, s+1) \neq f(x, s)$, enumerate $\gamma(x, s)$ into $M_{s+1}$, allowing us to change $\Gamma_{s+1}(M_{s+1}; x)$ to be $f(x, s+1)$. Set $\gamma(x, s+1) = \langle x, s+1 \rangle$.

It is not difficult to see that $\lim_s \gamma(x, s)$ exists and that, hence, $\Gamma$ describes a procedure reducing $L$ to $M$.
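The forward direction of this proof is easy to visualize computationally. Below is a minimal Python sketch of the limit approximation, with an invented toy in place of the halting-set approximation $\emptyset'_s$ (the `programs` list and its halting behavior are hypothetical illustrations, not part of the construction above):

```python
# Toy illustration of the Shoenfield Limit Lemma: approximate the halting
# set K (a Delta_2 set) by stage-bounded simulation. Hypothetical setup:
# program e "halts on itself" after 2*e steps iff e is even.
programs = [(lambda e: (lambda s: e % 2 == 0 and s >= 2 * e))(e) for e in range(10)]

def f(x, s):
    """The {0,1}-valued recursive approximation: does program x halt
    within s steps? Here f(x, s) changes at most once as s grows."""
    return 1 if programs[x](s) else 0

# For each x, lim_s f(x, s) exists and equals K(x): the columns stabilize.
for x in range(10):
    print(x, [f(x, s) for s in range(0, 25, 6)])
```

The point of the sketch is only the shape of the approximation: each row changes value finitely often (here, at most once), so its limit is well defined even though no single stage tells us that we have reached it.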

The proof of the above result has much in common with many proofs in classical recursion theory. One performs a construction where some object is built in stages. Typically, one has some overall goal that one breaks down into smaller subgoals for which it is argued that they are all eventually met in the limit. As an archetype for such proofs, think of Cantor's proof that the collection of all infinite binary sequences is uncountable. One can conceive of the proof as follows.

Suppose we could list the sequences as $S = \{S_0, S_1, \ldots\}$ with $S_e = S_{e,0} S_{e,1} \cdots$.

It is our job to construct a binary sequence $u = u_0 u_1 \cdots$ that is not on the list $S$. This should be thought of as a game against our opponent, who must supply us with $S$. We shall construct $u$ in stages, at stage $t$ specifying only $u_0 \cdots u_t$, the initial segment of $u$ of length $t+1$. Our requirements are the decomposition of the overall goal into subgoals of the form

$R_e : u \neq S_e$,

one for each $e \in \mathbb{N}$. Of course, we know how to satisfy these requirements. At stage $e$, we simply make $u_e \neq S_{e,e}$ by setting $u_e = 1$ or $0$, making $u_e = 1$ iff $S_{e,e} = 0$. Hence, for all $e$, $u \neq S_e$; all the requirements are met. This is a contradiction to the fact that $S$ supposedly lists all infinite binary sequences, as $u$ is a binary sequence.
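For concreteness, here is the diagonalization as a few lines of Python; the `listing` function is a hypothetical opponent supplying the alleged enumeration $S$, and any other choice works the same way:

```python
def listing(e, i):
    """Bit i of the opponent's e-th sequence S_e (a made-up example:
    sequence e is the binary expansion of e)."""
    return (e >> i) & 1

def u(t):
    """Bit t of our sequence: requirement R_t makes u differ from S_t
    in position t."""
    return 1 - listing(t, t)

# u differs from every S_e in position e, so u is not on the list.
print([u(t) for t in range(8)])
```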


Notice that if we define a real to be recursive if it has an infinite recursive binary (or decimal) expansion, then the above proof shows that there is no recursive enumeration of all the recursive reals.

Our proofs and constructions will be rather more complex than this easy result, but readers should keep the overall structure of the above in mind when looking at the results to follow. Our constructions will proceed in finite steps, where an object is constructed stage by stage in finite pieces. The object will be constructed to satisfy a list of requirements $\{R_e : e \in \mathbb{N}\}$. In our constructions, we will need one further obvious but important result. We need the following definition.

Definition 19.3 (Use) In a computation $\Phi(A; x)$ from an oracle $A$, we define the use of the computation upon input $x$ to be the collection of strings $z$ queried of $A$ during the course of the computation. (If the computation does not halt, then we take the use to be undefined.) We denote the use of the computation by $u(\Phi(A; x))$.*

"This definition of use is slightly nonstandard in the sense that in classical recursion theory, the use of a computation is usually taken to be the largest element queried during the computation. The reader may decide to adopt this definition without much loss of readability.

Lemma 19.4 (Use Principle) Suppose that $A$ and $B$ are languages and $\Phi(A; x)\downarrow$. Suppose that for all strings $z \in u(\Phi(A; x))$, $A(z) = B(z)$. Then, $\Phi(B; x) = \Phi(A; x)$ via identical computations.

The proof is obvious. Since the languages agree on all queried strings $z$, that is, $A(z) = B(z)$, the queries receive the same answers, and hence the computations will be the same. Why is the use principle so important? The point is that often we wish to construct objects to beat certain oracle computations. If we "preserve the use" of the said computations, we will preserve the computations. But the use is only a finite portion, and we will still have almost all the rest of the language with which to meet the other requirements. Here is a famous example using the "finite extension method," which is a primitive form of Cohen forcing and is the mainstay of much of traditional structural complexity theory.

Theorem 19.5 (Kleene and Post [300]) There exist degrees $\mathbf{a}$ and $\mathbf{b}$, both below $\mathbf{0}'$, such that $\mathbf{a} \mid_T \mathbf{b}$; that is, they are incomparable under Turing reductions.

Proof. We construct $A = \lim_s A_s$ and $B = \lim_s B_s$ in stages, to meet the requirements below for all $e \in \mathbb{N}$:

$R_{2e} : \Phi_e(A) \neq B$,

$R_{2e+1} : \Phi_e(B) \neq A$.

Note that if we so construe $A$ and $B$, then $A \not\leq_T B$ since we meet all the $R_{2e+1}$'s, and $B \not\leq_T A$ since we meet all the $R_{2e}$'s. Hence, $A$ and $B$ will have incomparable Turing degrees. The fact that $A, B \leq_T \emptyset'$ will come from the construction and will be observed at the end.

The argument is a finite extension argument in the sense that at each stage $s$, we specify a finite portion $A_s$ of $A$ and a finite portion $B_s$ of $B$. They will be specified for all strings of length $\leq t_s$. The key invariant is that for all stages $u \geq s$ and all $z$ with $|z| \leq t_s$, we have $A(z) = A_s(z) = A_u(z)$ and $B(z) = B_s(z) = B_u(z)$. Thus, in a finite extension argument, after stage $s$, we can only extend the portion of $A$ (or $B$) we have specified so far. [Here we abuse notation by saying that a set of strings $C$ extends another $D$ ($C > D$) iff for all $z$ with $|z| \leq \max\{|z'| : D(z')$ defined$\}$, $C(z) = D(z)$; that is, the characteristic function of $C$ extends that of $D$.] To put it another way, we cannot at a later stage change the sets on anything we have specified so far.
Construction.
Stage 0. Set $A_0 = B_0 = \lambda$ (the empty string). Set $t_0 = 0$.
Stage 2e+1. (Attend $R_{2e}$.) We will have specified $A_{2e}$, $B_{2e}$, and $t_{2e}$ at stage $2e$. Pick some string $x$, called a witness, of length $> t_{2e}$, and see if there is a string $\sigma$ extending $A_{2e}$ such that

$\Phi_e(\sigma; x)\downarrow$.

If such a $\sigma$ exists, choose the first such $\sigma$ and set $A_{2e+1} = \sigma$ [i.e., for all $z$ with $|z| \leq |\sigma|$, let $A(z) = \sigma(z)$]. For all $q$ with $|q| \leq |x|$, set $B_{2e+1}(q)$ to be the following: (i) $B_{2e}(q)$ if $|q| \leq t_{2e}$; (ii) $1 - \Phi_e(\sigma; x)$ if $q = x$; (iii) 0, otherwise. Finally, set $t_{2e+1}$ to be so long that it exceeds $t_{2e}$, $|\sigma|$, and $|x|$.
If no such $\sigma$ exists, then set $A_{2e+1} = A_{2e}$, $B_{2e+1} = B_{2e}$, and $t_{2e+1} = |x|$ (which is $> t_{2e}$).
Stage 2e+2. (Attend $R_{2e+1}$.) Proceed as we did in stage $2e+1$, except with the roles of $A$ and $B$ reversed.
End of Construction.
Verification. To verify the construction, we prove that we meet $R_j$ for all $j$ and, in fact, that we meet $R_j$ at stage $j+1$. This is proven by induction on $j$. First, note that for all $n$, $t_{n+1} > t_n$. We suppose, for an induction, that we have met $R_j$ for all $j < n$ by stage $n$, and that the parameter $t_n$ is so large that it protects all the computations for $j < n$. Without loss of generality, $n = 2e$. Now at stage $n+1$, there are two cases to consider. If there is a $\sigma$ extending $A_n$ with $\Phi_e(\sigma; x)\downarrow$, our action is to adopt $\sigma$ as our next $A_{n+1}$ and cause $\Phi_e(A_{n+1}; x) \neq B_{n+1}(x)$ by (ii). We will then set $t_{n+1}$ to be large enough that for all $n' \geq n$ and all $z \in u(\Phi_e(A_{n+1}; x))$, $A_{n'}(z) = A_{n+1}(z)$, and hence $\Phi_e(A_{n+1}; x) = \Phi_e(A; x) \neq B_{n+1}(x) = B(x)$. The other case is that no such $\sigma$ exists. Then, since $A$ is an extension of $A_n$, it can only be that $\Phi_e(A; x)\uparrow$, and, hence, in either case, we meet $R_n$.

Finally, we argue that $A, B \leq_T \emptyset'$. Notice that the construction is, in fact, fully recursive except for the decision as to which case we are in at stage $n$. There, we must decide if there is some $\sigma$ with a convergent computation. For instance, at stage $2e+1$, we must decide

$\exists \sigma, s\, [\sigma > A_{2e} \wedge \Phi_{e,s}(\sigma; x)\downarrow]$.

This is a $\Sigma_1$ question and can be decided by $\emptyset'$. [Specifically, use the s-m-n theorem to construct a recursive function $s(2e)$ such that for all $z$, $\varphi_{s(2e)}(z) = 1$ if $\exists \sigma, s\, [\sigma > A_{2e} \wedge \Phi_{e,s}(\sigma; x)\downarrow]$, and $\varphi_{s(2e)}(z)\uparrow$ otherwise. Then $\exists \sigma, s\, [\sigma > A_{2e} \wedge \Phi_{e,s}(\sigma; x)\downarrow]$ iff $s(2e) \in \emptyset'$.]
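It is worth seeing concretely why this is a $\Sigma_1$ question. The sketch below dovetails through all pairs $(\sigma, s)$; `converges` is a hypothetical stand-in for "$\Phi_{e,s}(\sigma; x)$ halts," with $\sigma$ a 0-1 string extending $A_{2e}$. The search halts exactly when a witness exists and diverges otherwise, which is why the full construction consults $\emptyset'$ rather than running the search directly:

```python
from itertools import count, product

def search_convergent_extension(converges):
    """Semi-decide the Sigma_1 question: halt with a witness (sigma, s)
    if one exists; otherwise run forever."""
    for s in count(1):
        for sigma in product((0, 1), repeat=s):  # extensions of length s
            if converges(sigma, s):
                return sigma, s

# Toy example: pretend the computation halts once sigma starts with 1
# and at least 3 steps are allowed.
print(search_convergent_extension(lambda sig, s: sig[0] == 1 and s >= 3))
```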

Remark. We remark that the reasoning at the end of the above proof is quite common: $\emptyset'$ can answer any $\Delta_2$ and hence any $\Sigma_1$ and $\Pi_1$ question.

Turning now to classical complexity theory, a famous application of the finite extension method is due to Baker et al. [48]. We need a technical tool.

Notation. [Listing Polynomial Time Procedures] Notice that there is an effective listing of all polynomial-time Turing procedures obtained by listing $\Gamma_e : e \in \mathbb{N}$, with $\Gamma_e$ a version of $\Phi_e$ running in time $|x|^e + e$; that is, for $s \leq |x|^e + e$, we define $\Gamma_{e,s}(D; x) = \Phi_{e,s}(D; x)$, and for $s > |x|^e + e$, we let $\Gamma_{e,s}(D; x) = \Phi_e(D; x)$ only if $\Phi_{e,t}(D; x)\downarrow$ for some $t \leq |x|^e + e$; otherwise, we let $\Gamma_{e,s}(D; x) = \Gamma_{e,|x|^e+e}(D; x) = 0$. Notice that $\Gamma_e$ has the property that for all $e$, for all $x$ and oracles $D$,

$\Gamma_e(D; x)\downarrow$ in $\leq |x|^e + e$ steps.
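In code, the clocking trick looks roughly as follows; this is a sketch with a hypothetical step-bounded simulator `phi` standing in for $\Phi_e$, not the book's formal machine model:

```python
def gamma(e, phi, x, oracle):
    """Gamma_e(D; x): simulate phi with the clock |x|**e + e, defaulting
    to 0 if phi has not halted within the allotted steps."""
    bound = len(x) ** e + e
    result = phi(oracle, x, bound)  # phi returns its output, or None on timeout
    return result if result is not None else 0

def phi_example(oracle, x, steps):
    """A toy procedure: accept iff x is in the oracle, costing one step."""
    return (1 if x in oracle else 0) if steps >= 1 else None

print(gamma(2, phi_example, "0101", {"0101"}))  # -> 1, well within |x|**2 + 2
```

Every `gamma(e, ...)` is total and polynomial-time bounded, and every genuinely polynomial-time procedure appears somewhere in the list, namely at any index whose clock dominates its true running time.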

Theorem 19.6 ([48]) There exists a recursive set $A$ such that $NP^A \neq P^A$.

Proof. We construct A via the finite extension method in stages. We also construct an auxiliary set B defined via

$x \in B$ iff $\exists y\, [|y| = |x| \wedge y \in A]$.

Clearly, $B \in NP^A$. Let $\Gamma_e$, for $e \in \mathbb{N}$, list all polynomial-time procedures as above. We meet the requirements

$R_e : \Gamma_e(A) \neq B$.

Construction.
Stage 0. Define $A_0 = \lambda$ and $t_0 = 0$.
Stage e+1. (Attend $R_e$.) We have specified $A_e$ and $t_e$. Find some $n = n(e)$ sufficiently large that $2^n > n^e + e$ and $n > t_e$. Choose $x = x(e) = 0^n$ as a witness for $R_e$. Let $A'_e$ denote the empty extension of $A_e$, meaning that $A'_e(y) = A_e(y)$ for $y$ with $|y| \leq t_e$, and $A'_e(y) = 0$ otherwise. There are two cases.
Case 1. $\Gamma_e(A'_e; x) = 1$. Action. Choose $t_{e+1}$ to be sufficiently large that it exceeds $t_e$ and $n$ and the length of all strings in $u(\Gamma_e(A'_e; x))$. Now, define $A_{e+1} = A'_e$ for all strings of length $\leq t_{e+1}$.
Case 2. $\Gamma_e(A'_e; x) = 0$. Action. Now, since $|x| = n$ and $2^n > n^e + e$, there is some string $z$ of length $n$ not in $u(\Gamma_e(A'_e; x))$, that is, not addressed during the course of the computation. Our action is to choose $t_{e+1}$ large, as in Case 1, and then set $A_{e+1}(y) = A'_e(y)$ for all strings $y$ of length $\leq t_{e+1}$ except for $z$. We put $z$ into $A_{e+1}$; that is, $A_{e+1}(z) = 1$.
End of Construction.
Verification. Again, we work by induction on $e$. So at Stage $e+1$, we choose an $n$ so large that $2^n > n^e + e$. Now, in the first case, notice that by the choice of $t_{e+1}$, and hence of $t_q$ for $q \geq e+1$, we have

$\Gamma_e(A; x) = 1 \neq 0 = B(x)$.

[The fact that $B(x) = 0$ follows from the observation that no string of length $n = |x|$ is ever put into $A$.]

In Case 2, we have $\Gamma_e(A'_e; x) = 0$. Now, the only difference between $A'_e$ and $A_{e+1}$ on strings of length $\leq t_{e+1}$ is on $z$, and we chose $z$ to be a string of length $|x|$ not addressed in the $\Gamma_e(A'_e; x) = 0$ computation; so, by the use principle, we must have

$\Gamma_e(A; x) = 0 \neq 1 = B(x)$.

The last equality follows since we have put into $A$ a string $z$ of the same length as $x$, which puts $x$ into $B$. In either case, we meet $R_e$.

A more subtle generalization of the finite extension method is the priority method. We begin by looking at the simplest incarnation of this elegant technique, the finite injury priority method. This method is somewhat like the finite extension method but with backtracking. It also bears some resemblance to the Gurevich-Harrington [252] game of LAR in their proof of Rabin's result that S2S has a decidable theory.

In any case, the idea is the following. Suppose that now we must again satisfy requirements $R_0, R_1, \ldots$, but this time we are constrained to some sort of effective construction, so we are not allowed oracle-type questions in the construction. As an illustration, in the result below, we will construct recursively enumerable $A$ and $B$ with incomparable Turing degrees. Now, the Kleene-Post method constructs languages with incomparable degrees below $\mathbf{0}'$, but they are not recursively enumerable. The reason that $A$ and $B$ are not recursively enumerable is that we satisfy the requirements in order. To do this, we are using an $\emptyset'$ oracle question at each stage. To make $A$ and $B$ recursively enumerable, somehow we must have a recursive construction where elements go into the sets $A$ and $B$ and never leave them. The key idea, discovered independently by Friedberg [233] and Muchnik [357], is to pursue two or more strategies for each $R_e$ in the following sense. It seems that we need to know the answer to the question "Does $\sigma$ exist or not?" to know which strategy to pursue. But the idea is that we first guess that no such $\sigma$ exists for our witness $x$. This means that nothing is really done for the sake of $R_e$ (except to keep $x \notin B_s$) unless we see a stage where some $\sigma \supseteq A_s$ with a convergent computation exists. If such a stage occurs, then we will try to make $A$ extend $\sigma$ and win, as before, by putting $x$ into $B$ if necessary. So whichever case occurs, we will win.


The only problem with all of this is that this action will probably change $B$ on $x$. This action may upset, "injure" in the standard terminology, some other requirement trying to preserve $B$. To make sure that every requirement gets to be met, we put a "priority ordering" on all the $R_e$ and only allow $R_j$ to injure $R_i$ if $R_j$ has higher priority than $R_i$. If $R_j$ is injured at stage $s$, then we will "initialize" (meaning restart with a new large follower) the requirement $R_j$.

In general, in a finite injury priority argument, one has a list of requirements in some priority ordering. There are several different ways to meet individual requirements $R_e$. Exactly which way depends upon information that is not available to us but is "revealed" to us during the construction. The problem is that actions by one requirement can injure others. We must construe things so that only requirements of higher priority can injure ones of lower priority, and we can always restart the ones of lower priority once they are injured. In a finite injury argument, any $R_e$ requires attention only finitely often, and we argue by induction that all the requirements get an environment wherein they can each be met. We remark that there are much more complex infinite injury arguments where $R_i$ might injure some $R_j$ infinitely often. But the key there is that the injury is somehow controlled so that the coherence criterion is satisfied: All requirements eventually get an environment where they can be met (Harrington's "golden rule"). The reader is referred to Soare [431] for an account of modern recursion theory and, in particular, an account of these beautiful techniques.

We now turn to the formal proof of the following theorem of Friedberg and Muchnik, which solved the famous problem of Post, which asked if $\mathbf{0}$ and $\mathbf{0}'$ were the only recursively enumerable degrees. (If they were, it would mean that all semidecidable problems were either decidable or the "halting problem in disguise.") Historically, the Friedberg-Muchnik Theorem was the first place where the priority method was used. We remark in passing that it is possible to solve Post's problem without a priority argument, but with much more difficult techniques. (See Kucera [314].)

Theorem 19.7 ([233, 357]) There exist recursively enumerable languages A and B such that A and B have incomparable Turing degrees.

Proof. We will build $A = \bigcup_s A_s$ and $B = \bigcup_s B_s$ in stages to satisfy the same requirements as in the Kleene-Post Theorem; that is, we make recursively enumerable $A$ and $B$ to meet the requirements

$R_{2e} : \Phi_e(A) \neq B$,

$R_{2e+1} : \Phi_e(B) \neq A$.

The strategy for a single $R_j$. We begin by looking at the strategy for a single $R_j$.

Without loss of generality, let $j = 2e$. (i) Initially, we will pick a new fresh number $x = x(j)$ to follow $R_j$, say at stage $s$. This number is targeted for $B$, and, of course, we have $x \notin B_s$.

(ii) We wait for a stage $t > s$ to occur with $\Phi_{e,t}(A_t; x)\downarrow = 0 = B_t(x)$. [Comment: If stage $t$ does not occur, then we must have $\Phi_e(A; x) \neq B(x)$. For suppose otherwise. Then, if $\Phi_e(A; x) = B(x)$, since we never put $x$ into $B$, it must be that $\Phi_e(A; x)\downarrow = 0$. Now, since all computations halt in a finite number of steps, some stage $n$ must occur with $A_{n'}(z) = A(z)$ for all $n' > n$ and $z$ with $|z| \leq \max\{|p| : p \in u(\Phi_e(A; x))\}$; hence we must have a stage $t'$ where the computation $\Phi_{e,t'}(A_{t'}; x) = \Phi_e(A; x)$ is final. One can take $t = t'$.]
(iii) Should stage $t$ occur, we will let $R_j$ require attention by performing the Kleene-Post action with $A_t$ taking the role of $\sigma$; that is, we will set $A_{t+1} = A_t$, but put $x$ into $B_{t+1} - B_t$, causing

$\Phi_{e,t}(A_{t+1}; x)\downarrow = 0 \neq 1 = B_{t+1}(x)$.

In the construction below, we will then act to protect this with priority $e$. Note that when we take action (iii), we might injure some other $R_k$ trying to preserve the use of $\Phi_{e',t}(B_t; x')$, because $x \in u(\Phi_{e',t}(B_t; x'))$, and hence the $\Phi_{e',t}(B_t; x')$ computation will be using the fact that "$x \notin B$," which $R_j$ has made false at stage $t+1$.
Definition. We say that $R_j$ requires attention at stage $s$ if $j$ is least such that one of the following pertains. (i) $R_j$ has no follower at stage $s$. (ii) $R_j$ has a follower $x(j, s)$ at stage $s$ and it is waiting and, furthermore, supposing that $j = 2e$,

$\Phi_{e,s}(A_s; x(j, s))\downarrow = 0 = B_s(x(j, s))$.

Construction.
Stage 0. Set $A_0 = B_0 = \emptyset$.
Stage s > 0. Find the least $j$ with $R_j$ requiring attention. Adopt the appropriate case below.
Case 1. (i) pertains. Action. Find a large fresh number $x$ (i.e., exceeding all numbers, computations, etc. previously seen) and appoint $x(j, s) = x$ as a waiting follower for $R_j$. Initialize all $R_{j'}$ with $j' > j$. (That is, cancel all followers associated with $R_{j'}$.) Do nothing else. (So $A_s = A_{s-1}$ and $B_s = B_{s-1}$, etc.)
Case 2. (ii) pertains. Action. Initialize all $R_{j'}$ for $j' > j$. Set $A_s = A_{s-1}$, but set $B_s = B_{s-1} \cup \{x(j, s)\}$. Declare $x(j, s)$ to be no longer waiting.
End of Construction.
Verification. We prove by induction on $j$ that: (a) Each $R_j$ receives attention only finitely often. (b) $\lim_s x(j, s) = x(j)$ exists. (c) $R_j$ is met.

For an induction, suppose (a), (b), and (c) for all $j' < j$. Let $s_0$ be a stage good for $j$; that is, for all $s \geq s_0$ and all $j' < j$: (a′) $R_{j'}$ does not require attention at stage $s$; (b′) $x(j', s) = x(j', s_0)$; (c′) $R_{j'}$ is met at stage $s$.


If we choose $s_0$ to be minimal, then it is clear, since no requirements have followers at the beginning of the construction and lose them whenever higher priority ones receive attention, that it can only be that $R_j$ receives attention via (i) at stage $s_0 + 1$ and is appointed a large fresh follower $x = x(j, s_0 + 1)$. By choice of $s_0$, $x$ is never cancelled: The only requirements that could cancel $x$ are the $R_{j'}$ for $j' < j$. There are two possibilities.

The first is that $x$ waits forever. In this case, there is no stage $t$ with $\Phi_{e,t}(A_t; x)\downarrow = 0 = B_t(x)$. This means that either $\Phi_e(A; x)\uparrow$ or $\Phi_e(A; x) = 1 \neq 0 = B(x)$. In either case, we win.

The second case is that (ii) pertains to $R_j$ at some stage $s > s_0 + 1$. In this latter case, we act to cause a disagreement at stage $s$, namely

$\Phi_{e,s}(A_s; x)\downarrow = 0 \neq 1 = B_s(x)$.

Now, since we initialize all $R_{j'}$ for $j' > j$, and new followers are always appointed large and fresh, it follows that this stage $s$ disagreement is immortal.

In either case, $R_j$ receives attention at most twice more after stage $s_0$ and is met. Furthermore, one sees that $x(j, s_0 + 1) = x(j, t) = x(j)$ for all $t > s_0$. This concludes the induction and hence the proof of the Friedberg-Muchnik Theorem.
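The follower and initialization bookkeeping above is mechanical enough to simulate. The following Python sketch is a toy rendering only: `phi[e]` is a mock stand-in for the clocked functional $\Phi_{e,s}$ (returning 0 or 1 on convergence and `None` otherwise), since real Turing functionals are of course not available to a program:

```python
def run_stages(phi, num_reqs, stages):
    """Simulate the finite injury construction for finitely many stages.
    R_{2e}: Phi_e(A) != B (follower targeted for B); R_{2e+1} is dual."""
    A, B = set(), set()
    follower = {}        # follower[j] = (x, still_waiting)
    fresh = [0]          # source of large fresh numbers

    def requires_attention(j, s):
        if j not in follower:
            return True                          # clause (i): no follower
        x, waiting = follower[j]
        e, side = divmod(j, 2)
        oracle, target = (A, B) if side == 0 else (B, A)
        # clause (ii): waiting, computation converged to 0, target still 0
        return waiting and phi[e](oracle, x, s) == 0 and x not in target

    for s in range(stages):
        for j in range(num_reqs):                # least j acts: priority
            if requires_attention(j, s):
                for k in range(j + 1, num_reqs): # initialize lower priority
                    follower.pop(k, None)
                if j not in follower:
                    fresh[0] += 1
                    follower[j] = (fresh[0], True)
                else:
                    x, _ = follower[j]
                    (B if j % 2 == 0 else A).add(x)   # diagonalize
                    follower[j] = (x, False)
                break
    return A, B

# One mock functional that "converges" to 0 once the stage exceeds the input:
phi = [lambda oracle, x, s: 0 if s > x else None]
print(run_stages(phi, num_reqs=2, stages=20))
```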

We remark that the above is the very simplest of all finite injury arguments, since it is what is called "bounded injury" in the sense that we can put a bound, in advance, on the number of injuries that $R_j$ will have. In this case, the bound is $2^j$. There are examples, such as Sacks' Splitting Theorem (see Exercise 12), where we have no such bound. For one of the results of the next section, we will also need the infinite injury method, or, more precisely, the $\Pi_2$ version of the infinite injury method. The Friedberg-Muchnik construction above can be viewed another way. We can attach versions of requirements to a "priority tree," in this case the tree $\{0, 1\}^*$ of all binary strings. The idea is the following.

Attached to the empty string $\lambda$ is $R_0$. More generally, each node $\sigma$ has a "version" of $R_e$, where $|\sigma| = e$. Thus, there are $2^e$ versions of $R_e$. In this context, we refer to the $\sigma$ on the tree as guesses. Each node $\sigma$ has two outcomes, 0 and 1. The outcome 0 indicates that this version of $R_e$ appoints a follower but never enumerates the follower into its target set (i.e., Case 2 does not hold for this version of $R_e$). The outcome 1 indicates that the version of $R_e$ at guess $\sigma$ actually enumerates its follower into its target set. See Figure 19.2 for this setup.

Now, the idea is the following. Consider $R_1$. The version guessing 0 believes that "$R_0$ will appoint a follower and then never again act." Thus, its action is simply to act immediately after its guess "appears correct"; that is, once it sees $R_0$ appoint its follower, it can act immediately to appoint its own follower. Its belief about the actual universe will turn out to be false only if $R_0$ does, in fact, later act to enumerate its follower. At the outcome 1 of $R_0$, we will have a "backup" strategy which believes "$R_0$ will in fact enumerate its follower." The backup strategy for $R_1$, this version of $R_1$, will only act if its guess appears correct. Thus, it only acts after $R_0$ enumerates its follower. Then and only then, this version of $R_1$ wakes up and appoints its follower. The version guessing outcome 0 is now known to be wrong and would be canceled.

[FIGURE 19.2. The Assignment of Priorities and the Outcomes: the priority tree for the Friedberg-Muchnik Theorem (requirements $R_0, R_1, \ldots$ with outcomes 0 and 1) and for the Minimal Pair Theorem (requirements $N_0, N_1, N_2, \ldots$ with outcomes $\infty$ and $f$).]

All of the above is simply extended inductively in the tree. Thus, a version of $R_4$ with guess $\sigma = 1001$ would have as its strategy, "the $\sigma$ strategy," that it wait for $R_0$ to enumerate its follower, for the version of $R_1$ guessing $1$ to appoint a follower, for the version of $R_2$ guessing $10$ to appoint a follower, and for the version of $R_3$ guessing $100$ to enumerate its follower before it acts.

The True Path (TP) of the construction is the actual way the construction is satisfied as seen by the tree. Thus, TP is the leftmost path visited infinitely often,


where "leftmost" is measured in terms of lexicographic ordering (that is, a S-L r means that a is lexicographically less than r).

The reader might wonder what all this terminology is about. For the Friedberg-Muchnik Theorem, the payoff of new insight provided by this reorganization of the construction does not really balance the additional notational and conceptual burden. However, all of the above concepts were invented in the last 20 years to make infinite injury arguments comprehensible. The key difference is the following. In the above argument, the apparent true path $TP_s$, the guess of length $s$ that appears correct at stage $s$, only moves left, since $R_e$ can only go from outcome 0 at $\sigma$ to outcome 1 at $\sigma$. Hence, by the Limit Lemma (Lemma 19.2), the true path is recursive in $\emptyset'$. In infinite injury arguments, $TP_s$ can move both left and right. Assuming that the tree is finitely branching, this means that $TP$ is only recursive in $\emptyset''$.

In an infinite injury argument, again we have requirements $\{R_e : e \in \mathbb{N}\}$. But now, the action of $R_e$ can be infinitary. Obviously, we cannot just initialize $R_{e+1}$ whenever $R_e$ acts, for then it would never be met. What happens is that we will have a version that guesses that $R_e$ acts infinitely often. This backup strategy gets to act just in the stages when $R_e$ looks like it will be acting infinitarily. The requirements must be such that the golden rule will again be satisfied: There must be a version of $R_{e+1}$ which has a strategy that can live with $R_e$ acting infinitarily, and, conversely, this version of $R_{e+1}$ must not in the limit stop $R_e$ from being met.

We give one example and refer the reader to the Exercises for others (Exercises 13, 14, and 15).

Theorem 19.8 ([317, 470]) There exist nonrecursive recursively enumerable sets $A$ and $B$ such that for all sets $C$, if $C \leq_T A, B$, then $C$ is recursive. The degrees of these sets are said to form a Minimal Pair.

Proof. We construct $A = \bigcup_s A_s$ and $B = \bigcup_s B_s$ in stages to satisfy the following requirements for $e \in \mathbb{N}$:

$R_e : \overline{A} \neq W_e$.
$Q_e : \overline{B} \neq W_e$.

$N_{i,j} : \Phi_i(A) = \Phi_j(B) = f$ and $f$ is total, implies $f$ is recursive.

Here, the reader is reminded that $W_e = \mathrm{dom}\, \varphi_e$ is the $e$-th recursively enumerable set. We meet the $R_e$ and the $Q_e$ by a Friedberg-Muchnik-type strategy; that is, we shall pick a follower $x$, targeted for $A$ in the case of $R_e$, and wait until $x$ enters $W_{e,s}$. Of course, should $x$ never enter $W_{e,s}$ for any $s$, then $x \notin (W_e \cup A)$, and hence $\overline{A} \neq W_e$. Should $x$ enter $W_{e,s}$ at some stage $s$, then we can win forever by putting $x$ into $A_t$ at some $t \geq s$.

The tricky requirements are the $N_{i,j}$. We will first discuss how to meet a single $N_{i,j}$ in isolation and then look at the coherence problems between the various requirements and the solution to these provided by the use of a tree of strategies.

For a single $N_{i,j}$, we will need the auxiliary functions

$l(i, j, s) = \max\{x : \forall y < x\, (\Phi_{i,s}(A_s; y) = \Phi_{j,s}(B_s; y))\}$,


$ml(i, j, s) = \max\{l(i, j, t) : t < s\}$.

We call $l(i, j, s)$ the length of agreement function and call $ml(i, j, s)$ the maximum length of agreement function. The maximum length of agreement function is a sort of "high-water mark" for lengths of agreement seen so far. We shall call a stage $(i, j)$-expansionary if the current length of agreement exceeds the previous high-water mark; that is, $s$ will be called $(i, j)$-expansionary if $l(i, j, s) > ml(i, j, s)$. Also, we let $mu(i, j, s)$ denote the maximum element used in any computation below $l(i, j, s)$. Following standard conventions, we will always suppose that the procedures are given sufficiently slowly that all uses are bounded by $s$ at stage $s$.
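The length of agreement machinery translates directly into code. In the sketch below, `phi_i` and `phi_j` are hypothetical stand-ins for the stage-bounded functionals $\Phi_{i,s}$ and $\Phi_{j,s}$, returning `None` before convergence:

```python
def length_of_agreement(phi_i, phi_j, A_s, B_s, s):
    """l(i, j, s): the largest x such that both sides converge and agree
    on all y < x at stage s."""
    x = 0
    while x <= s:                       # by convention everything is <= s
        a, b = phi_i(A_s, x, s), phi_j(B_s, x, s)
        if a is None or b is None or a != b:
            break
        x += 1
    return x

def is_expansionary(lengths, s):
    """Stage s is (i, j)-expansionary if l(i, j, s) exceeds the previous
    high-water mark ml(i, j, s) = max over t < s."""
    return lengths[s] > max(lengths[:s], default=0)
```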

The key idea for a single $N_{i,j}$ is the following. Suppose that $l(i, j, s) > x$. Suppose that we allow some element $y$ to enter $A_{s+1}$ below $u(\Phi_i(A_s; x))$, but nothing else to enter $A$ or $B$ at stage $s+1$. Then, if we do not allow any further elements $\leq mu(i, j, s)$ to enter $A$ or $B$ until the next $(i, j)$-expansionary stage $t > s+1$, it can only be that $B_t(z) = B_s(z)$ for all $z \leq mu(i, j, s)$, and hence $\Phi_{j,s}(B_s; x) = \Phi_{j,t}(B_t; x)$. But since $t$ is expansionary, this means that

$\Phi_{i,s}(A_s; x) = \Phi_{i,t}(A_t; x)\ [= \Phi_{j,t}(B_t; x) = \Phi_{j,s}(B_s; x)]$.

That is, even though the $\Phi_{i,t}(A_t; x)$-computations might have changed because $y$ entered $A_{s+1}$, the value of the computation on $x$ remains the same, since the $B$ side has not changed and hence must be giving the same answer as it did at stage $s$.

So, in summary, the idea is to change only one side of a computation between expansionary stages and then argue that the "other side" will "hold" the computation at its current value until the next expansionary stage is found, when we will be free to enumerate into either side again. Thus, the Friedberg strategies guessing that $l(i, j, s) \to \infty$ must only put numbers into $A$ or $B$ at expansionary stages. Finally, we remark that if no expansionary stage $t > s$ is ever found, then $l(i, j, s) \not\to \infty$, and hence $\Phi_i(A) \neq \Phi_j(B)$.
Coherence. There is a problem with all of this. Consider two $N$-type requirements $N$ and $N'$, and suppose that $N$ has higher priority than $N'$. Now, $N$ requests us to put numbers into $A$ or $B$ only during its expansionary stages $\{s_1, s_2, \ldots\}$. Similarly, $N'$ might request us to put numbers in only during stages $\{t_1, t_2, \ldots\}$. The problem is that these sets of stages might be disjoint. Then, $N$ blocks us from putting numbers into $A$ or $B$ during the stages $t_i$, and $N'$ blocks us from putting numbers in during the stages $s_i$. Hence, collectively, the pair block us from ever putting numbers into $A$ or $B$.

This problem is overcome by the following observation. Given that this is a version of $N'$ guessing that $N$ has infinitely many expansionary stages, this version of $N'$ can be such that it only acts during $N$'s expansionary stages. This will force $N'$'s expansionary stages to be nested within $N$'s expansionary stages. In this way, the requirements are forced to cohere.

We now turn to the formal details.
The Priority Tree. We use the tree $T = \{\infty, f\}^*$ (see Figure 19.2). We assign $N_{i,j}$ to $\sigma$ on $T$ iff $|\sigma| = \langle i, j \rangle$. Also, if $|\sigma| = 2e$, then we also assign $R_e$ to $\sigma$, and if $|\sigma| = 2e+1$, we assign $Q_e$ to $\sigma$. For a requirement $M$, we will write $M_\sigma$ for the version of $M$ at guess $\sigma$. Again, we use lexicographical ordering, with $\infty <_L f$.


Definition 19.9 (a) We define the notions $\sigma$-stage, $ml(\sigma, s)$, and $\sigma$-expansionary by induction on $|\sigma|$. (i) Every stage $s$ is a $\lambda$-stage. (ii) Suppose that $s$ is a $\tau$-stage with $|\tau| = \langle i, j \rangle$. Let $l(\tau, s) = l(i, j, s)$. Define

$ml(\tau, s) = \max\{0, l(\tau, t) : t$ is a $\tau$-stage $< s\}$.

We say that $s$ is $\tau$-expansionary if $l(\tau, s) > ml(\tau, s)$ and declare $s$ to be a $\tau^\frown\infty$-stage. If $l(\tau, s) \leq ml(\tau, s)$, declare that $s$ is a $\tau^\frown f$-stage.
(b) We define $TP_s$ to be the unique $\sigma$ of length $s$ with $s$ a $\sigma$-stage.

Definition 19.10 (a) We say that $R_\sigma$ requires attention at stage $s$ if $W_{e,s} \cap A_s = \emptyset$ (where $2e = |\sigma|$), $s$ is a $\sigma$-stage, and one of the following holds. (i) $R_\sigma$ currently has no follower. (ii) $R_\sigma$ has a follower $x \in W_{e,s}$.

(b) We similarly define $Q_\sigma$ to require attention.

The Construction.
Step 1. Compute $TP_s$. Initialize all versions of requirements at guesses $\tau$ with $\tau \not\leq_L TP_s$.
Step 2. Find the $R_\sigma$ or $Q_\sigma$ of highest priority that requires attention at stage $s$. Without loss of generality, we will suppose this to be $R_\sigma$. Initialize all requirements at guesses $\tau$ with $\tau \not\subseteq \sigma$. Adopt the appropriate case below.
Case 1. Definition 19.10 (i) holds. Action. Appoint $x(\sigma, s) = s$ to follow $R_\sigma$. (Remember that $s$ is larger than all computations seen so far, by convention.)
Case 2. Definition 19.10 (ii) holds. Action. Enumerate $x$ into $A_{s+1}$.
End of Construction.
Verification. Let $TP$ be the leftmost path visited infinitely often; that is, $\lambda \subset TP$, and for all $\tau$, if $\tau \subset TP$, then $\tau^\frown\infty \subset TP$ iff $\exists^\infty s\, (\tau^\frown\infty \subseteq TP_s)$; otherwise, $\tau^\frown f \subset TP$.

Lemma 19.11 All the $R_e$ and $Q_e$ have versions that are met, and for all $\tau \subset TP$, $R_\tau$ (or $Q_\tau$) acts only finitely often.

Proof. This lemma is proven by induction on $e$. We consider $R_e$. Let $\sigma \subset TP$ with $|\sigma| = 2e$. Go to a stage $s_0$ where for all $\tau <_L \sigma$ and $s > s_0$: (i) if $\tau \not\subseteq \sigma$, $s$ is not a $\tau$-stage; (ii) if $M$ is a $Q$ or $R$ requirement assigned to $\tau$, then $M$ will not act at stage $s$.

Assuming $s_0$ to be least and a $\sigma$-stage, we can assume that either $W_{e,s_0} \cap A_{s_0} \neq \emptyset$ (in which case we are done), or $R_\sigma$ receives attention via Case 1, getting a follower $x$ at stage $s_0$. This follower is immortal by choice of $s_0$ and the induction hypothesis. It will succeed in meeting $R_e$ as in the basic module, since it has priority at each $\sigma$-stage.

Lemma 19.12 All the $N_{i,j}$ have versions that are met.


Proof. Again, we prove this by induction. Let $\sigma \subset TP$ with $|\sigma| = \langle i, j \rangle$. Choose $s_0$ as in Lemma 19.11, so that no higher-priority action can cause grief to $N_\sigma$. Now, if $\sigma^\frown f \subset TP$, we are done, since $\liminf_s l(i, j, s) < \infty$ and hence $\Phi_i(A) \neq \Phi_j(B)$.

So, we suppose that $\sigma^\frown\infty \subset TP$. To compute $\Phi_i(A; x)$ recursively, find the least $\sigma^\frown\infty$-stage $s = s(x) > s_0$ such that $l(\sigma, s) > x$. Note that this can be computed recursively from the parameters $s_0$ and $\sigma$. We claim that $\Phi_i(A; x) = \Phi_{i,s}(A_s; x)$. To see this, note that by Step 1 of the construction, we will initialize all $\tau \not\leq_L TP_s$ at stage $s$. In particular, at stage $s$, by choice of $s_0$, and since we appoint new followers to be large and we are never above or left of $\sigma^\frown\infty$ after stage $s_0$, the only numbers that are below $s$ and can enter $A$ or $B$ after stage $s$ are followers associated with $\gamma \supseteq \sigma^\frown\infty$. Such followers can only enter their target sets at $\sigma^\frown\infty$-stages $\geq s_0$; that is, they can only enter at $\sigma$-expansionary stages. In particular, as with the basic module, we can argue that for any $\sigma^\frown\infty$-stage $t \geq s$, at most one number $\leq mu(\sigma, t)$ can enter $A$ or $B$ before the next $\sigma^\frown\infty$-stage $t' > t$.

Thus, exactly as in the basic module, we have that

$\Phi_{i,s}(A_s; x) = \Phi_{i,t}(A_t; x) = \Phi_{j,s}(B_s; x) = \Phi_{j,t}(B_t; x) = \Phi_i(A; x) = \Phi_j(B; x)$,

for all $\sigma^\frown\infty$-stages $t > s$.

Before we look at the structure of parameterized reducibilities, we need one further technique: delayed diagonalization. This technique was developed in complexity theory, although it has found uses in classical recursion theory, such as in Downey and Shore [184]. The technique works roughly as follows. We aim to satisfy some requirement $R_e$, but are constrained in the construction so that we are not able to use an oracle to allow us to decide how to diagonalize at the time we wish. So what we do is to set things up so that if we pursue some strategy long enough, eventually we will meet our objectives. For example, we might want to diagonalize against some requirement, and we know that we will do so on one of an exponential number of strings. Now, maybe the action we need to take for $R_{e+1}$ depends upon exactly which string achieved the diagonalization. The trouble is, we might be constrained to work in polynomial time, so at the current stage, we cannot know which string achieved the diagonalization. We can only look at polynomially many of the exponentially many strings. The idea is that we "delay" the construction in the sense that we simply keep extending the set, marking time, until a stage is found where, "looking back," we can see which string did the job. For example, although at stage $s$ there are exponentially many strings of length $s$, at stage $t = 2^s$ there are only polynomially many (in $t$) strings of length $s$.

We give the classical example: Ladner's Density Theorem. We will need the following notation.
Notation. (i) We denote by $z_n$ the $n$-th string in the length-lexicographic ordering of $\Sigma^*$. (ii) We let $A \oplus B = \{1^\frown x : x \in A\} \cup \{0^\frown y : y \in B\}$.

We also need the following easy technical lemma.


Lemma 19.13 (Slow Enumeration Lemma) (i) Let $D$ be a recursive language. There exists a polynomial-time computable function $f$ such that the range of $f$ is $D$. (ii) Furthermore, we can ask that for all $y$, if $y \in \{f(z_0), \ldots, f(z_s)\}$ and $|f(z_{s+1})| > |y|$, then for all $s' > s$, $|f(z_{s'})| > |y|$ (the Length Increasing Property).

Proof. As $D$ is recursive, there is an injective recursive function $g$ with range $D$. We will have $f$ be a slowed-down version of $g$. We define $f$ in stages.
Stage 0. Let $f(\lambda) = g(\lambda)$. Let $n(0) = 1$. Declare $z_{n(0)}$ to be the waiting string.
Stage s+1. We will have a "waiting" string $z_n = z_{n(s)}$. See if $g(z_n)\downarrow$ in $\leq s+1$ many steps. If not, then set $n(s+1) = n(s)$ and define $f(p) = f(\lambda)$ for all strings $p$ of length $s+1$. If $g(z_n)\downarrow$ in $\leq s+1$ many steps, then for all strings $p$ of length $s+1$, define $f(p) = g(z_n)$. In this latter case, set $n(s+1) = n(s) + 1$.

Clearly, the construction of f gives a polynomial-time computable function such that the range of f is D. (ii) First arrange that g has the length increasing property, and then use the construction of (i).
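A sketch of the proof idea in code may help. Here `g_steps(n, s)` is a hypothetical interface reporting $g(z_n)$ if $g$ halts on $z_n$ within $s$ steps and `None` otherwise; since $f(p)$ depends only on $|p|$ and does one bounded simulation per length, $f$ is polynomial time:

```python
def make_slow_f(g_steps, f_lambda):
    """Return the slowed-down enumeration f. f_lambda plays f(lambda) =
    g(lambda), hardcoded at stage 0."""
    def f(p):
        n, value = 1, f_lambda
        for s in range(1, len(p) + 1):   # replay stages 1 .. |p|
            out = g_steps(n, s)          # has g(z_n) halted within s steps?
            if out is not None:
                value, n = out, n + 1    # emit g(z_n); advance the waiting string
            else:
                value = f_lambda         # mark time by repeating an old value
        return value
    return f
```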

Theorem 19.14 (Ladner's Density Theorem [321]) The polynomial-time degrees of recursive languages are dense; that is, let $A$ and $B$ be recursive languages with $A <^p_q B$ for $q \in \{T, m\}$. Then there exists a recursive language $C$ such that $C \leq^p_m B$ and $A <^p_q A \oplus C <^p_q B$.

Proof. We verify only the case $q = T$. The other one is virtually the same. We must build $C \leq^p_m B$ so that we satisfy the requirements below:

$R_e : \Gamma_e(A \oplus C) \neq B$,

$Q_e : \Gamma_e(A) \neq C$.

Actually, the strategy for meeting all the $R_e$ is obvious: simply let $C = \emptyset$. It cannot be that $\Gamma_e(A \oplus \emptyset) = B$, lest $B \leq^p_T A$. Similarly, the strategy for meeting all the $Q_e$ is again obvious. This time, simply set $C = B$, and then we must have $\Gamma_e(A) \neq C$. The difficulty is in the resolution of these two ideas, since making $C = \emptyset$ is patently incompatible with making $C = B$.

Ladner's idea is based on the following observation: We know that if we make $C = \emptyset$, then it must be that for some $x$,

$\Gamma_e(A \oplus C; x) \neq B(x)$.

The idea is that we will set $C(x) = 0 = \emptyset(x)$ long enough so that some $x$ with $\Gamma_e(A \oplus C; x) \neq B(x)$ appears, and then switch to making $C = B$ on longer strings for the next requirement, which will be of the $Q_e$ type. So the set $C$ will resemble, in the phrase of Wolfgang Maass, "$B$ with holes in it"; that is, it will appear as $B$ for long intervals and then as $\emptyset$ for long intervals. Which it currently emulates is decided by a polynomial-time relation $D(x)$, which, in turn, is decided by the delayed diagonalization "looking back" technique. Thus, we will define $C$ via $D$ via

$x \in C$ iff $[x \in B \wedge D(|x|) = 1]$.

Clearly then, if $D$ is polynomial-time computable, then $C \leq^p_m B$, since to decide if $x \in C$, we first compute $D(|x|)$. If this is 1, we have $x \in C$ iff $x \in B$, and if this is 0, we have $x \notin C$.

Now, by the Slow Enumeration Lemma, we will have $A$ and $B$ given as the ranges of polynomial-time computable functions $f$ and $g$, respectively. For convenience, we will write $A_s = \{f(0), \ldots, f(s)\}$ and $B_s = \{g(0), \ldots, g(s)\}$. Here, we are writing $z_i = i$ for convenience, and remind the reader of the length increasing property of these enumerations. Assuming that $D$ is polynomial-time computable, this implicitly gives $C_s$ via $B_s$.

Definition 19.15 (Certified Computations) For a polynomial-time procedure $\Gamma$, we will say that a computation of the form

$\Gamma_s(A_s \oplus C_s; x) =$ or $\neq B_s(x)$

or of the form

$\Gamma_s(A_s; x) =$ or $\neq C_s(x)$

(which halts in $\leq s$ steps) is certified at stage $s$ if we have that the latest elements of $A_s$ and $B_s$ (and hence all future elements of $A - A_s$ and $B - B_s$) have lengths exceeding the use of the relevant computation.

The crucial point of certified computations is that they are final. In particular, they are not giving us wrong information purely because of the slowness of the enumerations of $A$ or $B$. The above definition is essential to virtually all applications of delayed diagonalization and should be digested before proceeding.
Construction. The construction is specified by the definition of $D$ below. For convenience, we will also have another parameter, namely $req(s)$, the current requirement we are meeting.
Stage 0. Set $req(0) = R_0$ and $D(0) = 0$ (so that $C$ is emulating $\emptyset$ locally).
Stage s+1. There are two cases.
Case 1. $req(s) = R_e$ for some $e$ [and hence it will be that $D(s) = 0$]. Action. See if there is a $z_i$ with $i \leq s$ such that

$\Gamma_{e,s}(A_s \oplus C_s; z_i) \neq B_s(z_i)$

with certified computations.
Subcase 1.1. Some such $z_i$ exists. Action. Set $req(s+1) = Q_e$ and set $D(s+1) = 1$.
Subcase 1.2. No such $z_i$ exists. Action. Do nothing; that is, keep $req(s+1) = R_e$ and $D(s+1) = 0$.


Case 2. $req(s) = Q_e$ for some $e$. Action. See if there is some $z_i$ with $i \leq s$ such that

$\Gamma_{e,s}(A_s; z_i) \neq C_s(z_i)$

with certified computations.
Subcase 2.1. Some such $z_i$ exists. Action. Set $req(s+1) = R_{e+1}$ and $D(s+1) = 0$.
Subcase 2.2. No such $z_i$ exists. Action. Do nothing; that is, keep $req(s+1) = Q_e$ and $D(s+1) = 1$.
End of Construction.
Verification. We argue that all the requirements are met in order and that $req(s)$ runs through all the requirements. We do so by induction. So suppose we have met all requirements of higher priority than $R_e$, and at stage $s_0$ we set $req(s_0) = R_e$. Thus, we would set $D(s_0) = 0$. Now, at each stage $s \geq s_0$, we keep $req(s) = R_e$ and $D(s) = 0$ while we cannot see some $z_i$ with $i \leq s$ and $\Gamma_{e,s}(A_s \oplus C_s; z_i) \neq B_s(z_i)$ with certified computations. By the nature of certified computations, if we see some such $z_i$, then we will know that, in fact,

$\Gamma_e(A \oplus C; z_i) \neq B(z_i)$.

Furthermore, we will move on to $Q_e$ and never set $req(s') = R_e$ at any stage $s' \geq s$. Thus, it suffices to argue that some such stage $s$ with $\Gamma_{e,s}(A_s \oplus C_s; z_i) \neq B_s(z_i)$ via certified computations exists. If we suppose not, then we must have that $\Gamma_e(A \oplus C) = B$. But the key thing to note is that $C =^* \emptyset$; that is, except for a finite piece, $C$ is $\emptyset$. Because of this, we can put the nonempty finite piece in a table lookup and then get $\Gamma'_e(A \oplus \emptyset) = B$, where $\Gamma'$ emulates $\Gamma$ except on the finite portion where $C$ is not $\emptyset$. There, $\Gamma'$ uses the table lookup instead. But then we have just described a reduction reducing $B$ to $A$ via $\Gamma'$, and that is a contradiction. Thus, we must have some stage $s$ with $\Gamma_{e,s}(A_s \oplus C_s; z_i) \neq B_s(z_i)$ via certified computations.

The case $req(s_0) = Q_e$ is essentially the same and is left as an easy exercise for the reader (Exercise 9).

Finally, note that $C \leq^p_m B$, since $D$ is computable in quadratic time: all the searches are bounded by $s$, and there are $s$ of them.
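To summarize the shape of the construction: everything reduces to the polynomial-time predicate $D$, computed by "looking back" over earlier stages. In the compact sketch below, `found_disagreement(req, s)` is a hypothetical abstraction of the certified-computation search at stage $s$, and `in_B` decides $B$:

```python
def D(n, found_disagreement):
    """Replay stages 0..n-1, flipping D and advancing the requirement
    index each time a certified disagreement is spotted."""
    req, d = 0, 0
    for s in range(n):
        if found_disagreement(req, s):
            req, d = req + 1, 1 - d
    return d

def in_C(x, in_B, found_disagreement):
    """Membership in C: x is in C iff x is in B and D(|x|) = 1."""
    return in_B(x) and D(len(x), found_disagreement) == 1
```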

We remark that although proving that the polynomial-time degrees of recursive sets are dense only uses a (delayed) diagonalization argument, the $T$-degrees of recursively enumerable sets are also dense, but this argument is a very interesting infinite injury argument. (See Refs. [402] and [430], Chap. VIII.)

It is also true that the polynomial-time degrees of all sets are dense, this being a theorem of Juichi Shinoda [420]. To establish Shinoda's density theorem, we need to use the so-called "speedup" technique. We will not need this technique for the purposes of the present book, although it is used in many powerful degree constructions. However, for completeness, we will briefly treat this technique, and the reader is referred to the exercises of this and the next section for some typical applications.


The technique is best thought of as a miniaturization of the finite extension technique in classical degree arguments. The best arena to demonstrate the technique is the construction of a minimal pair of polynomial-time degrees. We remark that it is easy to prove this theorem using delayed diagonalization (see Exercise 16).

Theorem 19.16 (Ladner [321]) There exists a minimal pair of polynomial-time degrees; that is, there exist recursive languages $A$ and $B$ such that $A, B \notin P$, and for all $C \leq^p_T A, B$, we have $C \in P$.

Proof. We must meet the following requirements:
$P_e : \Gamma_e(\emptyset) \neq A$.
$Q_e : \Gamma_e(\emptyset) \neq B$.
$N_e : \Delta_e(A) = \Lambda_e(B) = f \Rightarrow f \in P$.

Here, $(\Delta_e, \Lambda_e)_{e \in \mathbb{N}}$ is an enumeration of all pairs of $P$-time procedures. Now, first, pretend that we were actually using the finite extension method to build a minimal pair of $T$-degrees below $\mathbf{0}'$, in the style of Theorem 19.5. (Thus, $\Gamma$, $\Lambda$, and $\Delta$ are now classical Turing procedures.) To meet the $P_e$ at a stage $s$ would be easy. Given $A_s$ and $B_s$, we would pick a large follower $x$ and compute $\Gamma_e(\emptyset; x)$, and then make $A(x) \neq \Gamma_e(\emptyset; x)$. The $Q_e$ are met analogously. To meet the $N_e$, at some stage $t$ we would ask the following:

$\exists x, \sigma, \tau\, (\sigma \supset A_t \wedge \tau \supset B_t \wedge \Delta_e(\sigma; x) \neq \Lambda_e(\tau; x))$.

If the answer is "yes," then we would extend At to be a and extend Bt to be r, causing ~e(A)(x) =I Ae(B)(x) since A extends At+! = a and B extends Bt+l = r. If no such a, r, and x exist, then we can effectively compute ~e(A)(x) by simply computing ~e(fJ; x) on any fJ extending At.

To miniaturize the above to recursive languages, we must bound the searches for $\sigma$, $\tau$, and $x$. This bounding causes the argument to actually become a finite injury argument, as we will now see.

The overall structure of the argument is to consider the requirements in cycles. Initially, the cycle consists of one requirement, the one of highest priority. Arrange the requirements in some descending priority ordering $R(0), R(1), \ldots$ (i.e., so that $R(i) \in \{P_j, Q_j, N_j\}$). At any stage $s$, the cycle will be a finite collection of requirements $\mathcal{R} = R(n_s), \ldots, R(0)$, arranged in ascending order of priority. [Thus, $R(i+1)$ has lower priority than $R(i)$, and $R(0)$ has highest priority for the construction.] For the $\mathcal{R}$-cycle, we will consider the requirements $R(n_s)$, then $R(n_s - 1)$, etc., in order, and only finish the $\mathcal{R}$-cycle when we finish considering $R(0)$. At the next stage, we would begin the $\mathcal{R}^+ = R(n_s + 1), R(n_s), \ldots, R(0)$-cycle. It is crucial to the construction that while we are in the $\mathcal{R}$-cycle we consider the requirements of the $\mathcal{R}$-cycle one at a time in reverse order of priority and consider no other requirements.
Construction. Suppose that we are in the $\mathcal{R}$-cycle. Let $Begin(s)$ be the initial stage where the current $\mathcal{R}$-cycle began.

At stage $s$, we will have the parameter $p(s)$ and be considering a requirement $R(j)$, and have a pending requirement $R(s)$ with a pending action $act(s)$. We say that $R(j)$ asserts control of the construction. It does so until a stage $p'(s)$, as described below. $p'(s)$ is determined by what type of requirement $R(j)$ is and by the construction.

At the end of stage $p'(s)$, we will pass control of the construction to $R(j-1)$. We will make a decision either to make $R(j)$ the new pending requirement with a new pending action $act(p'(s))$, or to keep the old pending action and requirement. If it is the case that $R(j) = R(0)$, we actually perform the pending action. Since things only change when $s = p(s)$, we will assume that this is the case.
Case 1. $R(j) = P_e$ for some $e$. [Or $R(j) = Q_e$, dually.] If $R(j)$ is already met, replace $R(j)$ by $R(j-1)$; the pending action and requirement both remain the same.

Assuming that R(j) is not as yet met, we wish to know the result of

$\Gamma_e(\emptyset; 1^{p(s)})$.

We let $p'(s)$ be sufficiently large that $\Gamma_e(\emptyset; 1^{p(s)})$ halts in fewer than $p'(s)$ steps. (For instance, $p'(s) = 2^{\gamma_e(p(s))}$, where $\gamma_e$ is the running time of $\Gamma_e$, would be more than sufficient.)

At stage $p'(s)$, $R(j)$ will become the pending requirement, and we declare $R(j)$'s pending action, $act(p'(s))$, to be

"Extend ABegin(s) and BBegin(s) via the empty extension to length p(s) except possibly for A and I P(s) which will be assigned as A(I P(s») = 1 - re(0; I P(s»)."

Now replace $R(j)$ by $R(j-1)$. (The reader should notice that if the requirement is of the $P_e$ or $Q_e$ form, then either it will have been met or it will become the pending requirement. This fact is not true of an $N_e$, which may never become the pending requirement.)
Case 2. $R(j) = N_e$ for some $e$. Again, if $R(j)$ is already met, replace $R(j)$ by $R(j-1)$; the pending action and requirement both remain the same.

Assuming that $R(j)$ is not as yet met, choose $p'(s)$ to be sufficiently large that we can see in $p'(s)$ many steps whether there are any extensions $\sigma$ of $A_{Begin(s)}$ and $\tau$ of $B_{Begin(s)}$ of length $2^{\max\{\delta_e(p(s)), \lambda_e(p(s))\}}$ (where $\delta_e$ and $\lambda_e$ are the uses of $\Delta_e$ and $\Lambda_e$, respectively), such that for some $z$ with $|z| \leq p(s)$,

$\Delta_e(\sigma; z) \neq \Lambda_e(\tau; z)$.

[Notice that since the reductions are polynomially time bounded, we can keep $R(j)$ in control until such a stage is found. The stage will be essentially double exponential in the uses of the reductions.]
Subcase 2a. No such $\sigma$, $\tau$, $z$ exist. Action. Do nothing: the pending requirement and action remain the same, and we replace $R(j)$ by $R(j-1)$.
Subcase 2b. We find such a triple $\sigma$, $\tau$, $z$. Action. $R(j)$ becomes the pending requirement, and the pending action becomes

"Extend ABegin(s) to G and BBegin(s) to 'f."

At the End of the $\mathcal{R}$-Cycle. At the stage $t$ where we reach the end of the $\mathcal{R}$-cycle, we actually perform the pending action, and we make $Begin(t+1) = t+1$.


End of Construction.
It is now routine to verify the construction. If $R(j)$ is the pending requirement at the end of some $\mathcal{R}$-cycle, then its pending action will be done. Once the requirement is acted for, it is met and, hence, will never again have an effect on a cycle. Therefore, we can see that all the $P_e$ and $Q_e$ are met.

Finally, to see that all the $N_e$ are met, first note that if ever we act for $N_e$ at the end of a cycle, we must meet it by creating a disagreement. Assume that $\Delta_e(A) = \Lambda_e(B)$. Go to a stage $s_0$ where for all $s \geq s_0$ and $j < e$, $N_j$ is never the pending requirement at stage $s$. It follows that the $N_j$ for $j < e$ have no further effect on the construction. (This is similar to the proof of Blum's Speedup Theorem [163].) Furthermore, at all stages $t > s_0$, if $t$ is the beginning of a new cycle, then $Begin(t) = t$. But, of course, $Begin(s) < p(s)$ for such stages. Since we are assuming that $N_e$ never becomes the pending requirement, we know that for all possible incarnations $\sigma$ and $\tau$ of $A_t$ and $B_t$ and all $z$ with $|z| \leq Begin(t)$, it is the case that

$\Delta_e(\sigma; z) = \Lambda_e(\tau; z)$.

But then, as $Begin(t) \to \infty$, we can deduce that $\Delta_e(A) \in P$, since to compute $\Delta_e(A; z)$ we simply go to the first stage $q > s_0$ with $q > z$. The value of $\Delta_e(A; z)$ is then $\Delta_e(\sigma; z)$, where $\sigma$ is the empty extension of $A_{Begin(q)}$.

We remark that the speedup technique has the advantage over the delayed diagonalization technique that it can be used to embed nondistributive lattices into degree structures (Ambos-Spies [20]). It does have the disadvantage that it produces languages that are not elementarily recursive, since there are many nested iterations needed to figure out the final incarnation of the initial segment of a language. Below are some exercises to sharpen the reader's skills in the new techniques introduced in this chapter. The reader unfamiliar with the techniques above should attempt to complete these before tackling the more intricate arguments in the next section.

Historical Notes

The basic diagonalization argument is attributed to Cantor [114]. The applications of this method to classical recursion theory were known to workers in the field in the 1930s, such as Post, Kleene, Church, Rosser, Turing, and Gödel. The basic halting problem is often attributed to Turing [449], although it is essentially present in Gödel [245]. The finite extension method is a refinement of the diagonalization technique which grew out of Post's classic article [375] and is apparently due to Post, who was terminally ill at the time. The final article [300] was prepared by Kleene. These ideas are now known to be precursors of Cohen's set-theoretic forcing [127]. There are other precursors, such as the work of Nerode [362]. The reader is referred to Kunen [315] as a basic text on forcing. Forcing arguments have recently found many applications to structural complexity (see Fortnow [228]). Within a short time after the publication of the ground-breaking Kleene-Post article [300], Post's problem was more or less simultaneously solved by Friedberg in the United States and Muchnik in Russia [233, 357]. Both authors were students at the time. Almost immediately, people realized the power and fundamental nature of the priority technique these authors introduced. Later, Shoenfield and Sacks [403] introduced the infinite injury method via intricate combinatorial methods rather like infinite "pinball machines." This model was later suggested by Lerman [329] (and found to be very valuable in certain circumstances). The tree of strategies method has its roots in the seminal articles of Lachlan [317] and Yates [470], where the idea of nested strategies is introduced. The idea has earlier precursors in Friedberg's e-state construction of a Maximal Set (Friedberg [234]). Tree of strategy arguments are now ubiquitous in modern recursion theory and seem to have first been explicitly introduced in Ladner [320]. They were the key to the $\mathbf{0}'''$, so-called "monstrous injury," method of Lachlan [318, 319]. This method was later developed and refined by Harrington and is the mainstay of many modern arguments, as the construction "lives" at the level of $\mathbf{0}'''$. The reader is referred to Soare [431] for more details and for an account of the main lines of development at least until 1987. A recent and very interesting application of the tree of strategy technique in classical complexity theory can be found in [284].

Ladner [321] seems to be the first author to study the structure of recursive sets under reducibilities of bounded complexity. That article introduced many of the basic techniques, such as delayed diagonalization. (In fact, one can also find there a primitive form of the speedup technique.) The speedup technique, at least in its modern incarnation, has its roots in Blum [62], but its explicit development is due to Ambos-Spies [20]. (See Exercise 19.1.17.) We remark that the speedup technique has been used for infinite injury arguments in [126], [161], and [421]. Of course, there had been earlier studies in structural complexity, such as those by Savitch [408], Hartmanis and Stearns [264], and Blum [62]. These articles studied complexity classes and established many of the fundamental results concerning these objects. We refer the reader to Odifreddi [365]. We do not study abstract complexity classes here. Baker et al. introduced the controversial subject of oracles in [48]. Oracles have had an uneasy relationship with complexity theorists ever since their introduction. The primary use of oracles comes from the observation that they show us that "techniques that relativise are not sufficient to solve problem Q," where Q, for instance, is P = NP. The trouble is that we find it hard to know what the statement actually means. This is especially true in view of recent nonrelativising results such as IP = PSPACE of Shamir [418]. We refer the reader to [124] and [224] for discussions of the role of relativisation in complexity theory. We also refer the reader to [45] for an excellent introduction to structural complexity.

Exercises 19.1

19.1.1. (Folklore, after Post) Prove that a language L is recursively enumerable iff L is the domain of a partial recursive function iff L is finite or the range of an injective partial recursive function.


19.1.2. (Rice [386], Rice's Theorem) An index set I is a set such that for all partial recursive functions φ_e and φ_j, if φ_e = φ_j and e ∈ I, then j ∈ I. Prove Rice's Theorem: An index set I is recursive iff I = ∅ or I = Σ*.

[Hint: Suppose not. Let I be a nontrivial index set. Without loss of generality, we may assume that I contains an index of a function φ_e that is not everywhere undefined and that the everywhere undefined function has its indices in the complement of I. Consider the partial recursive function defined via ψ(x, y) = φ_e(y) if φ_x(x)↓, and ψ(x, y)↑ otherwise. Now, by the s-m-n theorem, there is a recursive family of partial recursive functions φ_{s(x)} such that for all y, φ_{s(x)}(y) = ψ(x, y). Now, use the fact that I is recursive to get a decision procedure for the halting problem using the family φ_{s(x)}.]

19.1.3. (Folklore) (i) Prove that the index set (see the question above) Tot = {e : φ_e is total} is not recursively enumerable. (ii) Prove that the index set Inf = {e : φ_e has infinite domain} is neither recursively enumerable nor co-recursively enumerable. (iii) Prove that Tot and Inf are Π₂ complete.

[Hint: (i) and (ii) use diagonalization. For (iii), for Tot, say, let L be a Π₂ language. Thus, there is a recursive relation R such that for all x, x ∈ L iff ∀y∃t R(x, y, t). Build a recursive family of partial recursive functions f_e, e ∈ N (via, say, the s-m-n theorem), such that for all x, f_x is total iff x ∈ L. To do this, allow f_x(y)↓ only when some t is found so that R(x, y, t) holds.]

19.1.4. (Folklore) Use the finite extension method to construct languages A and B below 0′ such that A ≡_T B yet A ≰_m B and B ≰_m A.

[Hint: To get A ≡_T B, use the following coding. For x a power of 4, we have x ∈ A iff x ∈ B or 2x ∈ B, and for x a power of 25, we have x ∈ B iff x ∈ A or 5x ∈ A. To make A ≰_m B, meet the requirements

R_e : φ_e is not a reduction witnessing A ≤_m B.

We meet R_e by using some witness x which is a power of 4 and waiting to see if φ_e(x)↓. Then, one of x or 2x will be used to diagonalize against φ_e, since at most one of them can equal φ_e(x).]
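To make the case analysis concrete, here is a minimal Python sketch of the diagonalization step, under toy assumptions: phi is one candidate m-reduction supplied as a total Python function (in the real construction, one must first wait for φ_e(x) to converge), A and B are dictionaries holding the finite conditions built so far, and the witness row x, 2x is assumed still fresh.

def defeat(phi, A, B, j):
    """Force A(x) != B(phi(x)) for the witness x = 4**j, while keeping
    the coding 'x in A iff (x in B or 2x in B)' used for A =_T B."""
    x = 4 ** j                    # fresh witness, a power of 4
    y = phi(x)
    if y not in (x, 2 * x):
        B.setdefault(y, 0)        # freeze B(y); 0 if not yet decided
        want = 1 - B[y]           # we will realize A(x) = 1 - B(y)
    else:
        B[y] = 0                  # y is one of x, 2x: pin it to 0 ...
        want = 1                  # ... and realize A(x) = 1 via the other
    other = 2 * x if y == x else x
    B[other] = want               # legal: y != other, so B(other) was free
    A[x] = B.get(x, 0) or B.get(2 * x, 0)    # the coding equation
    assert A[x] != B[y]

defeat(lambda n: n + 1, {}, {}, 3)    # settles the witness 4**3 = 64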

19.1.5. (Kleene and Post, more or less) A language A is called low if A′ (= {e : Φ_e(A; e)↓}) is recursive in 0′. Construct a nonrecursive low language.

[Hint: Use the finite extension method. Build A in stages to meet the requirements

R_e : A ≠ W_e (where W_e denotes the e-th recursively enumerable set)

at odd stages. At even stages 2e, try to force Φ_e(A_{2e}; e)↓ by searching for some τ extending A_{2e−1} with Φ_e(τ; e)↓. Keep the construction recursive in 0′.]

19.1.6. (Jockusch and Posner) A language L is called 1-generic if for each recursively enumerable set of strings V_e, one of the following pertains: (i) There is some σ ∈ V_e which is an initial segment of L. (Here, V_e denotes the e-th recursively enumerable set of strings.) (ii) There is some initial segment τ of L such that for all γ ∈ V_e, τ is not an initial segment of γ. Use the finite extension method to construct a 1-generic language below 0′.

[Hint: Meet the requirements

R_e : (∃σ ∈ V_e)[σ ≺ L] ∨ (∃τ ≺ L)(∀γ ∈ V_e)[τ ⊀ γ].

Do so by attending to R_e at stage e. See whether there is a σ in V_e with σ extending L_e. If not, then use L_e to play the role of τ.]

19.1.7. (Baker, Gill, and Solovay [48]) Construct a recursive oracle A with NP^A ∩ coNP^A ≠ P^A.

[Hint: Construct A and an auxiliary set B where y ∈ B iff ∀z[|z| = 2|y| → z ∈ A] iff ∃q[|q| = 2|y| + 1 ∧ q ∈ A]. Then, B ∈ NP^A ∩ coNP^A. Now, meet the same diagonalization requirements as in the proof that there is an oracle with NP^A ≠ P^A. Care must be exercised to keep the two iff statements compatible. Note that if |y| is chosen to be long enough, then whenever some z with |z| = 2|y| is queried, we can add z to an incarnation of A we build without adding all of the length 2|y| strings.]

19.1.8. (Baker, Gill, and Solovay [48]) Construct an oracle A with NP^A = P^A.

[Hint: Consider the language

K^A = {⟨x, e, 1ⁿ⟩ : some computation of M_e^A (the e-th nondeterministic oracle Turing machine) on x halts in ≤ n steps}.

Then, K^A is automatically NP^A-complete for any A. Now, recursively construct a language A so that A = K^A.]
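The equation A = K^A is definable by recursion because an n-step computation can only query strings shorter than its own code. Below is a minimal Python sketch of such a self-referential definition, with hypothetical stand-ins: "machines" here are Python predicates that may call the oracle only on codes with a strictly smaller step bound, which is exactly what keeps the recursion well founded.

from functools import lru_cache

def m0(x, n, oracle):                 # may query only codes with smaller n
    return any(oracle((y, 1, n - 1)) for y in range(x))

def m1(x, n, oracle):                 # queries nothing at all
    return x % 2 == 0

MACHINES = {0: m0, 1: m1}

@lru_cache(maxsize=None)
def A(code):
    """A = K^A: (x, e, n) is in A iff machine e accepts x 'within n
    steps' relative to A itself; all oracle calls recurse on smaller n."""
    x, e, n = code
    if n <= 0 or e not in MACHINES:
        return False
    return MACHINES[e](x, n, A)

print(A((3, 0, 2)))                   # True: some y < 3 is even

The point of the sketch is only the well-foundedness of the recursion; the real construction must also respect the step counts exactly, so that K^A is NP^A-complete.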

19.1.9. Provide the details of the case req(s₀) = Q_e in the proof of Ladner's Density Theorem.

19.1.10. (Friedberg, essentially) Use the priority method to construct a low nonrecursive recursively enumerable set.

[Hint: Construct A = ∪_s A_s in stages to meet

P_e : A ≠ W_e.

Here, W_e denotes the e-th recursively enumerable set.

N_e : (∃^∞ s)(Φ_{e,s}(A_s; e)↓) ⇒ Φ_e(A; e)↓.]

19.1.11. (Trakhtenbrot, see Soare [431], VII.2.5) A recursively enumerable language L is called autoreducible if there is a Turing procedure Φ such that for all x,

L(x) = Φ(L ∪ {x}; x).

That is, determining whether x is a member of L can be ascertained from L without asking directly "Is x ∈ L?" Examples include complete theories, where one asks instead whether ¬x ∈ L. The concept is due to Trakhtenbrot. Use a finite injury priority argument to prove that there is a recursively enumerable nonautoreducible language.

[Hint: Build L to meet the requirements

R_e : (∃x)[Φ_e(L ∪ {x}; x) ≠ L(x)].

Use followers. For a follower x, wait until

Φ_{e,s}(L_s ∪ {x}; x)↓ = 0 = L_s(x).

If such a stage s occurs, put x into L_{s+1} and initialize all lower-priority requirements.]
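Here is a toy Python sketch of this follower strategy, with hypothetical stand-ins: each "functional" is a total Python predicate PHI[e](oracle, x), where oracle is the membership function of L_s ∪ {x}; in reality, one must dovetail the Φ_{e,s} computations rather than assume totality.

PHI = [lambda o, x: o(x + 1),        # toy functional guessing via x+1
       lambda o, x: 0]               # toy functional always answering 0

L = set()
followers = {e: 10 * (e + 1) for e in range(len(PHI))}   # fresh followers
satisfied = set()

for stage in range(20):
    for e in range(len(PHI)):
        if e in satisfied:
            continue
        x = followers[e]
        oracle = lambda q, x=x: 1 if (q == x or q in L) else 0
        if PHI[e](oracle, x) == 0 and x not in L:
            L.add(x)                            # diagonalize: L(x) = 1 != 0
            satisfied.add(e)
            for j in range(e + 1, len(PHI)):    # initialize lower priority
                followers[j] = 10 * (j + 1) + stage + 11   # fresh again
            break                               # one action per stage

print(sorted(L), satisfied)                     # [10, 31] {0, 1}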

19.1.12. (Sacks [402]) The following exercise is an example of a finite injury argument of the unbounded type. (i) Let A be a recursively enumerable nonrecursive set. Prove that there exist disjoint recursively enumerable sets A₁ and A₂ with A = A₁ ∪ A₂ and with A₁ Turing incomparable with A₂. Indeed, given any nonrecursive recursively enumerable language C, we can ensure that C ≰_T A_i for i = 1, 2. (ii) Deduce the following result of Friedberg [233]: There is no minimal recursively enumerable degree.

[Hint: Let C = ∪_s C_s. Build A_i = ∪_s A_{i,s} in stages by a priority argument to meet the requirements below:

R_{i,e} : Φ_e(A_i) ≠ C.

At every stage, we must put any number y entering A_{s+1} − A_s into exactly one of A_{1,s+1} − A_{1,s} or A_{2,s+1} − A_{2,s}. This makes A = A₁ ∪ A₂ with A₁ and A₂ disjoint. To meet R_{i,e}, we define the length of agreement function

ℓ_i(e, s) = max{x : ∀y < x[Φ_{e,s}(A_{i,s}; y) = C_s(y)]}.

The idea is that once ℓ_i(e, s) > t for any t, we try to preserve ℓ_i(e, s′) > t for all stages s′ > s. The way we do this preservation is to put numbers entering A_{s+1} − A_s below the use of the ℓ_i(e, s) computations into the other set A_{3−i,s+1} and not into A_{i,s+1}. Exactly into which set we put a y entering A_{s+1} − A_s is determined by priorities. Argue that once R_{i,e} has top priority, it will direct all elements y below the use of ℓ_i(e, s) into A_{3−i,s}; hence, if we suppose that we fail to meet R_{i,e}, so that ℓ_i(e, s) → ∞, we can deduce that C is recursive, a contradiction.]

19.1.13. (Downey and Welch [185], Ambos-Spies [21]) Modify the construction of a minimal pair (Theorem 19.8) to construct a recursively enumerable nonrecursive set A such that for all recursively enumerable sets A₁ and A₂, if A₁ and A₂ are disjoint and A₁ ∪ A₂ = A, then the degrees of A₁ and A₂ form a minimal pair.

19.1.14. (Thickness Lemma, Shoenfield) A recursively enumerable set A is called piecewise recursive if A = ∪_e A^(e) with A^(e) ⊆ N^(e) =def {⟨x, e⟩ : x ∈ N}, and such that for all e, A^(e) is recursive. Let A be a recursively enumerable piecewise recursive set, and let C be a nonrecursive recursively enumerable set. Furthermore, suppose that for all e, A^(e) is an initial segment of N^(e), so that it is either {⟨0, e⟩, ..., ⟨n(e), e⟩} or N^(e). Use the infinite injury method to construct a recursively enumerable subset B of A such that for all e, |A^(e) − B^(e)| < ∞ and C ≰_T B. B is called a thick subset of A.

[Hint: Meet the requirements R_e : |A^(e) − B^(e)| < ∞ and N_e : Φ_e(B) ≠ C. To meet the R_e, you must make sure that almost all of the e-th column of A gets into B. To meet the N_e, use the Sacks preservation strategy of Exercise 19.1.12. The only problem is that infinitely often one can try to preserve a computation Φ_{e,s}(B_s; x) = C_s(x), only later to have some ⟨z, i⟩ with i < e and ⟨z, i⟩ < u(Φ_{e,s}(B_s; x)) enter A_t − A_s, and hence get into B if we are to meet R_i. (It is only reasonable, by priorities, that N_e can control A^(i) precisely when i > e.) The solution to this dilemma is to have two or more versions of N_e guessing whether A^(i) is finite or not (i.e., for each i ≤ e), and "not believing" a computation unless it looks correct according to the relevant guess.]

19.1.15. (Downey and Jockusch [183]) Use the infinite injury method to construct an m-topped incomplete nonrecursive recursively enumerable degree; that is, construct an incomplete recursively enumerable nonrecursive set A such that for all recursively enumerable sets C ≤_T A, C ≤_m A.

[Hint: Construct A and an auxiliary recursively enumerable set C. Meet the requirements P_e : Ā ≠ W_e, Q_e : Φ_e(A) ≠ C, and

R_{i,j} : Φ_i(A) = W_j ⇒ W_j ≤_m A.

We meet P_e and Q_e by Friedberg strategies. To meet the R_{i,j}, proceed as follows. Because we are only interested in pairs (i, j) for which Φ_i(A) = W_j, we will suppose that once ℓ(i, j, s) > x (where ℓ(i, j, s) = max{x : ∀y < x(Φ_{i,s}(A_s; y) = W_{j,s}(y))}) for the first time, we will define a value f(i, j, x), a large fresh number, and promise that

x ∈ W_j iff f(i, j, x) ∈ A.

Thus, at expansionary stages, if we see x ∈ W_{j,s}, we will put f(i, j, x) into A_t at some stage t ≥ s. This interferes with the meeting of the Q_e because we might try to preserve Φ_{e,s}(A_s; z) to cause a disagreement Φ_{e,s}(A_s; z) ≠ C_s(z), only to be forced later to kill this disagreement by putting some f(i, j, x) into A below u(Φ_{e,s}(A_s; z)) (since ⟨i, j⟩ < e). As usual, since R_{i,j} is infinitary, this cycle could recur infinitely often, causing us to never meet Q_e. The solution is for the requirements to guess whether there are infinitely many (i, j)-expansionary stages, and if Q_e is guessing that R_{i,j} is infinitary, it should only act when it gets a "believable" computation: one where all the coding markers f(α, x) of higher priority are in their final positions.]


19.1.16. (Ladner [321]) Use the delayed diagonalization method to prove that there exist minimal pairs of recursive languages under polynomial-time reducibility.

[Hint: Meet the requirements

P_e : B ≠ Γ_e(∅),

R_e : A ≠ Γ_e(∅),

N_e : Δ_e(A) = Λ_e(B) = f ⇒ f ∈ P.

One can meet P_e (and R_e) by followers. Pick a follower x and wait for Γ_e(∅; x) to give a value; then make B different from this value. In conjunction with the N_e, this uses delayed diagonalization, as we will see. The interesting requirements are the N_e. The key idea is that if A is empty, then Δ_e(A) = Δ_e(∅) must be in P. So we can make B ∉ P and still meet all the N_e at the expense of making A = ∅. The same is true with the roles of A and B reversed. So, the idea is that while we want to meet a P_e, we keep A empty long enough that we get to win P_e. Note that once we put x into B, say, if we wait for an exponential number of steps, making both A and B empty extensions, we will be able to switch from making A empty to making B empty and move on to the next R_e.]
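The following is a minimal Python sketch of this phase-switching, under toy assumptions: GAMMA[e] is a total Python function standing in for Γ_e(∅; ·), strings are coded by their lengths n, and the certified-computation bookkeeping is omitted. One side is kept empty while the other diagonalizes, and the switch happens only after an exponentially long wait.

GAMMA = [lambda x: 0, lambda x: 1, lambda x: x % 2]   # toy Gamma_e(empty; x)

A, B = {}, {}
side, e, wait_until = 'B', 0, 0
for n in range(64):                       # decide inputs of "length" n
    A.setdefault(n, 0); B.setdefault(n, 0)
    if e >= len(GAMMA) or n < wait_until:
        continue                          # delay: both sides stay empty here
    target = B if side == 'B' else A
    target[n] = 1 - GAMMA[e](n)           # diagonalize against Gamma_e
    wait_until = 2 ** (n + 1)             # exponential wait before switching
    side = 'A' if side == 'B' else 'B'    # now the other side gets its turn
    e += side == 'B'                      # advance e once both sides acted

print([n for n in A if A[n]], [n for n in B if B[n]])   # [2] [0]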

19.1.17. (Ambos-Spies [20]) Historically, this is Ambos-Spies' first application of the speedup technique. Use the speedup technique to show that if A is a recursive language not in P, then there is a recursive language B not in P forming a minimal pair with A.

19.2 Results

Now, we turn to the analysis of our parameterized languages. So far, our results have only needed the working definition of parameterized reducibility, Definition 9.3, which was of the form

⟨x, k⟩ ∈ L iff ⟨x′, k′⟩ ∈ L′.

Now, we will actually need to differentiate carefully among the technical differences between the various flavors of uniformity of the various reducibilities. Thus, we recall the relevant definitions below.

Definition 19.17 (Uniform Fixed-Parameter Reducibility) Let A and B be parameterized problems. We say that A is uniformly fixed-parameter reducible to B if there is an oracle procedure Φ, a constant α, and an arbitrary function f : N → N such that (a) the running time of Φ(B^{(f(k))}; ⟨x, k⟩) is at most f(k)|x|^α, (b) on input ⟨x, k⟩, Φ only asks oracle questions of B^{(f(k))}, where

B^{(f(k))} = ∪_{j ≤ f(k)} B_j = {⟨x, j⟩ : j ≤ f(k) & ⟨x, j⟩ ∈ B},

(c) Φ(B) = A.

416 19. Structure of Languages Under Parameterized Reducibilities

If A is uniformly fixed-parameter reducible to B, we write A ≤^u_T B. Where appropriate, we may say that A ≤^u_T B via f. If the reduction is many:1 (an m-reduction), we will write A ≤^u_m B.

Definition 19.18 (Strongly Uniform Reducibility) Let A and B be parameterized problems. We say that A is strongly uniformly fixed-parameter reducible to B if A ≤^u_T B via f where f is recursive. We write A ≤^s_T B in this case.

Definition 19.19 (Nonuniform Reducibility) Let A and B be parameterized problems. We say that A is nonuniformly fixed-parameter reducible to B if there is a constant α, a function f : N → N, and a collection of procedures {Φ_k : k ∈ N} such that Φ_k(B^{(f(k))}) = A_k for each k ∈ N, and the running time of Φ_k is at most f(k)|x|^α. Here, we write A ≤^n_T B.
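To keep the three flavors apart, the following schematic Python contrast may help; everything in it is a toy stand-in (running times and oracle-use bounds are not modeled). The point is only where the uniformity lives: one procedure with a computable f, one procedure with an arbitrary f supplied as data, or a separate procedure for every k.

def strongly_uniform(B):
    f = lambda k: 2 * k + 1                  # f recursive: we can compute it
    return lambda x, k: B(x, f(k))           # one procedure Phi for all k

def uniform(B, f_table):
    # f is an arbitrary (possibly noncomputable) function, given as data
    return lambda x, k: B(x, f_table[k])     # still one procedure Phi

def nonuniform(B, procs):
    # a separate procedure for each k; no single machine need exist
    return lambda x, k: procs[k](B, x)

B = lambda x, j: (x + j) % 2 == 0            # toy parameterized oracle
print(strongly_uniform(B)(3, 1))
print(uniform(B, {1: 7})(3, 1))
print(nonuniform(B, {1: lambda B, x: B(x, 0)})(3, 1))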

We will need the following technical result.

Lemma 19.20 ([168]) (i) Suppose that A ≤^u_T B (or A ≤^u_m B) with A and B recursive. Then, there exists a recursively enumerable function f [i.e., a Δ₂ function with a recursive approximation f(x, s) such that f(x, s + 1) ≠ f(x, s) implies f(x, s) ≠ f(x, t) for all t > s] such that A ≤^u_T B (resp. A ≤^u_m B) via f.

(ii) Suppose that A ≤^n_T B (or A ≤^n_m B) with A and B recursive. Then, there exists a recursively enumerable function f such that A ≤^n_T B (resp. A ≤^n_m B) via f.

Proof. We do (i) for ≤^u_T, the others being essentially similar and left to the reader (Exercise 19.2.2). Suppose that A and B are recursive and A ≤^u_T B. Then, there is a procedure Φ, a constant α, and a function g so that for all k

(*) (∀z)(⟨z, k⟩ ∈ A iff Φ(B^{(g(k))}; ⟨z, k⟩) = 1, with the computation running in time ≤ g(k)|z|^α).

We claim that 0′ can compute a value that works in place of g(k) in the above; that is, for each k, 0′ can compute m = m(k) satisfying (*) with m in place of g(k). Call this (*)′. The reason is that the expression in the scope of the universal quantifier is recursive, and hence the whole expression is ≤_T 0′. This uses essentially the same reasoning as the proof of Theorem 19.5. For completeness, we remind the reader that for each pair (n, k), we can enumerate a partial recursive function ψ_{(n,k)} = φ_{h(n,k)}, whose index h(n, k) is given by the s-m-n theorem, with dom ψ_{(n,k)} equal to N if there is some z with ⟨z, k⟩ ∉ A but Φ(B^{(n)}; ⟨z, k⟩) = 1, or ⟨z, k⟩ ∈ A and Φ(B^{(n)}; ⟨z, k⟩) = 0, or Φ(B^{(n)}; ⟨z, k⟩) not running in time n|z|^α; and with ψ_{(n,k)} the empty function otherwise. Now, 0′ can decide whether ⟨h(n, k), h(n, k)⟩ ∈ 0′ and, hence, can compute the least n such that dom ψ_{(n,k)} = ∅. For such an n, Φ(B^{(n)}) computes A on row k, in running time n|z|^α.

Now, it is clear that we can approximate such an n(k) via a function whose values only increase and, hence, we can take an m to perform the role of g that is recursively enumerable.

Using Lemma 19.20, it is not difficult to prove that the three definitions above define differing reducibilities.


Theorem 19.21 ([168]) (i) FPT(≤^u_T) ⊂ FPT(≤^n_T), even for recursive sets.

(ii) FPT(≤^s_T) ⊂ FPT(≤^u_T).

Proof. (ii) We prove this by a simple diagonalization argument. Let {(Φ_e, φ_e) : e ∈ N} denote an enumeration of all pairs consisting of a procedure and a partial recursive function. We shall satisfy the requirements, for e, n ∈ N:

R_{⟨e,n⟩} : Either φ_e is not total, or for some k, x, Φ_e(∅; ⟨x, k⟩) ≠ A(⟨x, k⟩), or Φ_e(∅; ⟨x, k⟩) does not run in time φ_e(k)|x|^n.

Additionally, we must ensure that A ∈ FPT(≤^u_T). We devote row ⟨e, n⟩ of A to meeting R_{⟨e,n⟩}. We ensure that at most one element of row ⟨e, n⟩ enters A, and if z enters row ⟨e, n⟩, then z is of the form ⟨1^m, ⟨e, n⟩⟩ for some m.

We shall build A in stages. At stage s, we decide the fate of ⟨1^s, k⟩ for all k ∈ N.

At stage s, the construction runs as follows: For each ⟨e, n⟩ ≤ s, if R_{⟨e,n⟩} is not yet declared satisfied, compute s steps in the computation of φ_e(⟨e, n⟩). (Call this φ_{e,s}(⟨e, n⟩).) If φ_{e,s}(⟨e, n⟩)↑, do nothing for ⟨e, n⟩ at this stage, keeping m(⟨e, n⟩, s) = m(⟨e, n⟩, s − 1). If φ_{e,s}(⟨e, n⟩)↓, declare R_{⟨e,n⟩} as satisfied and perform the following diagonalization for ⟨e, n⟩. Run Φ_e(∅; ⟨1^s, ⟨e, n⟩⟩) for φ_e(⟨e, n⟩)s^n many steps. If the procedure does not halt in this many steps, we need do nothing, since the running time is wrong. If Φ_e(∅; ⟨1^s, ⟨e, n⟩⟩)↓ in φ_e(⟨e, n⟩)s^n many steps, set

A(⟨1^s, ⟨e, n⟩⟩) = 1 − Φ_e(∅; ⟨1^s, ⟨e, n⟩⟩).

In either case, set m(⟨e, n⟩, s) = 2φ_e(⟨e, n⟩)(s + 1). It is clear that the diagonalization succeeds, ensuring that A ≠ Φ_e(∅). Note that A ∈ FPT(≤^u_T) since, for any k, ⟨z, k⟩ ∈ A iff z is of the form 1^t and ⟨1^t, k⟩ is put into A at stage t. This can be decided in time m(k, t)·t, and since m(k, t) ≠ m(k, t + 1) for at most one t, we see that A ∈ FPT(≤^u_T).

(i) Again, we use a simple diagonalization argument. Now, we need a family of reductions {Δ_k : k ∈ N} with Δ_k(∅) computing A_k. By the Limit Lemma (together with Lemma 19.20(i)), to keep A out of FPT(≤^u_T) it suffices to meet the following requirements:

R_{⟨e,n,c⟩} : Either lim_s φ_e(⟨e, n, c⟩, s) =def φ_e(⟨e, n, c⟩) fails to exist, or Φ_e(∅) ≠ A on row ⟨e, n, c⟩, or the computation does not run in time φ_e(⟨e, n, c⟩)|z|^c.

Here, we are working with pairs consisting of a procedure and a binary recursive function. We denote by φ(p) the value of lim_s φ(p, s) if it exists. We shall additionally, and without loss of generality, assume that φ_e is nondecreasing in both variables where defined. (This follows by the proof of Lemma 19.20.) In the construction to follow, values j will be declared "used" for ⟨e, n, c⟩ as the construction proceeds.

At stage s, if R_{⟨e,n,c⟩} is not as yet declared satisfied and ⟨e, n, c⟩ ≤ s, find the least unused j ≤ s, if any, such that j = φ_{e,s}(⟨e, n, c⟩, t)↓ for some t ≤ s. If either φ_{e,s}(⟨e, n, c⟩, t)↑ for all t ≤ s or there is no unused j, do nothing. If j and, hence, t exist, declare j as used. Now, compute js^c steps in the computation of Φ_e(∅; ⟨1^s, ⟨e, n, c⟩⟩). If this does not halt, do nothing else. If Φ_e(∅; ⟨1^s, ⟨e, n, c⟩⟩)

halts in ≤ js^c steps, win by setting

A(⟨1^s, ⟨e, n, c⟩⟩) = 1 − Φ_e(∅; ⟨1^s, ⟨e, n, c⟩⟩),

and declare R_{⟨e,n,c⟩} satisfied.

It is clear that A is recursive. Now, Δ_k is one of the following two reductions: either Δ_k is the trivial reduction which, on input ⟨x, k⟩, says that ⟨x, k⟩ ∉ A; or Δ_k is the reduction which has built into it the stage s at which R_k was satisfied, and which declares ⟨y, k⟩ ∈ A iff y is of the form 1^s and 1 − Φ_e(∅; ⟨1^s, k⟩) = 1 (where k = ⟨e, n, c⟩), and ⟨y, k⟩ ∉ A otherwise. Note that the algorithm runs in constant time, so that A ∈ FPT(≤^n_T). □

We can immediately see a difference between the arithmetical complexities of the parameterized and classical reducibilities. Classical polynomial-time T- and m-reducibilities are both Σ₂ since, for instance,

L ≤^p_T L′ iff ∃e∀x[x ∈ L iff Γ_e(L′; x) = 1],

where Γ_e denotes the e-th polynomial-time procedure. In our situation, we have the following.

Theorem 19.22 (Cholak and Downey [126]) Let A be a computable set. Then, for q ∈ {m, T}: (i) {e : W_e ≤^s_q A} is a Σ₃ set. (ii) {e : W_e ≤^u_q A} is a Σ₄ set. (iii) {e : W_e computable and W_e ≤^n_q A} is a Σ₄ set.

Proof. We prove (ii). The others are left to the reader (Exercise 19.2.5). By the definition, we see that

W ≤^u_T A iff ∃⟨e, n⟩∀k∃u∀z(Φ_e(A^{(u)}; ⟨z, k⟩) = W(⟨z, k⟩) in time u|z|^n).

This is clearly Σ₄. □

We can improve on this result by showing that the relevant index sets are as complicated as they can be.

Theorem 19.23 Suppose that A is computable. Let q ∈ {m, T}. Then the index sets (i) {e : W_e ≤^s_q A} and {e : W_e ≡^s_q A} are Σ₃ complete. (ii) {e : W_e ≤^u_q A}, {e : W_e ≡^u_q A}, {e : W_e computable and W_e ≤^n_q A}, and {e : W_e computable and W_e ≡^n_q A} are Σ₄ complete.

Proof. (i) We prove that {e : W_e ≤^s_T A} is Σ₃ complete, the remainder of (i) being left to the reader (Exercise 19.2.6). For ease of notation, we shall first prove the special case A = ∅, that is, that {e : W_e ∈ FPT(≤^s_T)} is Σ₃ complete. We will then point out the easy modifications needed to obtain the result for general computable A. Thus, let B be a Σ₃ set. By definition, there is a computable relation Q such that for all e

e ∈ B iff ∃x∀y∃z Q(x, y, z, e).


We will build a collection C of sets V_e = W_{f(e)}, whose indices are given by the s-m-n theorem, such that we meet the requirements

R_e : V_e ≤^s_T ∅ iff e ∈ B.

Actually, the fundamental strategy is to try to ensure that V_e ≰^s_T ∅, and only to fail to achieve this goal if e ∈ B.

Now, V_e ≤^s_T ∅ means that there should be some reduction triple consisting of a reduction Φ_m, a total computable function φ_m, and a positive integer n, such that for all ⟨w, k⟩, Φ_m(∅; ⟨w, k⟩) = V_e(⟨w, k⟩), and the running time of the computation is bounded by φ_m(k)|w|^n. Hence, provided that e ∉ B, we will meet all of the "subrequirements" below.

R_{e,m,n} : If e ∉ B, then either φ_m is not total, or for some k, w, Φ_m(∅; ⟨w, k⟩) ≠ V_e(⟨w, k⟩), or Φ_m(∅; ⟨w, k⟩) does not run in time ≤ φ_m(k)|w|^n.

The reader should compare the requirements above with the R_{⟨e,n⟩} of Theorem 19.21. We will suppose that each requirement occurs infinitely often in the listing. It is easiest to conceive of the construction as follows. For each potential "witness" x to the outer existential quantifier used to show that e ∈ B, we will have a Control Device C(x). The idea of the control device is that it tells us when to attempt to meet the subrequirements.

The fundamental plan is to pursue an elaboration of the strategy of Theorem 19.21. For the present situation, our strategy is the following:

We build V_e in stages. At stage s, we decide the fate of ⟨1^s, k⟩ for all k ∈ N. (If k > s, then ⟨1^s, k⟩ ∉ V_e by fiat.) The reader should recall that in Theorem 19.21 we devoted the j-th slice of A to meeting R_j. Here, V_e has the role of A, where j will now be some ⟨e, m, n⟩. In the present proof, this assignment will be replaced by a more flexible arrangement in which slice(j, s) denotes the slice currently devoted to satisfying R_j. Modulo the control device (to be described), which determines when we may attack a requirement, at stage s the construction of V_e runs as follows:

For each ⟨e, m, n⟩ ≤ s, if R_{e,m,n} is not yet declared satisfied, compute s steps in the computation of φ_m(slice(⟨e, m, n⟩, s)). (Call this φ_{m,s}(slice(⟨e, m, n⟩, s)).) If φ_{m,s}(slice(⟨e, m, n⟩, s))↑, do nothing for ⟨e, m, n⟩ at this stage. If φ_{m,s}(slice(⟨e, m, n⟩, s))↓, declare R_{e,m,n} as satisfied and perform the following diagonalization for ⟨e, m, n⟩. Run Φ_m(∅; ⟨1^s, slice(⟨e, m, n⟩, s)⟩) for φ_m(slice(⟨e, m, n⟩, s))s^n many steps. If this does not halt in this many steps, we need do nothing, since the running time is wrong. If Φ_m(∅; ⟨1^s, slice(⟨e, m, n⟩, s)⟩)↓ in φ_m(slice(⟨e, m, n⟩, s))s^n or fewer steps, set

V_e(⟨1^s, slice(⟨e, m, n⟩, s)⟩) = 1 − Φ_m(∅; ⟨1^s, slice(⟨e, m, n⟩, s)⟩).

The enumeration of B controls the above in the following way. We say that x is s-confirmed (for e) at stage t if

∀y ≤ s ∃z ≤ t Q(e, x, y, z).

We say that C(x) asserts control of R_{e,m,n} at stage t if (i) x is s-confirmed at stage t, (ii) x was not s-confirmed at stage t − 1, and (iii) x < ⟨e, m, n⟩ < s.

We remark that the idea here is that if x is the least correct witness to e being in B, then the inner Π₂ part of the definition of B (namely, ∀y∃z Q(x, y, z, e)) will "appear correct" (i.e., be confirmed) infinitely often. The principal idea is that such confirmation will allow us to move the slices we are dynamically assigning to the satisfaction of the requirements into ever higher slices. If e is really in B, then, for all but a finite number of requirements, we will kick the associated witness rows off to infinity. On the other hand, if e ∉ B, then eventually each slice, for example slice(j, s), comes to a resting place, and this is where we will get to meet R_j.

More formally, we have the following. The Full Construction: Incorporation of x-Control. Now, we modify the construction to incorporate C(x). At each stage t, for each x, e ≤ t, we first see whether x is s-confirmed for e for some s ≤ t. If this is the case, we say that C(x) asserts control of the various R_{e,m,n} with x < ⟨e, m, n⟩ < s. In particular, for each such ⟨e, m, n⟩, we will reset slice(⟨e, m, n⟩, t + 1) to be large and fresh. Specifically, for all ⟨m′, n′⟩ ≥ ⟨m, n⟩, set slice(⟨e, m′, n′⟩, t + 1) = slice(⟨e, m′, n′⟩, t) + t + 1. Also, we initialize R_{e,m′,n′}.

End of Construction.

Now, the point of all of the above is the following. One can easily establish by induction that: (i) e ∈ B implies that, for all but finitely many m, n, R_{e,m,n} is initialized infinitely often and, hence, slice(⟨e, m, n⟩, s) → ∞; (ii) e ∉ B implies that, for all m, n, lim_s slice(⟨e, m, n⟩, s) exists.

To see that (i) holds, note that if e ∈ B, there is some x with ∀y∃z Q(e, x, y, z). Now, such an x will be confirmed for e infinitely often. Hence, it will assert control of any R_{e,m,n} with ⟨m, n⟩ > x infinitely often, driving slice(⟨e, m, n⟩, s) to ∞. On the other hand, if e ∉ B, then for each x there is some y such that for all z, it is never the case that Q(e, x, y, z). It follows that x will only be confirmed for e until it gets stuck on this y. Thereafter, it can no longer initialize any R_{e,m,n}. Since there are only j numbers x < j, and only x < ⟨m, n⟩ can initialize R_{e,m,n}, it follows that there is a stage t_{m,n} such that for all t ≥ t_{m,n}, R_{e,m,n} is not initialized at stage t. Since slice(⟨e, m, n⟩, t) is only reset when R_{e,m,n} is initialized, it follows that slice(⟨e, m, n⟩, t) comes to a limit.
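The confirmation/slice dynamics just verified are mechanical enough to sketch. In the following toy Python fragment (hypothetical stand-ins: a decidable relation Q, with e fixed and suppressed), a true witness x is confirmed at ever higher levels s and bumps the slice of every requirement j with x < j < s, while a false witness gets stuck on some y and stops interfering.

def Q(x, y, z):                    # toy relation: witness x = 2 is "true"
    return x == 2 and z >= y

def confirmed(x, s, t):            # x is s-confirmed at stage t
    return all(any(Q(x, y, z) for z in range(t + 1)) for y in range(s + 1))

slice_of = {j: j for j in range(8)}
level = {x: 0 for x in range(4)}   # highest s for which x was confirmed

for t in range(12):
    for x in range(4):
        s = level[x]
        if confirmed(x, s, t):     # newly s-confirmed: C(x) asserts control
            for j in slice_of:
                if x < j < s:
                    slice_of[j] += t + 1          # reset: large and fresh
            level[x] = s + 1       # next confirmation must reach further

print(slice_of)   # slices j > 2 keep moving; slices j <= 2 settle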

Finally, we see that we meet all of the requirements. If e ∉ B, then a simple induction shows that, for j = ⟨e, m, n⟩, we will meet R_j in row slice(j) = lim_s slice(j, s).

If e ∈ B, then we argue as follows. Let x be least such that C(x) asserts control of some R_{e,m,n} infinitely often. Let s₀ be a stage such that for all t > s₀ and all y < x, C(y) does not assert control of any R_{e,m,n} at stage t. Thus, for all ⟨y, d⟩ < x, slice(⟨e, y, d⟩, s₀) = slice(⟨e, y, d⟩). For all j, we know that R_j acts on a row at most once. Let s₁ > s₀ be a stage such that for all ⟨m′, n′⟩ < x and all t > s₁, R_{e,m′,n′} does not act at stage t. Then, we know that for all j < slice(x − 1) and all p,

V_{e,s₁}(⟨p, j⟩) = V_e(⟨p, j⟩).


(Indeed, we remark that for all p with |p| > s₁, V_e(p) = 0.) We now give an FPT(≤^s_T) decision procedure for determining whether u = ⟨v, k⟩ ∈ V_e.

If k < slice(x − 1), go to stage s₁ and see if u ∈ V_{e,s₁}. If k ≥ slice(x − 1) + 1, go to the stage s = s(k) > s₁ at which C(x) has asserted control for e for the k-th time. Then, for all t > s and all ⟨m, n⟩ ≥ x, slice(⟨e, m, n⟩, t) > k. Hence, for all t > s and for all ⟨e, m, n⟩, slice(⟨e, m, n⟩, t) ≠ k. It follows that u ∈ V_e iff u ∈ V_{e,s}. Notice that the computation of the stage s(k) depends on k alone; hence, the procedure is independent of v and runs in constant time on each slice. This concludes our proof that {e : W_e ∈ FPT(≤^s_T)} is Σ₃ complete.

To modify the construction for the more general case, proving that {e : W_e ≤^s_T A} is also Σ₃ complete, the control device is the same, but we must modify the diagonalization. Basically, one replaces the oracle ∅ by A^{(φ_{m,s}(slice(⟨e,m,n⟩,s)))}. Since A is computable, the construction will either make V_e ≤^s_T A or produce a language V_e ≰^s_T A, as required.

(ii) The other result we will prove is that {e : W_e ≤^u_T A} is Σ₄ complete. The remaining completeness results are totally analogous and are left to the reader (Exercise 19.2.7). Again, we take A = ∅. This proof is considerably more complex than the previous one, and we need to take some care. Actually, it will need the infinite injury method, and it uses a priority tree. We use the following representation theorem for Σ₄ sets:

Lemma 19.24 (Yates [471]) The index set Comp = {e : 0′ ≤_T W_e} of T-complete computably enumerable sets is Σ₄ complete.

For a proof of Lemma 19.24, see Soare [431], XII §2. In the construction to follow, we will need to approximate Comp as our control device. Thus, we will need to discover which W_e are T-complete. To do this, we will need to guess the index of a procedure Φ_i with Φ_i(W_e) = K, where K = 0′. To this end, let

ℓ(e, i, s) = max{x : ∀y < x(Φ_{i,s}(W_{e,s}; y) = K_s(y))}.

Now, ℓ(e, i, s) can exhibit the following behaviour: We can have lim sup_s ℓ(e, i, s) < ∞, or we can have lim sup_s ℓ(e, i, s) = ∞. The latter can either mean that lim inf_s ℓ(e, i, s) = ∞ (so that Φ_i(W_e) = K), or that for some x, lim inf_s ℓ(e, i, s) = x, and hence Φ_i(W_e; x)↑. Here, we will use the so-called "hat" convention: We assume that, for a computably enumerable set M and procedure Φ_i, if Φ_{i,s}(M_s; z)↓ but some z′ ≤ u(Φ_{i,s}(M_s; z)) enters M_{s+1} − M_s, then Φ_{i,s+1}(M_{s+1}; z)↑. The hat convention ensures that if Φ_i(M; z)↑, then ∃^∞ s(Φ_{i,s}(M_s; z″)↑) for all z″ ≥ z. The trick is due to Lachlan. (See Soare [431], Ch. VIII.) On the priority tree, we will need to guess whether ℓ(e, i, s) → ∞.

This will be measured at nodes τ, which we refer to as top nodes. Such τ nodes have outcomes (i, ∞) and (i, f), as in the Minimal Pair Theorem, Theorem 19.8. The trouble is that we also need to differentiate between infinite and finite lim inf behaviour. This is very difficult to achieve at a single node and, in fact, we will decompose the behaviour into infinitely many guesses for the lim inf below the (i, ∞) outcome of the τ node. At such a node trying to figure out such behaviour,


FIGURE 19.3. The Priority Tree for Theorem 19.23 (top nodes τ with outcomes (i, ∞) <_L (i, f); below each (i, ∞) outcome, σ nodes with outcomes (i, x, u) <_L (i, x, ∞))

which we refer to as a σ node, we will be considering e, i, and x. We will have two outcomes, (i, x, ∞) and (i, x, u), with (i, x, u) to the left of (i, x, ∞). Thus, we have the priority tree given in Figure 19.3.

The outcome (i, x, ∞) is meant to represent the fact that ∃s∀t ≥ s(ℓ(e, i, t) > x), whereas the outcome (i, x, u) represents the fact that infinitely often (i, x, ∞) looks correct, and yet later ℓ(e, i, s) ≤ x. (The "u" means "unbounded" here.) Thus, this is the outcome that lim sup_s ℓ(e, i, s) = ∞ and yet (if x is least) lim inf_s ℓ(e, i, s) = x.

In the same way that we used C to control the strategies above, here we will use the priority tree PT to control our strategies. Note that we will build one version of PT for each e below. (The formal definition can be found in Definition 19.25, following some further motivational comments.)

Before we explain exactly how this control is achieved, let us turn to the actual requirements we will meet. For each e, we will again build an auxiliary computable set V_e to try to meet the requirements below.

R_e : V_e ∈ FPT(≤^u_T) iff e ∈ Comp.

Since the V_e are all built separately, we will drop the "e" when the context is clear. Actually, as with the previous construction, we will meet the following requirements.

R : Either e ∈ Comp and V ∈ FPT(≤^u_T) via some witness i (here, of course, by a witness i we mean that Φ_i(W_e) = K), or ∀m, n R_{⟨e,m,n⟩}, where


R_{⟨e,m,n⟩} : Either there exist m′, n′ such that lim_s φ_m(⟨e, m′, n′⟩, s) =def φ_m(⟨e, m′, n′⟩) fails to exist, or there exist p, w with Φ_m(∅; ⟨p, w⟩) ≠ V(⟨p, w⟩), or the computation does not run in time ≤ φ_m(w)|p|^n.

Notice that, in the above, we have replaced the unary partial computable function φ_m(·) by a binary partial computable function φ_m(·, ·), since in the uniform case we will need to guess the use and constant, and this guessing will filter through the Limit Lemma. In fact, we will assume that such φ_m are monotone in both variables. Again, we are using the requirements of Theorem 19.21(ii) in modified form. We attempt to meet the R_{⟨e,m,n⟩} as we did there, but again modulo the external control, this time driven by the priority tree. The basic strategy used to meet the R_{⟨e,m,n⟩} is the following.

At each stage s, if R_{⟨e,m,n⟩} is not as yet declared satisfied and ⟨e, m, n⟩ ≤ s, find the least unused p ≤ s, if any, such that p = φ_{m,s}(⟨e, m, n⟩, t)↓ for some t ≤ s. If either φ_{m,s}(⟨e, m, n⟩, t)↑ for all t ≤ s or there is no unused p, do nothing. If p, and hence t, exist, declare p as used. Now, compute ps^n steps in the computation of Φ_m(∅; ⟨1^s, ⟨e, m, n⟩⟩). If this computation does not halt, do nothing else. If Φ_m(∅; ⟨1^s, ⟨e, m, n⟩⟩) halts in ≤ ps^n steps, win by setting

V_e(⟨1^s, ⟨e, m, n⟩⟩) = 1 − Φ_m(∅; ⟨1^s, ⟨e, m, n⟩⟩).

Now, again, we will need to modify the above so that a version R_σ of R_{⟨e,m,n⟩} with guess σ works on a slice slice(σ, s) instead of the fixed slice ⟨e, m, n⟩.

Again, we will need to argue that if there is no witness i to e ∈ Comp, and hence e ∉ Comp, then for all σ on the true path TP, lim_s slice(σ, s) = slice(σ) exists. Additionally, if e ∉ Comp, we will need to argue that for all ⟨e, m, n⟩ there is some σ ⊂ TP devoted to solving R_{⟨e,m,n⟩}. This is done as follows.

Definition 19.25 (The Priority Tree and Attachments) Step 1. We formally define the priority tree and the attachments as follows. If α is on PT, then α is of the form ν⌢(i, x, ∞), ν⌢(i, x, u), λ, ν⌢(i, f), or ν⌢(i, ∞). If α is of the form λ, ν⌢(i, f), or ν⌢(i, x, u), we say that α is a τ node. If α is of the form ν⌢(i, x, ∞) or ν⌢(i, ∞), we say that α is a σ node. Now, we construct PT by induction on the length of nodes.

Case 1. If the node is a τ node, then it will form the "top" of an i-tree. Let i(τ) be the least i not yet assigned to any τ′ ⊂ τ, and assign i(τ) to τ. (Hence, i(τ) = i + 1 in all cases except τ = λ.) The outcomes of τ will be (i(τ), ∞) and (i(τ), f), with (i(τ), ∞) <_L (i(τ), f). Put τ⌢O on PT for each outcome O of τ.

Case 2. If the node is a σ node, so that it will be devoted to some x for some i, find the longest τ node τ ⊆ σ and define i(σ) = i(τ) and τ(σ) = τ. (τ is called σ's top. Note that i(σ) = i in all of the cases above.) It will be the case that τ⌢(i, ∞) ⊆ σ, and each γ with τ⌢(i, ∞) ⊆ γ ⊂ σ has γ⌢(i, x′, ∞) ⊆ σ for some x′. Let

x(σ) = max({0} ∪ {x′ + 1 : τ⌢(i, ∞) ⊆ γ⌢(i, x′, ∞) ⊆ σ}).

Let σ have outcomes (i, x, u) <_L (i, x, ∞), where x = x(σ). Put σ⌢O on PT for each outcome O of σ.

Step 2. Now, assign, by induction on ⟨e, m, n⟩, versions of R_{⟨e,m,n⟩} to the nodes of the form γ = σ⌢(i, x, u) and γ = τ⌢(i, f) on PT. Do this in the obvious way: For each such node γ, find the least ⟨e, m, n⟩ not attached to any μ ⊂ γ, and attach R_{⟨e,m,n⟩} to γ.

As in Theorem 19.8, we will need the notion of an α-stage.

Definition 19.26 (a) We define the notions α-stage, mℓ(α, s), and α-expansionary by induction on |α|. (i) Every stage s is a λ-stage. (ii) Case 1. Suppose that s is a β-stage, with β a top node devoted to solving the problem of whether ℓ(β, s) = ℓ(i(β), s) → ∞. Define

mℓ(β, s) = max{0, ℓ(β, t) : t is a β-stage < s}.

We say that s is β-expansionary if ℓ(β, s) > mℓ(β, s), and we then declare s to be a β⌢∞-stage. If ℓ(β, s) ≤ mℓ(β, s), declare s to be a β⌢f-stage.

Case 2. Suppose that β is a σ node devoted to (i, x). If there has been a previous β-stage, let s′ denote the largest β-stage < s. If ℓ(τ(β), s′) > x and there has been a stage m with s′ < m < s with ℓ(τ(β), m) ≤ x, declare s to be a β⌢(i, x, u)-stage. In any other case, declare s to be a β⌢(i, x, ∞)-stage.

(b) As in Theorem 19.8, we define TP_s, the apparent true path at stage s, to be the unique α of length s such that s is an α-stage.

Definition 19.27 Suppose that R_σ is a version of R_{⟨e,m,n⟩} attached to σ. We say that R_σ requires attention at stage s if s is a σ-stage, R_σ is not yet declared satisfied, and there is some least unused p ≤ s such that p = φ_{m,s}(slice(σ, s), t)↓ for some t ≤ s.

The Construction. Having gone to all the effort above, the construction actually becomes rather easy.

Stage 0. Define slice(σ, 0) = σ for all σ on PT with R_σ attached.

Stage s + 1. Compute TP_{s+1}. For each γ with TP_{s+1} <_L γ, initialize γ and, in particular, reset slice(γ, s + 1) to be new and large (> s). For each σ ⊆ TP_{s+1}, if R_σ requires attention via p, declare p as used. Now, compute ps^n steps in the computation of Φ_m(∅; ⟨1^s, slice(σ, s)⟩). If this does not halt, do nothing else. If Φ_m(∅; ⟨1^s, slice(σ, s)⟩) halts in ≤ ps^n steps, win by setting

V(⟨1^s, slice(σ, s)⟩) = 1 − Φ_m(∅; ⟨1^s, slice(σ, s)⟩),

and declare R_σ as satisfied. End of Construction.


The verification is straightforward, modulo all the definitions. Let TP be the true path. Notice that the construction ensures that R_σ can only receive attention at σ-stages. (See Definition 19.27.) The initialization of the nodes to the right of TP_s means that the following are easily seen by induction: (i) If TP <_L γ, then slice(γ, s) → ∞.

(ii) If γ <_L TP and γ ⊄ TP, then γ is visited only finitely often, and hence there is some stage s₀ beyond which R_γ will never receive attention. (iii) If TP ≮_L γ, then lim_s slice(γ, s) = slice(γ) exists.

Now, if e ∈ Comp, then we know that there is some (least) i such that i is a witness to the fact that e ∈ Comp, and hence lim inf_s ℓ(e, i, s) = ∞. Let τ be the top node on TP with i(τ) = i. Clearly, there are infinitely many τ⌢(i, ∞)-stages. Now, for each x, we see that the outcome (i, x, ∞) is on TP. We show that V ∈ FPT(≤^u_T) as follows. Let s₀ be a stage beyond which we are never left of τ⌢(i, ∞). Since we ensure that at most finitely many elements ever get into any slice, we can, by fiat, write into a table all of the slices associated with nodes left of or above τ⌢(i, ∞). We consider slices V_y for y beyond such slices. Let x > y and s ≥ s₀. We know that for all nodes γ with γ ≮_L τ⌢(i, 0, ∞)⌢(i, 1, ∞)⌢···⌢(i, x, ∞), either γ is above or left of τ, so that we can suppose that all elements of slice(γ, s) ever to enter V have already done so by stage s₀, or we know that slice(γ, s) > x > y.

The procedure to decide whether ⟨v, y⟩ ∈ V is this: ⟨v, y⟩ ∈ V iff ⟨v, y⟩ ∈ V by stage t, where t = t(y) is a constant computed using a 0′-oracle. We compute t > s₀ by means of the TP_s. Each time we play τ⌢(i, 0, ∞)⌢(i, 1, ∞)⌢···⌢(i, x, ∞) immediately after playing τ⌢(i, 0, ∞)⌢(i, 1, ∞)⌢···⌢(i, x − 1, ∞)⌢(i, x, u), or simply play τ⌢(i, 0, ∞)⌢(i, 1, ∞)⌢···⌢(i, z, u) for any z ≤ x, we reset t(y, s) to be the current stage number. Because of the choice of i, we know that lim_s t(y, s) exists, and hence we have t(y) ≤_T 0′. Finally, note that the choice of t means that the construction is respected, in the sense that we have enough time to emulate that stage of the construction, and hence the answers must be in agreement with the construction.

Finally, if e ∉ Comp, then, by the way we attach the R_{⟨e,m,n⟩} to nodes on TP (in order), and since there are either infinitely many τ⌢(i, f) or infinitely many γ⌢(i, x, u) on TP, each R_{⟨e,m,n⟩} has a version on TP, and that version must be met, since its slice(σ, s) must come to a limit.

To complete the proof, note that one can replace the oracle ∅ by A and modify the diagonalization accordingly to get the relativized version, as in the ≤^s_T case. □

The structure of (REC, ≤), the recursive languages under a parameterized reduction ≤, is very interesting and fairly open at this stage. Some further results can be found in the exercises below and in Refs. [118], [126], and [168]. Much work remains. We give only one further result: We look at whether the analog of Ladner's Density Theorem (Theorem 19.14) holds. We remark that, at present, we do not know if the result holds for any of the uniform or nonuniform reducibilities. We can solve this problem for the most commonly met reducibility: strongly uniform reducibility.

Theorem 19.28 ([168]) Let q ∈ {m, T}. If A and B are recursive with A <^s_q B, then there exists a set C with A <^s_q A ⊕ C <^s_q B.


Proof. We begin by briefly recalling the construction of Theorem 19.14, which worked as follows. We were given recursive sets A < B (working with ≤^p_m, say). By the Slow Enumeration Theorem, we assumed that A and B were given as the ranges of p-time functions. We built C to satisfy the requirements

R_{2e} : Γ_e(A ⊕ C) ≠ B,

R_{2e+1} : Γ_e(A) ≠ C,

additionally ensuring that C ≤^p_m B. For the sake of the R_j, we defined a polynomial-time relation D(n). Then, we declared that x ∈ C iff D(|x|) = 0 and x ∈ B, making C ≤^p_m B.

As the reader will recall, the key idea used to meet the R_j was to meet them in order by "delayed" diagonalization. So, we began with R_0. We set, at each stage s, D(s) = 1 until a stage t was found at which, for some n ≤ t,

Γ_{0,t}(A_t ⊕ C_t; z_n) ≠ B_t(z_n)

via certified computations. Then, at stage t, we saw that we had diagonalized R_0 at z_n, this being found by looking back for a certified disagreement. The idea was then to move on to R_1 and then to R_2, etc. For R_1, we set D(t + 1) = 0, causing C to look like B locally. So, we kept D(u) equal to 0 for u > t until a stage v was found with some m ≤ v and

Γ_{0,v}(A_v; z_m) ≠ C_v(z_m)

via certified computations. We then moved to R_2, setting D(v + 1) to be 1 again. Thus, the set C so constructed looked like B with "holes" in it.

Keeping the above ideas in mind, we turn to the result at hand. Now, we are given A < B with ≤ either ≤^s_T or ≤^s_m. Again, we must construct C, now meeting the following requirements:

R_{2⟨e,n⟩} : Either φ_e is not total, or (∃k)(B_k ≠ Φ_e(A ⊕ C^{(φ_e(k))})), or (∃x, k)(Φ_e(A ⊕ C^{(φ_e(k))}; ⟨x, k⟩) does not run in time φ_e(k)|x|^n).

R_{2⟨e,n⟩+1} : Either φ_e is not total, or (∃k)(C_k ≠ Φ_e(A^{(φ_e(k))})), or (∃x, k)(Φ_e(A^{(φ_e(k))}; ⟨x, k⟩) does not run in time φ_e(k)|x|^n).

To aid the discussion, we will use several conventions. First, if φ_{e,s}(k)↓, then the computation Φ_e(E^{(φ_e(k))}; ⟨x, k⟩) cannot call any y of the form ⟨z, k′⟩ for k′ > φ_e(k). Also, since we get a win for free if φ_{e,s}(k)↓ and the running time of Φ_e(E^{(φ_e(k))}; ⟨x, k⟩) exceeds φ_e(k)|x|^n, we shall assume that, in the above, the third option never pertains to an R_j and concentrate on the first two. This is because, if the running time exceeds the bounds during the construction, we can cancel the relevant requirement. The argument to follow is a finite injury priority argument with the Ladner strategy embedded.


Without loss of generality, we can take φ_e to be strictly increasing. Again, there will be long intervals on which C(⟨x, k⟩) equals 0 and long intervals where it looks like B, for "many" k. We have problems since, for instance, we cannot decide whether φ_e is total. We first focus on the satisfaction of a single R_0 = R_{2⟨e,n⟩}. We then describe the basic module for an odd-type requirement and, finally, describe the coherence mechanism whereby we combine strategies.

The Basic R_0-Module

To meet R_0 above, we perform the following cycle. We have a parameter k(0, s) that is nondecreasing in s and such that lim_s k(0, s) = k(0) exists. This is meant to be the number of "rows" devoted to R_0. It remains constant until we change it.

1. (Initialization) Pick k(0, 0) = 1.

2. Wait until a stage s occurs with one of the following holding:

2(a). (Win) "Looking back," we see a disagreement; that is, as with the Ladner argument, we see an n < s with z_n ∈ {⟨x, j⟩ : j < k(0, s)} and

Φ_{e,s}(A ⊕ C^{(φ_e(k(0,s)−1))}; z_n) ≠ B(z_n)

via certified computations, or

2(b). Not 2(a), and φ_{e,s}(k(0, s))↓.

Comment. If no such stage s occurs, then φ_e(k(0, s))↑ and, hence, φ_e is not total. In this case, we call k(0, s) a witness to the nontotality of φ_e.

If 2(a) pertains, we declare R_0 to be satisfied (forever) and end its effect (forever). If 2(b) pertains, then we perform the following action.

3. R_0 asserts control of C^{(φ_e(k(0,s)))}; that is, R_0 asks that, for all t ≥ s until the cycle next acts, we promise to set C^{(φ_e(k(0,s)))}(y) = 0 for all y with |y| = t and y ∈ (Σ*)^{(≤φ_e(k(0,s)))}. This can be achieved via a restraint r(n, k).

4. Reset k(0, s + 1) = k(0, s) + 1 and go to 2.
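As a control-flow summary, here is a toy Python sketch of this cycle (hypothetical stand-ins: phi(k, s) returns a value once φ_e(k) has converged by stage s, else None, and disagreement(s, k) plays the certified "looking back" test of 2(a)).

def run_R0(phi, disagreement, stages=30):
    k = 1                                        # k(0, 0) = 1
    controlled = []                              # rows R_0 has frozen to 0
    for s in range(stages):
        if disagreement(s, k):                   # 2(a): a certified win
            return ('satisfied', s, controlled)
        if phi(k, s) is not None:                # 2(b): phi_e(k(0,s)) halts
            controlled.append(phi(k, s))         # assert control up to phi(k)
            k += 1                               # step 4: enlarge k, recycle
    return ('waiting on phi_e(%d)' % k, stages, controlled)

# outcome (0, oo): phi_e is partial, so the cycle stalls at some k forever
print(run_R0(lambda k, s: 2 * k if (k < 3 and s >= 4 * k) else None,
             lambda s, k: False))
# outcome (0, f): a disagreement shows up and R_0 is met with finite effect
print(run_R0(lambda k, s: 2 * k if s >= 4 * k else None,
             lambda s, k: s >= 9))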

The Outcomes of the Basic R_0 Module

We claim that 2(b) cannot occur infinitely often and, hence, that lim_s k(0, s) = k(0) exists. Note that we only reset k(0, s) when 2(b) pertains (steps 3 and 4). So, suppose k(0, s) → ∞ and, hence, φ_e(k(0, s)) → ∞. Then, for each q and almost all y, we have C(⟨y, q⟩) = 0.

We write A =* B to denote that the symmetric difference of A and B is finite. So, C_q =* ∅ for all q. Furthermore, for all q, we can compute a stage h(q) where

(∀t > h(q))(C_q(⟨y, q⟩) = 0 for all y with |y| > h(q)),

h(q) being the stage at which R_0 asserts control of row q. Finally, we know that for all k,

Φ_e((A ⊕ C)^{(φ_e(k))}) = B_k.

This allows us to get a reduction Δ(A) = B. For each input ⟨y, k⟩, Δ simply computes B(⟨y, k⟩) for all y with |y| ≤ h(k), and C(⟨z, k′⟩) for all k′, z with k′ ≤ φ_e(k) and |z| ≤ h(k). Then, Δ simulates Φ_e((A ⊕ C)^{(φ_e(k))}; ⟨y, k⟩) if |y| > h(k), with the exception that, if Φ_e calls some ⟨r, k′⟩ in C with |r| ≤ h(k) [and necessarily k′ ≤ φ_e(k)], then Δ uses the table of values for C to provide the answer.


Note that the computations of Δ(A; ⟨x, k⟩) and Φ_e((A ⊕ C)^{(φ_e(k))}; ⟨x, k⟩) must agree and, hence, Δ(A) = B, a contradiction. Thus, 2(b) can pertain only finitely often. It follows that there are two outcomes.

Outcome (0, f): 2(a) occurs at some stage t. Then, we win R_0 with finite effect. (Comment: Once R_0 is met in this way, say at stage t, we are completely free to do what we like with all y for which |y| > t without injuring R_0.)

Outcome (0, ∞): 2(a) never occurs. Then, φ_e is not total. Note that the effect of R_0 is, in this case, infinite, and for k = lim_s k(0, s) − 1 we will have

C^{(φ_e(k))} =* ∅

and, furthermore, there is a reduction Δ_0, with time bound φ_e(k)|x|^n, which correctly computes B on the rows below k(0) from A ⊕ C^{(φ_e(k))}. Note that, for the basic module, Δ_0 is simply Φ_e.

The Basic Module for R_1

This is essentially the same as for R_0, except that for R_1 we wish to set C(⟨x, k⟩) = B(⟨x, k⟩). Herein lies the basic conflict: An even-indexed requirement R_i asks that lots of rows look like ∅, and an odd-indexed R_j asks for them to look like B.

Combining Strategies

We cannot perform a delayed diagonalization as in the proof of Ladner's theorem, since we cannot know whether φ_e(k) is defined. The combination of strategies needs the priority method. Let us consider a module for R_1 that works in the outcomes of R_0. We cannot know whether this outcome is (0, f) or (0, ∞). Instead, we have a strategy based on a guess as to R_0's behavior. Basically, R_1 always believes that k(0, s) is k(0); that is, that the current value is the final one. Let e = e(0), n = n(0), f = e(1), and m = n(1).

While R_1 believes that φ_e(k(0, 0))↑, R_1 acts as if R_0 is not there. So, if k(0, 0) = k(0) and φ_e(k(0, 0))↑, then we win R_1 for the same reasons as we did R_0. On the other hand, if φ_e(k(0, 0))↓ at some least stage s, then R_0 will assert control of C^{(φ_e(k(0,0)))}. For the sake of R_1, we have probably been setting C(⟨x, 0⟩) = B(⟨x, 0⟩) for all x with |x| < s. Since R_0 has higher priority than R_1, R_1 must release its control of C_0 [and, indeed, of C_j for j ≤ φ_e(k(0))] until a stage, if any, occurs at which 2(a) pertains to R_0, so that R_0 is satisfied and releases control forever (or it becomes inactive because a time bound is exceeded). Note that if 2(a) pertains at t, then R_1 is free to reassert control of C_0 for all y of the form ⟨y, 0⟩ with |y| > t. Also, in this case, as R_1 is then the requirement of highest overall priority remaining, its control cannot be violated, and hence it will be met.

On the other hand, while R_0 can hope that 2(a) will pertain, R_0 may have outcome (0, ∞), in which case R_0 will never release control of C_0. The key idea at this point is that we begin anew with a version of R_1 believing that k(0, s + 1) = k(0); that is, that R_0 will never again act.

This version of R_1 can only work with the C_q for q > φ_e(k(0, s)) = φ_e(k(0, 0)). Some care is needed, since potentially we need all of B to meet R_1.


An elegant solution to this difficulty is to shift B into C above φ_e(k(0, s)). Thus, R_1 will ask that

C(⟨x, q⟩) = B(⟨x, q − φ_e(k(0, s)) − 1⟩)

for q > φ_e(k(0, s)). It does so until either k(0, t) is reset again, or 2(a) pertains, or the time bounds are exceeded. In the latter cases, it reverts to the (0, f)-strategy. In the first case, it begins anew on the q > φ_e(k(0, t)). Since this restart process only occurs finitely often, it follows that we eventually get a final version of R_1 whose actions will not be disturbed.

Thus, there is a final version of R_1 that is met as follows. As lim_s k(0, s) = k(0) exists, there is a value r and a stage s₀ so that, for q ≥ r and s > s₀, R_1 is not initialized at stage s and can assert control of C_q if it so desires. If R_0 has outcome (0, f), then r = 0; otherwise, r = φ_e(k(0) − 1) + 1. So, we know that if R_1 fails, then for all j there is a stage h(j) (computable from the parameters r and s₀) such that, for y with |y| > h(j),

C(⟨y, r + j⟩) = B(⟨y, j⟩)

and

Φ_f(A; ⟨y, r + j⟩) = C(⟨y, r + j⟩).

Thus, if R_1 fails, we can prove that there is a reduction Δ(A) = B with running time O(|z|^m) and computable constants. This is a contradiction.

The outcomes for R_1 are thus (1, ∞) and (1, f). In the former case, we know that for a finite number of rows and for almost all y, C(⟨y, r + j⟩) = B(⟨y, j⟩). But we also know that, for such rows, there is a reduction Δ_f such that

Δ_f(A; ⟨y, i⟩) = C(⟨y, i⟩) in time O(|y|^m), with computable constants.

We continue in the obvious way with the inductive strategies. Consider, for example, R_2. It is confronted with, at worst, a finite number of rows permanently controlled by R_0 and a finite number by R_1. However, in each case, we know that there is a reduction from a computable number of rows of A to these rows and, hence, a reduction

Ψ_2(A; ⟨y, i⟩) = C(⟨y, i⟩)

for all i cofinally under the control of either R_0 or R_1. Therefore, to argue that R_2 is met, we get to use Ψ_2 to help construct a reduction from A to B; that is, for R_i, let e = e(i) and n = n(i). Then, inductively, we have a reduction and constants p(2), m(2), and r(2) with

Ψ_2(A^{(m(2))}; ⟨x, i⟩) = C(⟨x, i⟩)

for all i ≤ p(2), running in time m(2)|x|^{r(2)}. Furthermore, we have a stage s₂ such that, for all k < 2, R_k ceases further activity by stage s₂.

Thereafter, R_2 is free to assert control over any row q of C for q > p(2). If we suppose that R_2 fails, then, for each such q, R_2 will eventually assert control of C_q at some stage h₂(q) to make C(⟨x, q⟩) = 0 for all x with |x| > h₂(q), and we have Φ_{e(2)}(A ⊕ C) = B.

Now, to get a reduction Δ from A to B, we argue as for R_0, except that now, if Φ_{e(2)} makes an oracle question of ⟨y, j⟩ for j ≤ p(2), we use Ψ_2 to answer this question. Thus, we get a reduction Δ_2 that runs in time O(|x|^{r(2)+n(2)}), with computable constants and correct use. Thus, again, B ≤ A, a contradiction.

To complete the proof, one generalizes the above directly into the relevant depth-n strategies, which are given in the obvious way, inductively. □

To conclude this chapter, we briefly mention some oracle results. Although there has always been much debate about the value of oracle results, particularly in view of the IP = PSPACE nonrelativizing result of Shamir [418], it is nevertheless felt that oracle results do give some insight into the difficulty and independence of various open questions.

Since there is an oracle A such that P^A = NP^A, there is an oracle with W[P]^A = FPT^A. We offer two additional oracle results. Before stating these results, we remark that we do not know of an oracle relative to which the W-hierarchy is infinite.

Theorem 19.29 ([168]) There exist oracles A and B such that (i) FPT^A ≠ W[1]^A; (ii) P^B ≠ NP^B, yet W[P]^B = FPT^B.

The Oracle Model. Before we turn to the proof, we should mention that it is not totally clear exactly what is wanted from an oracle in the parameterized setting. The natural thing is to allow only parameterized queries during the course of an oracle computation. This is the view suggested by looking at ≤^q for q ∈ {s, u, n}, and it is the one we adopt. Similarly, the classes we study are defined by circuits and, again, it seems reasonable to adopt the Wilson [463] view of using oracle gates, again modified for the parameterized setting; that is, a parameterized oracle gate with parameter k and oracle O will output 1 on input a = x₁···x_n (x_i ∈ {0, 1}), with the inputs read left to right, iff a ∈ O and a has weight k.

Proof of Theorem 19.29. (i) (We argue for FPT(≤^s); for the other reducibilities, see Exercise 19.2.10.) This is a minor variation of the Baker–Gill–Solovay construction. We use the set C defined as follows:

⟨z, k⟩ ∈ C iff (∃y)(|y| = |z| and y ∈ A and y has weight k).

Now, C is in W[1]^A via the circuit which, on input ⟨z, k⟩, consists of a single oracle gate with inputs z₁, ..., z_{|z|}, ordered left to right. Then, ⟨z, k⟩ ∈ C iff there is a weight-k word accepted by the gate (i.e., in A). It is routine, then, to build A to meet

R_{e,n} : Either φ_e is not total, or there is an ⟨x, k⟩ such that Φ_e(A; ⟨x, k⟩) ≠ C(⟨x, k⟩), or the running time exceeds φ_e(k)|x|^n.


To do this, we assign R_{e,n} some row k = k(n) with k > n, so that the number of weight-k strings of length m eventually dominates any bound of the form cm^n. Then, we meet R_{e,n} as follows. Wait until φ_e(k)↓. Then, find an x of sufficient length so as not to injure other requirements and so that the number of weight-k strings of length |x| exceeds φ_e(k)|x|^n. We can then diagonalize via ⟨x, k⟩ in the standard way, using a string of length |x| not addressed in the computation relative to A (if it is the case that Φ_e is outputting 0), or doing nothing, as the case may be, but then restraining the result so as not to be disturbed by future actions. The result is an easy finite extension argument.

(ii) Again, we do this for ≤^s and observe that the obvious modifications work for the other reducibilities. Let Q_e denote the e-th P-time relation. Define K^F via

⟨⟨x, e, 0^n⟩, k⟩ ∈ K^F iff for some y with |y| = |x|, y has weight k

and Q_e^F(y) holds in n steps.

In view of the direct relationship between circuits and relations, it is clear that K^F is W[P]^F-complete.

Now, let f be any recursive function from N to N. Suppose that we build a recursive set B such that, for each k and all x with |x| ≤ f(k), the value B(⟨x, k⟩) can be computed in g(k) many steps, and for all y with |y| > f(k), we have B(⟨y, k⟩) = F(⟨y, k⟩). We claim that B ≡^s F, so that W[P]^F = W[P]^B. To see this for the k-th row, for the reduction from B to F, say, we first compute g(k) and f(k). As B and F are recursive, we can write the corresponding initial segments into a table. Otherwise, ⟨x, k⟩ ∈ B iff ⟨x, k⟩ ∈ F, and hence B ≡^s F.

Now, take F with F = K^F; that is, define F via

⟨⟨x, e, 0^n⟩, k⟩ ∈ F iff for some y with |y| = |x|, y has weight k and Q_e^F(y) holds in n steps.

(This is a valid definition by recursion, since an n-step computation can only query shorter strings.) Then W[P]^F = FPT^F. Now, it will suffice to define f, g, and B as above and to ensure that P^B ≠ NP^B.

We do this as follows. We must meet the requirements

R_k : Γ_k(B) ≠ C,

where C ∈ NP^B and Γ_k denotes the k-th P-time procedure with use q_k, say. We meet R_k via row k + 1. We define C so that

(x, k) ∈ C iff (∃y)[|y| = |x| and ⟨y, k⟩ ∈ B], and hence C ∈ NP^B.
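To see the quantifier shape of C concretely, here is a brute-force rendering (ours; `B_query` stands in for the oracle B). A nondeterministic machine guesses y in a single branch; the simulation below tries all y deterministically, which is exactly why C is in NP^B but not obviously in P^B:

    from itertools import product

    def decide_C(x, k, B_query):
        """(x, k) is in C iff some y of the same length has (y, k) in B."""
        return any(B_query("".join(bits), k)
                   for bits in product("01", repeat=len(x)))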

At stage k, we will have defined f(i) and g(i) for i ≤ k and a restraint r(k). Choose x so that 2^{|x|} exceeds q_k(⟨x, k⟩) and |x| > r(k). Now, compute Γ_k(B_k; ⟨x, k + 1⟩), where B_k is the result of setting B equal to F on all ⟨y, j⟩ with j ≤ k and |y| > f(j), and setting B(⟨y, j⟩) equal to what we have decided at stages ≤ k for y with |y| ≤ f(j). (So, basically, we have decided at stage k the precise contents of B on ⟨y, j⟩ for j ≤ k.) Set B_k(⟨z, r⟩) = 0 for r ≥ k + 1.

Now, if Γ_k(B_k; ⟨x, k + 1⟩) = 1, define B(⟨z, k + 1⟩) = 0 for all z with |z| ≤ r(k + 1), with r(k + 1) ≥ r(k) and also exceeding all uses seen so far. (This means that future actions will not affect these protected computations.) If Γ_k(B_k; ⟨x, k + 1⟩) = 0, then for some y with |y| = |x|, we have that ⟨y, k + 1⟩ is not queried during


the computation of Γ_k(B_k; ⟨x, k + 1⟩), because 2^{|x|} exceeds q_k(|x|). Now, put ⟨y, k + 1⟩ into B and otherwise set B(⟨z, k + 1⟩) = 0 for all z with |z| ≤ r(k + 1). Set f(k + 1) = r(k + 1) and define g(k + 1) appropriately.

This ensures that Γ_k(B) ≠ C via the witness ⟨x, k + 1⟩, since inductively all previous restraints are maintained, and hence all previous disagreements are also preserved. Thus, NP^B ≠ P^B and yet FPT^B(≤_T^s) = W[P]^B.

Corollary 19.30 ([168]) There exist recursive oracles B and F with W[P]^B = W[P]^F = FPT^B = FPT^F, B ≡_T^s F, NP^F = P^F, yet P^B ≠ NP^B.

Proof. Take B and F as in the proof of Theorem 19.29(ii). We claim that NP^F = P^F. This will then give the desired result. Define D via

⟨x, e, 0^{p_e(|x|)}⟩ ∈ D iff some computation of Φ_e(F; x) accepts in p_e(|x|) steps.

Then, as usual, D is NP^F-complete. We claim that D ∈ P^F. To see this, simply note that

⟨x, e, 0^{p_e(|x|)}⟩ ∈ D iff (∃j)[j ≤ |x| and ⟨(x, h(e), 0^{p_{h(e)}(|x|)}), j⟩ ∈ F],

where Q_{h(e)} is the relation representing Φ_e. Hence, P^F = NP^F.
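Concretely, the displayed equivalence yields a deterministic procedure making |x| + 1 parameterized queries, one for each possible witness weight. In the sketch below (ours), `F_query` stands in for the oracle F, and the padding and the index translation h(e) are suppressed into the query:

    def decide_D(x, e, F_query):
        """Decide D with |x| + 1 parameterized oracle queries: one for
        each possible weight j of the guessed witness y."""
        return any(F_query((x, e), j) for j in range(len(x) + 1))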

The reader should note that Theorem 19.29(i) actually separates FPT^A from MONOTONE W[1]^A, which is not true for A = ∅. Actually, FPT^A ≠ MONOTONE W[1]^A can be shown to hold with probability 1 for a random oracle A (Exercise 12), so this gives yet another example of the failure of the random oracle hypothesis.

Historical Notes

Investigations into the structure of recursive languages under parameterized reducibilities began with Downey and Fellows [168], although the definitions of this section can be found in Refs. [166] and [167]. Other investigations are found in Ref. [119] and in Coles, Downey, and Sorbi [118], where some undecidability results are proven for the theories of the structures. These results are beyond the scope of the chapter and need complex infinitary applications of the speedup technique. (See Exercise 9.) Theorem 19.29(i) is independently due to Abrahamson [2]. Theorem 19.23 and a number of results in the exercises are new.

Exercises 19.2

19.2.1. (Downey and Fellows [168]) Prove the following: (i) Suppose A and B are recursive sets with A ≤_T B via a recursive collection {Φ_{g(k)} : k ∈ N} of reductions, all running in time O(|z|^α). Then A ≤_T^u B. (ii) Furthermore, if the running time is recursively bounded (and hence can be taken to be g(k)|z|^α), then A ≤_T^s B.


19.2.2. Prove (ii) of Lemma 19.20; that is, prove that if A ≤_m^u B (or A ≤_T^u B) with A and B recursive, then there exists a recursively enumerable function f such that A ≤_m^u B (resp. A ≤_T^u B) via f.

19.2.3. (Cai, Chen, Downey, and Fellows [110]) Let C be a class of functions

representing time (space) resource bounds. Recall from Part I, Chapter 5, that a (parameterized) language L is (uniform) C + advice if there is a (recursive) function g ∈ C, an oracle Turing machine Γ, and a function w : N → Σ* (the advice function) such that

(x, k) ∈ L iff Γ^{w(k)}(⟨x, k⟩) accepts,

and, furthermore, for all k and x, the running time (space) of Γ^{w(k)}(⟨x, k⟩) is ≤ g(|x|). Similarly, recall that (uniform) SLICEWISE C consists of those languages L accepted in time (space) f(k)g(|x|).
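To keep the two notions apart, it may help to set the resource bounds side by side; this is only a restatement of the definitions above, with g ∈ C fixed and f recursive:

(uniform) C + advice: resource used by Γ^{w(k)}(⟨x, k⟩) ≤ g(|x|), the parameter entering only through the advice w(k);

(uniform) SLICEWISE C: resource used on ⟨x, k⟩ ≤ f(k)·g(|x|), the parameter allowed to inflate the bound itself.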

Verify the following fact from Chapter 5 (Exercise 7):

(uniform) LOGSPACE + advice ≠ (uniform) SLICEWISE LOGSPACE.

[Hint: We only need to build a language whose k-th slice is accepted in space f(k) log |x|, where we get to build f. We need to construct f(k) so that we have enough room to diagonalize. We need the following fact.

Claim. Suppose that L is in LOGSPACE + advice. Then, there exists a witness oracle Turing machine Γ accepting L, running in space O(log |x|), which has an advice function w recursive in 0', the Turing degree of the halting problem.

First, prove the claim using the method of Lemma 19.20. Thus, it suffices to meet the following requirements.

R_{e,n} : Either ∃k (lim_s φ_e(k, s) (=_def φ_e(k)) does not exist), or ∃x, k (Φ_e^{φ_e(k)}(⟨x, k⟩) does not run in space n·log(|x|)), or ∃x, k (Φ_e^{φ_e(k)}(⟨x, k⟩) ≠ L(⟨x, k⟩)).

In the above, (φ_e(·, ·), Φ_e)_{e ∈ ω} is a simultaneous enumeration of all pairs consisting of an oracle Turing machine and a partial recursive binary function. We devote the ⟨e, n⟩-th slice of L to meeting the requirement R_{e,n}. We define f(⟨e, n⟩) = n + 1. We can assume (see Lemma 19.20) that if φ_e(⟨e, n⟩, s)↓ ≠ φ_e(⟨e, n⟩, t)↓ and t > s, then for all u ≥ t, φ_e(⟨e, n⟩, u) ≠ φ_e(⟨e, n⟩, s).

At stage s, on row ⟨e, n⟩ with ⟨e, n⟩ ≤ s, we perform the following action. If we see φ_e(⟨e, n⟩, t)↓ in ≤ log(s) space and Φ_e^{φ_e(⟨e,n⟩,t)}(⟨1^s, ⟨e, n⟩⟩)↓ in ≤ (n + 1) log(s) space, with t ≤ log(s), then for the largest such t, let

L(⟨1^s, ⟨e, n⟩⟩) = 1 − Φ_e^{φ_e(⟨e,n⟩,t)}(⟨1^s, ⟨e, n⟩⟩).

This action will succeed in diagonalizing against φ_e(⟨e, n⟩, t) being a value for lim_s φ_e(⟨e, n⟩, s), since either we will thus diagonalize against a final value for φ_e(⟨e, n⟩), or the possible values will change infinitely often, or the running space will never be correct, or, finally, φ_e(⟨e, n⟩) will be partial. In any case, we succeed in meeting the requirement R_{e,n}.]


19.2.4. (Cai, Chen, Downey, and Fellows [110]) Extending the notation of Exercise 3, let L_1 and L_2 be parameterized problems. We say that L_1 is (uniformly) SLICEWISE LOGSPACE reducible to L_2 iff there is an oracle procedure M and a (recursive) function f(k) such that for all z ∈ Σ* and all k ∈ ω,

(z, k) ∈ L_1 iff M^{L_2}(⟨z, k⟩) accepts, with M running in space f(k) log |z|.

Similarly, we can define (uniform) LOGSPACE + advice reductions by asking for the existence of an oracle Turing machine Γ and an advice function w so that for all x, k,

(x, k) ∈ L_1 iff Γ^{w(k) ⊕ L_2}(⟨x, k⟩) accepts,

with Γ running in space log |x|.

Let A ⊆ Σ*. Define N_1(A) = {(z, 0) : z ∈ A} and N_2(A) = {(z, k) : z ∈ A, k ∈ ω}. Note that N_1(A) ≡ N_2(A) under either of the reductions defined above. Also, let B be a parameterized problem. We can define the standardization of B to be S(B) = {((z, q), k) : (z, q) ∈ B and q ≤ k}. Again, note that B and S(B) have the same parameterized complexity. Standardization ensures that the structure of a set is smooth in the sense that the k-th row codes all the information of the preceding rows.
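For finite fragments, the operators N_1, N_2, and S can be written out directly. The rendering below is ours and purely illustrative; A is a finite set of strings, B a finite set of (string, parameter) pairs, and K a bound on the parameters considered:

    N1 = lambda A: {(z, 0) for z in A}
    N2 = lambda A, K: {(z, k) for z in A for k in range(K)}
    S = lambda B, K: {((z, q), k) for (z, q) in B for k in range(K) if q <= k}

    print(S({("ab", 1)}, 3))  # {(('ab', 1), 1), (('ab', 1), 2)}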

Prove the following: (i) Let A be P-complete. Then, N_1(A) and N_2(A) are complete for FPT under (uniform) LOGSPACE + advice reductions. (ii) Furthermore, if B is complete for FPT under (uniform) SLICEWISE LOGSPACE reductions, then for some k and all k' ≥ k, the k'-th row of S(B) is P-complete. Hence, conclude that (strongly uniform) FPT = (uniform) SLICEWISE LOGSPACE iff P = LOGSPACE. (iii) (uniform) SLICEWISE LOGSPACE has no problem complete under (uniform) LOGSPACE + advice reductions.

[Hint: This uses the technique of Exercise 3. If we had such a problem L, accepted in space f(k) log(|x|) slicewise, then we could build L' not computable from L in space 2^k f(k) log(|x|), since the additional 2^k multiplicative factor gives us plenty of extra space to diagonalize, and no finite advice could lift f(k) to 2^k f(k).]

19.2.5. Prove (i) and (ii) of Theorem 19.22; that is, prove that if q ∈ {m, T},

then {e : W_e ≤_q^s A} is a Σ_3 set,

and {e : W_e recursive and W_e ≤_q^u A} is a Σ_4 set.

19.2.6. Prove that {e : W_e ≡_q^s ∅} with q ∈ {m, T} is Σ_3-complete.

19.2.7. Prove that {e : W_e recursive and W_e ≡_q^r ∅} for q ∈ {m, T} and r ∈ {u, n} is Σ_4-complete.


19.2.8. (Cholak and Downey [126]) Call a language L row finite if L_k contains only finitely many elements for each k. Let q ∈ {m, T}. Prove the following: (i) Suppose that I is an ideal (i.e., I is closed under join and initial segments) in (REC, ≤_q^u) and I is generated by {A_i : i ∈ Q} with Q a Σ_3 set, and {A_i : i ∈ N} is a recursive collection of row finite recursive sets. Then, there is a collection C = {B_i : i ∈ M} of row finite recursive languages, with M recursive, such that I is equal to the ideal generated by C. (ii) Suppose that I is an ideal in (REC, ≤_q^n) generated by {A_i : i ∈ Q} with Q a Σ_3 set, and {A_i : i ∈ N} a recursive collection of row finite languages. Then, there is a recursive collection C of row finite recursive sets with I equal to the ideal generated by C.

[Hint: Use the method of Theorem 19.23.]

19.2.9. (Cholak and Downey [126], Weak Exact Pairs) This question assumes that the reader is familiar with the speedup technique for finite and infinite injury arguments in the study of the polynomial-time degrees. (See, for instance, Refs. [19, 161], and [421].) The Weak Exact Pair Theorem of this question is used by Cholak and Downey [126] to encode countable distributive lattices and, hence, deduce the undecidability of the elementary theory of the recursive languages under ≤_m^u and ≤_T^u. This argument is along the lines of, but very much more difficult than, the one of Ambos-Spies and Nies [22]. We remark that it is unknown if all ideals have exact pairs.

In the notation of Exercise 8, prove the following. Let {A_i : i ∈ Q} be a Σ_3 collection of row finite recursive languages. Then, the ideal I of degrees in (REC, ≤_q^u) generated by this collection has an "exact pair"; that is, there is a pair of recursive languages C and D such that a ∈ I iff a ≤_q^u c, d, where c denotes the uniform q-degree of C and d the uniform q-degree of D.

[Hint: By Exercise 8, we may assume that the collection {A_i : i ∈ Q} is recursive. We meet the requirements

R_{e,n} : Either lim_s φ_e(k, s) (=_def φ_e(k)) fails to exist for some k, or ∃x, k (Γ_e(C; ⟨x, k⟩) or Δ_e(D; ⟨x, k⟩) does not run in time φ_e(k)|x|^n), or the use on C or D exceeds φ_e(k), or Γ_e(C) ≠ Δ_e(D), or for some i, Γ_e(C) ≤_q^u ⊕_{j≤i} A_j.

This time, we shall code A_i into C_{⟨i,j⟩} and D_{⟨i,j⟩} for j ∈ N; that is, we ensure that we meet

P_i : A_{i,j} =* C_{⟨i,j⟩} =* D_{⟨i,j⟩} for all j.

Then the construction runs as follows. We can split the P_i into P_{i,j} for j ∈ ω, trying to achieve A_{i,j} =* C_{⟨i,j⟩} =* D_{⟨i,j⟩}. This is achieved by direct coding, subject to finite injury. We will have R_{e,n} respect P_{i,j}, e.g., for i, j < ⟨e, n⟩.


To combine the strategies, use the "speedup technique" of Downey [161], Shinoda and Slaman [421], or Ambos-Spies [20].]

19.2.10. Modify the proof of Theorem 19.29(i) to construct oracles A and B with (i) FPT(≤_m)^A ≠ W[1]^A,

(ii) FPT(≤_T)^B ≠ W[1]^B.

19.2.11. (i) Prove that collapse propagates from strong to weak levels of uniformity; that is, prove that if, e.g., W[1] = FPT for ≤_m^s, then W[1] = FPT for ≤_m^q with q ∈ {u, n}.

(ii) Construct oracles A and B with (a) W[P]^A = FPT^A for ≤_m yet W[1]^A ≠ FPT^A for ≤_m; (b) W[P]^B = FPT^B for ≤_T yet W[1]^B ≠ FPT^B for ≤_T.

[Hint: For (a), code the diagonalization slices above the use of the point of diagonalization. Use a priority argument.]

19.2.12. (For those who know the methods of Bennett and Gill [55]) Prove that MONOTONE W[1]^A ≠ FPT^A for a random oracle A.

19.2.13. (Cholak and Downey [126]) (Again, the reader will need knowledge of the speedup technique, at least to be able to make minimal pairs.) A natural question is whether there is an easy transference of results from the PTIME degrees (REC, ≤_m^P) to (REC, ≤_m^u). Cholak and Downey [126] showed that the obvious approach does not work, and the methods shed considerable light on the standard methods for getting results in (REC, ≤_m^u). The obvious approach is to take a given set A and define A' = {(x, 0) : x ∈ A}; that is, to concentrate A in row 0 of A'. The hope would be that if (e.g.) A and B form a minimal pair in (REC, ≤_m^P), then in (REC, ≤_m^u), A' and B' would form a minimal pair. (There are obviously many other structural properties we can attempt to transfer in this way. We concentrate here on minimal pairs as a test case.) Cholak and Downey showed that this is not true in a very strong sense, as we now see. First, note that usually when we make minimal pairs for P, we meet the requirements

R_e : If Φ_e(A) = Γ_e(B), then Φ_e(A) is recursive.

[Here, without loss of generality, we let (Φ_e, Γ_e), e ∈ N, denote a listing of all pairs of P-time procedures. See Exercise 16.] Suppose the running time is bounded by |x|^n. What we actually do in all known constructions is to ensure that if Φ_e(A) equals Γ_e(B), then we can compute the answer via the empty string, and hence Φ_e(A) is computable in time |x|^n. It follows that we would expect that Γ_e(A') would certainly be in FPT(≤_m^u). Prove that even this is not necessarily the case; that is, prove the following.

(Cholak and Downey [126]) There exist recursive A and B such that A and B form a PTIME minimal pair, yet A' and B' do not form a ≤_m^u minimal pair.


[Hint: Build A and B and an auxiliary set Q ≤_m^u A', B' to meet the R_e of a minimal pair requirement as well as the following.

N_{e,n} : Either φ_e(k, s) has no limit for some k, or the running time for some k, x exceeds φ_e(k)|x|^n, or the use bound is exceeded, or for some k, x, Q(⟨x, k⟩) ≠ Φ_{φ_e(k)}(∅; ⟨x, k⟩).

Make A and B a minimal pair by the speedup technique. Meet the N_{e,n} on row ⟨e, n⟩. Let g = ⟨e, n⟩. Split the requirement into infinitely many subrequirements that assert the above or that φ_e(k, s) changes at least p times, but give all priority ⟨e, n⟩. Call this R_{g,p} and give it priority ⟨g, p⟩. This will act if we have already seen R_{g,p−1} act, and we get a new value for φ_e(g, t) not previously seen and visible at stage s, and such that φ_e(g, t)·s^n ≤ s^{n+1}. At this stage, when N asserts control, it will find a very large follower x and declare that ⟨x, g⟩ ∈ Q iff x ∈ A iff x ∈ B. Note that this forces us, with priority ⟨g, p⟩, to put x into A iff we put x into B. Now, we do the obvious thing. Go to an appropriately huge stage and see if φ_e(g, t) is still the same and the running time and use are still okay, and if so, compute the value of Φ_{φ_e(g,t)}(∅; ⟨x, g⟩).

Then, we promise to diagonalize against Q(⟨x, g⟩) using x via A and B. This is done with priority ⟨e, n⟩. As with all speedup arguments, so long as this is not injured by higher-priority action, this will be the true action. Clearly, this action will meet the N_{e,n}. Why can we meet the R_k? Note that there are only a finite number of N_j of higher priority than R_k. Their only action is to declare certain configurations illegal and to make certain x enter A iff B. However, each action for this is bounded by time |x|^{n+1} for the relevant n for N_j. Thus, in time polynomially determined from the stage number and j, R_j can figure out whether the higher-priority requirements will assert control of the use of a computation, and what the answer will be. Otherwise, they will diagonalize as usual. Hence, the R_k are met. Finally, note that Q ≤_m^u A', B', since the action of an R_j is finite.]