

Information Processing Letters 109 (2008) 46–53


The parameterized complexity of probability amplification

J. Andrés Montoya

Escuela de Matemáticas, Universidad Industrial de Santander, Bucaramanga, Colombia


Article history: Received 29 March 2007; Accepted 13 June 2008; Available online 4 September 2008. Communicated by P.M.B. Vitányi.

Keywords: Computational complexity; Parameterized complexity; Probabilistic classes; Probability amplification

Abstract. In this paper we study the parameterized complexity of probability amplification for some parameterized probabilistic classes. We prove that it is very unlikely that W[P] has the probability amplification property.


1. Introduction

In this paper we study the parameterized complexity of probability amplification; specifically, we study the parameterized complexity of probability amplification for the parameterized class W[P]. It was proved, in the author's forthcoming PhD thesis, that if the class W[P] has the probability amplification property, then it is possible to prove a parameterized analogue of a theorem of Stockmeyer [10] claiming that approximate counting belongs to the polynomial hierarchy.

While probability amplification is an easy task in the classical world, this is not the case in the parameterized world. It is an open problem whether probabilities can be amplified for most of the parameterized classes defined using the parameterized BP operator. Furthermore, in the few positive cases, amplification of probabilities requires sophisticated techniques based on pseudorandom generators.

1.1. Parameterized complexity

Parameterized complexity theory provides a framework for a refined analysis of hard algorithmic problems. Let us quote Flum and Grohe ([5], preface, page v).

E-mail addresses: [email protected], [email protected].


Classical complexity theory analyses problems by the amount of a resource, usually time or space, that is required by algorithms solving them. The amount of the resource required is measured as a function of the input size. Measuring complexity only in terms of the input size means ignoring any structural information about the input instances in the resulting complexity theory. Sometimes, this makes problems appear harder than they typically are. Parameterized complexity theory measures complexity not only in terms of the input size, but in addition in terms of a parameter, which is a numerical value that may depend on the input in an arbitrary way. The main intention of the theory is to address complexity issues in situations where we know that the parameter is comparatively small.

A good example is the problem of evaluating database queries. From the classical point of view this problem is tractable only in very restrictive cases (the evaluation of conjunctive queries is already NP-hard!). If one reviews the hardness proofs for the database evaluation problem, it is easy to note that it is necessary to consider instances where the size of the query depends nontrivially on the size of the database. In real life, databases are huge and queries are small; this suggests that we can consider the size of the query as a parameter and measure the complexity of the problem in terms of two independent quantities, database size and query size. If we want to obtain something new, we have to consider a new (parameterized) notion of tractability.

The central notion of parameterized complexity theory is fixed parameter tractability. It relaxes the classical notion of tractability, polynomial time solvability, by admitting algorithms whose nonpolynomial behavior is restricted only by the parameter. In addition, the theory provides us with a parameterized intractability theory, allowing us to prove the intractability of certain problems by classifying them into parameterized complexity classes by means of suitable parameterized reductions.

1.2. Relations to previous work

This paper is related to some previous work in parameterized randomization and parameterized counting.

1.2.1. Probabilistic classes and probability amplification

Probabilistic computation is a major topic in complexity theory, and there is a lot of work on the fine structure of probabilistic complexity classes. Parameterized probabilistic classes are introduced and studied, for the first time, in this research. We introduce parameterized probabilistic classes through a parameterized analogue of the classical BP operator.

There is some previous work on parameterized randomization. In [2] a randomized fpt algorithm, which computes approximations to some #W[1]-hard counting problems, was presented and analyzed. In [1] the class RP · FPT was implicitly defined, and the hypothesis RP · FPT = FPT was used to show that resolution is not automatizable. It was proved in [9] that any counting problem in #W[P] can be probabilistically approximated in fpt time if oracle access to p-WSAT(CIRC) is provided; this theorem is a weak parameterized analogue of Stockmeyer's theorem. In [4] a weak parameterized analogue of the theorem of Valiant and Vazirani was proved: for any t ≥ 1, if there exist suitable amplification algorithms, then there is an fpt many-one randomized reduction of W[t] to Unique W[t]. Are there suitable fpt probability amplification algorithms? This is an important question which was left open in [4]. We cope, in this paper, with a related problem: given a parameterized class C, does C have the probability amplification property? Saying that C has the probability amplification property means that this class is robust, that is to say, given a problem L in BP · C we can decrease its error probability by reducing L to some other problem in C. In this paper we focus our research on the class W[P].

1.2.2. Counting complexity

A typical class of computational problems is the class of counting problems. Counting problems are at least as hard as decision problems, because if we can count the number of solutions we can decide whether there exists at least one solution. Counting Complexity, the complexity analysis of counting problems, was developed by L. Valiant in a series of groundbreaking articles published in 1979 [12,13]. Valiant proved that some counting problems are harder than expected: he proved that the problem of counting the number of perfect matchings in a graph is #P-complete, which is surprising because the corresponding decision problem, the problem of deciding whether a graph has at least one perfect matching, belongs to P. The big surprise came next, when S. Toda proved that every problem in the polynomial hierarchy can be reduced to any #P-complete problem [11], which implies that every problem in the polynomial hierarchy can be reduced to the problem of counting the number of perfect matchings. Thus, we can conclude:

(1) Hard counting problems are much more difficult than the corresponding decision problems (if the polynomial hierarchy does not collapse).

(2) There are counting problems which are hard, although the corresponding decision problems are tractable.

When we cope with counting problems we have the following alternative: we can try to compute approximate solutions instead of computing exact solutions. Approximating a counting problem is easier than computing exact solutions to the problem. Stockmeyer proved [10] that approximating the problem #SAT can be done in probabilistic polynomial time if oracle access to the decision problem SAT is provided; from this theorem Stockmeyer obtained, as a corollary, that approximate counting belongs to the second level of the polynomial hierarchy.

The problem considered in this paper arises in the struggle for parameterized analogues of Toda's and Stockmeyer's theorems. This is the case because, if one tries to mimic Toda's proof (or Stockmeyer's proof), one has to prove that some parameterized probabilistic classes defined using the BP operator have nice probability amplification properties. Parameterized Counting Complexity is the topic of the author's forthcoming PhD thesis. In his thesis the author has tried to obtain parameterized analogues of some structural theorems of classical Counting Complexity. To this end, he has introduced and studied some parameterized operators analogous to the classical operators used in the definitions of the classes of the polynomial hierarchy and in the definitions of most of the classical probabilistic classes. The operator calculus that arises from these definitions is related to the operator calculus of Downey and Fellows (see [3], Chapter 16). We consider that our approach is more naive than the approach of Downey and Fellows, and because of this it has some technical advantages; but it is important to remark that, while our operator calculus works well only for parameterized classes above W[P], the calculus of Downey and Fellows can be used to study the classes between FPT and W[P], including the classes of the W hierarchy [3].

This paper is related to classical Counting Complexity (Toda's work [11] and Stockmeyer's work [10]). There are few connections between this paper and previous work on parameterized Counting Complexity. Parameterized Counting Complexity is not yet a mature theory; there are few works on the topic [6,8], and no structural theorems like the one of Toda or the one of Stockmeyer. It is important to remark that one of the main open problems in parameterized Counting Complexity is the proof of a parameterized analogue of Valiant's theorem on the complexity of counting matchings; however, Flum and Grohe in [6] were able to prove that counting cycles and paths is hard from the parameterized point of view, although the corresponding decision problems are fixed parameter tractable.

1.3. Organization of the paper

In Section 2 we introduce the basic concepts of parameterized complexity theory.

Section 3 is the main section of the paper; it is divided into three subsections.

In Section 3.1 we introduce some parameterized operators, which are analogous to the classical operators ∀, ∃ and BP; these operators allow us to define new parameterized classes from old ones. Using the operators ∀ and ∃ we can define a hierarchy of parameterized classes analogous to the polynomial hierarchy. The BP operator allows us to define probabilistic parameterized classes. Using the operators BP and ∃ we define a parameterized analogue of the Arthur–Merlin class.

In Section 3.2 we study the probability amplification properties of parameterized classes defined by means of the BP operator. We prove that a parameterized class BP · C is well behaved, i.e. has good probability amplification properties, if the ground class C is closed under parameterized majority reductions; to prove this fact we have to use the pseudorandom generator of Ajtai et al. [7].

In Section 3.3 we prove that it is very unlikely that the parameterized Arthur–Merlin class has the probability amplification property.

In Section 4 some conclusions are presented.

2. A technical preface

In this section we introduce the basic definitions of Parameterized Complexity Theory; much more information can be found in [3] and in [5].

Notation 1. Σ∗ is the set of finite 0–1 words.

Definition 1. A parameterized problem is a subset of Σ∗ × N.

Example 1. p-WSAT(CIRC) is the following parameterized problem:

• Instances: (C,k), where C is a boolean circuit and k ∈ N.
• Parameter: k.
• Problem: Decide if there exists a satisfying assignment of C whose Hamming weight is equal to k.
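For intuition, here is a minimal brute-force sketch (ours, not from the paper) that decides a toy version of this problem, assuming the circuit is given as a Python predicate rather than as an encoded circuit. It inspects all weight-k assignments, so its running time is of order (number of inputs)^k; this trivial algorithm is not fpt, which is consistent with the problem being the canonical W[P]-complete problem (see Definition 6 below).

```python
from itertools import combinations

def weighted_sat(circuit, n_inputs, k):
    """Decide whether `circuit` has a satisfying assignment of Hamming
    weight exactly k, by exhaustive search over all C(n_inputs, k)
    candidate assignments.  Running time ~ n_inputs**k: a trivial
    algorithm, not an fpt one."""
    for ones in combinations(range(n_inputs), k):
        assignment = tuple(i in ones for i in range(n_inputs))
        if circuit(assignment):
            return True
    return False

def toy_circuit(x):
    # A stand-in "circuit": satisfied exactly when inputs 0 and 2 are both set.
    return x[0] and x[2]

print(weighted_sat(toy_circuit, n_inputs=5, k=2))  # True  (set inputs 0 and 2)
print(weighted_sat(toy_circuit, n_inputs=5, k=1))  # False (weight 1 cannot set both)
```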

The first important definition is the definition of an efficient algorithm; efficient algorithms will be called fpt algorithms.

Definition 2. An fpt algorithm is an algorithm M such that, on input (x,k), the running time of M is upperbounded by f(k) · p(|x|), for some computable function f and some polynomial p.

Now, we can use the notion of efficient algorithm to define a suitable notion of parameterized feasible problems. To this end we define the parameterized class FPT, whose elements are the parameterized feasible problems.

Definition 3. The parameterized class FPT is the class of parameterized problems that can be decided using an fpt algorithm.

The next important definition is the notion of reducibility that will be used in most parts of the paper.

Definition 4. Given L, L∗ two parameterized languages, L is fpt many-one reducible to L∗ (L ≤fpt L∗) if and only if there exist a computable function g and an fpt algorithm M such that, on input (x,k) ∈ Σ∗ × N, the algorithm M computes a pair (x∗,k∗) that satisfies:

(1) k∗ ≤ g(k).
(2) (x,k) ∈ L if and only if (x∗,k∗) ∈ L∗.

The following is a technical definition that will be used to define the parameterized class W[P], the parameterized analogue of NP.

Definition 5. Given L a parameterized problem, 〈L〉fpt := {L∗ : L∗ ≤fpt L}.

Definition 6. W[P] := 〈p-WSAT(CIRC)〉fpt.

The parameterized class W[P] has a good machine characterization.

Definition 7. A W[P]-restricted Turing machine M is a Turing machine such that:

(1) There exist a computable function f and a polynomial p such that, on every run of M with input (x,k), the running time of M is upperbounded by f(k) · p(|x|).

(2) There exists a computable function g such that, on every run of M with input (x,k), the machine M guesses at most g(k) · log(|x|) nondeterministic bits.

Theorem 1. L ∈ W[P] if and only if there exists a W[P]-restricted Turing machine M that decides L.

A proof of this theorem can be found in ([5], Theorem 3.14).

3. The hardness of probability amplification

This is the main section of the paper. To begin, we introduce the basic parameterized operators.

3.1. Basic definitions

In this section we introduce some basic parameterized operators. We introduce these operators to define some parameterized classes; these classes are analogous to the classical classes defined by means of the classical operators BP, ∃ and ∀.


Notation 2. In the following, if it is clear from the context, {0,1}^f will denote the set {0,1}^{f(k)·log(|x|)}.

Definition 8. Given L a parameterized language and given C a parameterized class (i.e. a set of parameterized languages closed under fpt many-one reductions):

(1) L ∈ ∃ · C if and only if there exist Ω ∈ C and a computable function f such that
(x,k) ∈ L ⇐⇒ ∃y ∈ {0,1}^f ((x, y,k) ∈ Ω).

(2) L ∈ ∀ · C if and only if there exist Ω ∈ C and a computable function f such that
(x,k) ∈ L ⇐⇒ ∀y ∈ {0,1}^f ((x, y,k) ∈ Ω).

(3) L ∈ BP · C if and only if there exist Ω ∈ C and a computable function f such that
• (x,k) ∈ L ⇒ Pr_{y∈{0,1}^f}[(x, y,k) ∈ Ω] ≥ 3/4.
• (x,k) ∉ L ⇒ Pr_{y∈{0,1}^f}[(x, y,k) ∈ Ω] ≤ 1/4.

Remark 1. It follows from the machine characterization of W[P] that W[P] = ∃ · FPT.

3.2. Majority reductions and probability amplification

A probabilistic parameterized class BP · C is well behaved if BP · C has some type of probability amplification, i.e. BP · C is well behaved if, given L ∈ BP · C, we can decrease the error probability associated to L by reducing L to some other problem in the class C. Here we introduce a formal notion of well-behavedness that we call the pam property.

Definition 9. Given C a parameterized class, C has the pam property if and only if for all L ∈ BP · C and for every computable function g, there exist Ω ∈ C and a computable function f such that

• (x,k) ∈ L ⇒ Pr_{y∈{0,1}^f}[(x, y,k) ∈ Ω] ≥ 1 − 2^{−g(k) log(|x|)}.
• (x,k) ∉ L ⇒ Pr_{y∈{0,1}^f}[(x, y,k) ∈ Ω] ≤ 2^{−g(k) log(|x|)}.

In this section we explore the relation between the closure of C under majority reductions and the probability amplification properties of BP · C. We prove that if C is maj-closed, then C has the pam property. The proof of this theorem is very similar to its classical analogue, but in addition we have to use in the proof the pseudorandom generator of Ajtai et al. [7] in order to save random bits.
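To make the role of the random bits explicit, the following back-of-the-envelope calculation (ours, not the paper's, using the notation of Definitions 8 and 9) compares naive independent repetition with the generator of Theorem 2 below.

```latex
Naive amplification runs the BP experiment $t$ times independently and takes
the majority vote.  If each run errs with probability at most $1/4$, then by
Hoeffding's inequality
\[
  \Pr[\text{the majority of the } t \text{ runs errs}] \le e^{-t/8},
\]
so $t = O(g(k)\log|x|)$ repetitions already reach the error bound of
Definition 9.  The obstacle is the seed length: with $m = f(k)\log|x|$ random
bits per run, independent repetitions consume
\[
  t \cdot m = \Theta\bigl(f(k)\,g(k)\,\log^{2}|x|\bigr)
\]
bits, which is not of the form $h(k)\log|x|$ required by Notation 2.  The AKS
generator of Theorem 2 produces the $t$ samples from a seed of only
\[
  N_{1}\bigl(f(k)\log|x| + g(k)\log|x|\bigr) = O\bigl((f(k)+g(k))\log|x|\bigr)
\]
bits, which does fit that format.
```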

Notation 3. From here on, ⊗ will denote the boolean operator Majority.

Definition 10. L is majority reducible to L∗ if and only if there exist an fpt algorithm M and two computable functions f, g such that, on input (x,k), the algorithm M computes a sequence

(x_1,k_1), . . . , (x_{f(k) log(|x|)}, k_{f(k) log(|x|)})

that satisfies:

(1) (x,k) ∈ L ⇔ ⊗_{j ≤ f(k) log(|x|)} (x_j,k_j) ∈ L∗.
(2) For all i ≤ f(k) log(|x|) we have that k_i ≤ g(k).

We will say that C is maj-closed if and only if C is closed under majority reductions.

The following theorem tells us that, in order to amplify the success probability (equivalently, to decrease the error probability), we can make a big saving of random bits if we use a suitable pseudorandom generator.

Theorem 2 (AKS theorem). There exist an algorithm, namely AKS, and constants N_1, N_2 ∈ N, such that for every m, i ∈ N and for all a ∈ {0,1}^{N_1(m+i)}, on input (a, i, m), the algorithm AKS computes a sequence a_1, . . . , a_{iN_2} ∈ {0,1}^m that satisfies: Given A ⊂ {0,1}^m,

(1) |A| ≥ (3/4) · 2^m ⇒ Pr_{a∈{0,1}^{N_1(m+i)}}[⊗_{j ≤ iN_2} a_j ∈ A] ≥ 1 − 2^{−i}.
(2) |A| ≤ (1/4) · 2^m ⇒ Pr_{a∈{0,1}^{N_1(m+i)}}[⊗_{j ≤ iN_2} a_j ∈ A] ≤ 2^{−i}.
(3) The running time of AKS is bounded by a polynomial p(m, i).

Remark 2. The algorithm AKS is the pseudorandom generator of Ajtai et al., which is based on expander graphs [7].
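The following toy script (an illustration written for this exposition, not code from [7]) shows the mechanism behind such randomness-efficient samplers: the samples are the vertices visited by a short walk on a regular graph, and a majority vote is taken over them. A random d-regular multigraph serves only as a crude stand-in for the explicit expander families used by the actual construction; the point of the sketch is the walk structure and the seed-length accounting.

```python
import math
import random

def random_regular_multigraph(n, d, rng):
    """Union of d random perfect matchings on n vertices (n even).
    Only a crude stand-in for an explicit expander family."""
    adj = [[] for _ in range(n)]
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        for i in range(0, n, 2):
            u, v = perm[i], perm[i + 1]
            adj[u].append(v)
            adj[v].append(u)
    return adj

def walk_samples(adj, d, t, rng):
    """t samples taken along a random walk; the 'seed' costs about
    log2(n) + t*log2(d) bits, versus t*log2(n) bits for i.i.d. samples."""
    v = rng.randrange(len(adj))
    samples = []
    for _ in range(t):
        v = adj[v][rng.randrange(d)]
        samples.append(v)
    return samples

def majority_in(A, samples):
    # Strict majority of the samples lands inside A.
    return 2 * sum(s in A for s in samples) > len(samples)

if __name__ == "__main__":
    rng = random.Random(0)
    m, d, t, trials = 12, 8, 15, 20000
    n = 2 ** m                                   # the universe {0,1}^m
    A = {v for v in range(n) if v % 4 != 0}      # |A| = (3/4) * 2^m
    adj = random_regular_multigraph(n, d, rng)

    walk_err = sum(not majority_in(A, walk_samples(adj, d, t, rng))
                   for _ in range(trials)) / trials
    iid_err = sum(not majority_in(A, [rng.randrange(n) for _ in range(t)])
                  for _ in range(trials)) / trials

    print(f"walk-based majority misses A with frequency {walk_err:.4f}")
    print(f"i.i.d.-based majority misses A with frequency {iid_err:.4f}")
    print(f"random bits per run: walk ~ {m + t * int(math.log2(d))}, "
          f"i.i.d. = {t * m}")
```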

Using the AKS theorem, we can easily prove the following theorem, which tells us that there exists a deep relation between closure under majority reductions and probability amplification.

Theorem 3. If C is maj-closed, then C has the pam property.

Proof. Let L, Ω be languages such that L ∈ BP · C, Ω ∈ C and

• (x,k) ∈ L ⇒ Pr_{y∈{0,1}^f}[(x, y,k) ∈ Ω] ≥ 3/4.
• (x,k) ∉ L ⇒ Pr_{y∈{0,1}^f}[(x, y,k) ∈ Ω] ≤ 1/4,

where f is some suitable computable function.

Given g a computable function, we define Ω^g in the following way:

Ω^g := {(x, y,k): y ∈ {0,1}^{N_1(f+g)} and ⊗_{j ≤ N_2 g(k) log(|x|)} (x, z_j,k) ∈ Ω},

where z_1, . . . , z_{N_2 g(k) log(|x|)} is the output sequence of the algorithm AKS on input (y, g(k) log(|x|), f(k) log(|x|)). Ω^g ∈ C because C is maj-closed, and it follows from the AKS theorem that

• (x,k) ∈ L ⇒ Pr_{y∈{0,1}^{N_1(f+g)}}[(x, y,k) ∈ Ω^g] ≥ 1 − 2^{−g(k) log(|x|)}.
• (x,k) ∉ L ⇒ Pr_{y∈{0,1}^{N_1(f+g)}}[(x, y,k) ∈ Ω^g] ≤ 2^{−g(k) log(|x|)}.

Therefore, C has the pam property. □


Corollary 1. FPT has the pam property.

Proof. FPT is closed under majority reductions. □

Definition 11. L is conjunctive reducible to L∗ if and only if there exist an fpt algorithm M and computable functions f, g such that, on input (x,k), the algorithm M computes a sequence (x_1,k_1), . . . , (x_{f(k) log(|x|)}, k_{f(k) log(|x|)}) that satisfies:

(1) (x,k) ∈ L ⇔ ∧_{i ≤ f(k) log(|x|)} (x_i,k_i) ∈ L∗.
(2) For all i ≤ f(k) log(|x|) we have that k_i ≤ g(k).

We will say that C is ∧-closed if and only if, for all L, if there exists L∗ ∈ C such that L is conjunctive reducible to L∗, then L ∈ C. The next proposition says that, if W[P] is ∧-closed, then W[P] is maj-closed.

Proposition 1. If W[P] is ∧-closed, then W[P] is maj-closed.

Proof. Let L be a language in W[P] and let Σ be a language which is maj-reducible to L. There exist an fpt algorithm M and two computable functions f and g such that for all (x,k):

(1) On input (x,k), the algorithm M computes a sequence (x_1,k_1), . . . , (x_{f(k) log(|x|)}, k_{f(k) log(|x|)}).
(2) (x,k) ∈ Σ ⇔ ⊗_{i ≤ f(k) log(|x|)} (x_i,k_i) ∈ L.
(3) For all i ≤ f(k) log(|x|), we have that k_i ≤ g(k).

Let M′ be the following nondeterministic W[P]-restricted Turing machine. On input (x,k):

(1) M′ simulates the computation of M on input (x,k).
(2) Given (x_1,k_1), . . . , (x_{f(k) log(|x|)}, k_{f(k) log(|x|)}), the output of M on input (x,k), the machine M′ guesses v ∈ {0,1}^{f(k) log(|x|)} such that the Hamming weight of v is larger than f(k) log(|x|)/2.
(3) M′ computes the sequence {(x_i,k_i): v_i = 1}.
(4) M′ verifies that ∧_{i ≤ f(k) log(|x|): v_i=1} (x_i,k_i) ∈ L.

We are allowed to suppose that M′ is a nondeterministic W[P]-restricted Turing machine, since we are supposing that W[P] is ∧-closed, which implies that the verification in step four can be carried out by a machine of this type. Furthermore, it is easy to verify that (x,k) ∈ Σ if and only if M′ accepts (x,k). Then, we have that Σ ∈ W[P]. Therefore, we have proven that W[P] is maj-closed. □

Corollary 2. If W[P] is ∧-closed, then W[P] has the pam property.
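The logical identity behind steps (2)–(4) of the machine constructed in the proof of Proposition 1 is that a strict majority of true bits holds if and only if there is a set of more than half of the indices on which the conjunction holds. A brute-force sanity check of this identity (ours, for illustration only):

```python
from itertools import combinations, product

def majority(bits):
    """Strict majority: more than half of the bits are 1."""
    return 2 * sum(bits) > len(bits)

def exists_majority_subset(bits):
    """Is there a set S of indices with |S| > n/2 such that
    the conjunction of the bits indexed by S holds?"""
    n = len(bits)
    return any(all(bits[i] for i in S)
               for size in range(n // 2 + 1, n + 1)
               for S in combinations(range(n), size))

# The two predicates agree on every boolean vector (checked here for n <= 8).
assert all(majority(b) == exists_majority_subset(b)
           for n in range(1, 9)
           for b in product((0, 1), repeat=n))
print("strict majority == exists a majority-size all-true subset (n <= 8)")
```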

3.3. The parameterized Arthur–Merlin class BP · ∃ · FPT and probability amplification

It was proved, in Author’s forthcoming PhD thesis, thatapproximate counting belongs to the third level of the

PH[P ] hierarchy (a parameterized analogue of the polyno-mial hierarchy), if the class BP ·∃·FPT has some strong formof probability amplification, i.e. if W [P ] has the pam prop-erty. We will show that it is very unlikely that W [P ] hasthe pam property.

Remember that if W[P] is ∧-closed, then W[P] has the pam property. In this section we prove a partial converse: we prove that if W[P] is not closed under conjunctive reductions, then there exist languages in BP · ∃ · FPT that cannot be amplified using some special type of algorithms.

We begin with some definitions.

Definition 12 (p-CLOGSAT).

• Instances: (C,k), where k ∈ N and C is a boolean circuit whose number of input gates is less than or equal to k log(|C|).
• Parameter: k.
• Problem: Decide if C is satisfiable.

It is easy to prove, using the k log n trick of Downey and Fellows ([5], p. 52), that p-CLOGSAT is W[P]-complete.

Let N_1, N_2 be two natural numbers.

Definition 13. An (N_1, N_2)-sampling algorithm is an algorithm M such that for every y ∈ {0,1}^{N_1 k log(n)} and for every k ∈ N, on input (y,k), the algorithm M computes a sequence y_1, . . . , y_{N_2 k log(n)} of elements of {0,1}^{k log(n)}.

Definition 14. The parameterized gap problem gap-p-CLOGSAT is the following problem:

• Instances: (C(X, Y),k), where k ∈ N and C(X, Y) is a boolean circuit whose input gates are partitioned in two blocks X, Y such that |X|, |Y| ≤ k log(|C|).
• Parameter: k.
• Yes-instances: (C(X, Y),k) such that Pr_{y∈{0,1}^{k log(|C|)}}[(C(X, y),k) ∈ p-CLOGSAT] ≥ 3/4.
• No-instances: (C(X, Y),k) such that Pr_{y∈{0,1}^{k log(|C|)}}[(C(X, y),k) ∈ p-CLOGSAT] ≤ 1/4.

It is straightforward to verify that gap-p-CLOGSAT ∈ BP · ∃ · FPT.
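A sketch of this verification, under our reading of Definition 8 (the paper does not spell it out): the Y-block plays the role of the random string of the BP operator and the X-block plays the role of the existentially quantified string.

```latex
Take
\[
  \Omega := \bigl\{\, \bigl((C(X,Y),k),\,y,\,x\bigr) \;:\;
            C(x,y) \text{ evaluates to } 1 \,\bigr\},
\]
which belongs to $FPT$ because circuit evaluation takes polynomial time.
For an instance $(C(X,Y),k)$ of gap-p-CLOGSAT,
\[
  \Pr_{y\in\{0,1\}^{k\log|C|}}
  \Bigl[\exists\, x \text{ (an assignment to the $X$-block)}:\; C(x,y)=1\Bigr]
\]
is at least $3/4$ on Yes-instances and at most $1/4$ on No-instances, which is
the $BP\cdot\exists\cdot FPT$ pattern of Definition 8 with $f(k)$ of order $k$
(modulo the fact that gap-p-CLOGSAT is a promise problem).
```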

We will say that gap-p-CLOGSAT is amplified by means of the (N_1, N_2)-sampling algorithm M (whose output on input (y,k) we denote y_1, . . . , y_{N_2 k log(|C|)}) if and only if:

(1) For any Yes-instance (C(X, Y),k),

Pr_{y∈{0,1}^{N_1 k log(|C|)}}[⊗_{i ≤ N_2 k log(|C|)} ((C(X, y_i),k) ∈ p-CLOGSAT)] ≥ 1 − 2^{−log(|C|)}.

(2) For any No-instance (C(X, Y),k),

Pr_{y∈{0,1}^{N_1 k log(|C|)}}[⊗_{i ≤ N_2 k log(|C|)} ((C(X, y_i),k) ∈ p-CLOGSAT)] ≤ 2^{−log(|C|)}.


(3) The language {(C_y(X, Y),k): ϕ} belongs to W[P], where y ∈ {0,1}^{N_1 k log(|C|)}, (C_y(X, Y),k) is equal to 〈C(X, y_1), . . . , C(X, y_{N_2 k log(|C|)}),k〉, and ϕ is the condition: "(C(X, Y),k) is an instance of gap-p-CLOGSAT (either a Yes-instance or a No-instance) and ⊗_{i ≤ N_2 k log(|C|)} (C(X, y_i),k) ∈ p-CLOGSAT".

Item (3) will be very important in the proof of our main result. Note that items (1) and (2) can be fulfilled using a sampling algorithm like the one of Ajtai et al. When we try to amplify a language in BP · W[P] (or in BP · C, where C is some nondeterministic parameterized class), the difficulties arise during the voting stage, because in this stage we are required to certify long sequences, and long sequences seem to require long certificates (we could say that conditions (1) and (2) do not matter, because there exist good pseudorandom generators).

Note that in the definition of sampling algorithm there are no constraints concerning the running time. It is clear that, if one wants to use a sampling algorithm to amplify probabilities in fpt time, one has to choose an fpt sampling algorithm. We have not explicitly included a running time condition in the definition of sampling algorithm, because the main result in this section rules out the possibility of using a sampling algorithm to amplify gap-p-CLOGSAT, even if the sampling algorithm is time consuming.

Given D ≥ 1, we define a parameterized language p-CLOGSAT_D.

Definition 15 (p-CLOGSAT_D).

• Instances: (C_1, . . . , C_{Dk log(|C_1|)}, k, M), where k, M ∈ N and for all i ≤ Dk log(|C_1|) we have that C_i is a circuit of size M whose number of input gates is less than or equal to k log M.
• Parameter: k.
• Problem: Decide if for all i ≤ Dk log(M), the pair (C_i,k) belongs to p-CLOGSAT.

Note that, if (C_1, . . . , C_{Dk log(M)}, k, M) ∈ p-CLOGSAT_D, the size of its natural certificates, i.e. sequences (v_1, . . . , v_{Dk log(M)}) of satisfying assignments, is equal to Dk^2 · log^2(M), which is very big because of the term log^2(M). Remember that, if a parameterized language L belongs to W[P], then there exists a computable function f such that for every instance (x,k) of L, if (x,k) ∈ L, the instance (x,k) can be certified using f(k) log(|x|) nondeterministic bits. For all D ≥ 1, it is very unlikely that p-CLOGSAT_D belongs to W[P], since it is very unlikely that there exists a computable function f such that we can certify a positive instance (C_1, . . . , C_{Dk log(M)}, k, M) of p-CLOGSAT_D using f(k) log(M) nondeterministic bits.
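The mismatch between the two certificate lengths can be made explicit by a rough count (ours, not the paper's, under the notation above):

```latex
A natural certificate consists of one weight-bounded assignment per circuit:
\[
  \underbrace{Dk\log(M)}_{\text{assignments}} \times
  \underbrace{k\log(M)}_{\text{bits each}}
  \;=\; Dk^{2}\log^{2}(M) \text{ bits},
\]
whereas a $W[P]$-style certificate for an instance of total size
$n \approx Dk\log(M)\cdot M$ may use only
\[
  f(k)\log(n) \;=\; O\bigl(f(k)\,(\log M + \log(Dk\log M))\bigr)
  \;=\; O\bigl(f(k)\log M\bigr)
\]
bits for fixed $D$ and $k$, i.e. it grows like $\log M$ rather than
$\log^{2} M$.
```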

Remark 3. Note that if W[P] is ∧-closed, then for all D ∈ N the language p-CLOGSAT_D belongs to W[P].

We will say that W[P] has the democratic-pam property if and only if every language in BP · ∃ · FPT can be amplified by means of some sampling algorithm.

We are ready to prove the major result of the paper. We will prove that W[P] has the democratic-pam property if and only if W[P] is ∧-closed.

Theorem 4. If gap-p-CLOGSAT can be amplified by means of a sampling algorithm, then there exists D ≥ 1 such that p-CLOGSAT_D ∈ W[P].

Proof. If we suppose that gap-p-CLOGSAT can be amplified by means of a sampling algorithm, then we can suppose, without loss of generality, that there exist N_1, N_2 ∈ N such that gap-p-CLOGSAT is amplified by an (N_1, 2N_2)-sampling algorithm. We will prove that p-CLOGSAT_{N_2} ∈ W[P] (i.e. we set D = N_2).

Let M be such an algorithm and let S := (C_1,k), . . . , (C_{N_2 k log(|C_1|)},k) be a sequence such that, for all i ≤ N_2 k log(|C_1|), C_i is a circuit of size |C_1| whose number of input gates is less than or equal to k log(|C_1|); that is, (C_1, . . . , C_{N_2 k log(|C_1|)}, k, |C_1|) is an instance of p-CLOGSAT_{N_2}. Given y ∈ {0,1}^{N_1 k log(|C_1|)}, the algorithm M, on input (y,k), computes a sequence y_1, . . . , y_{2N_2 k log(|C_1|)}. From the pair (S, y) we can define a No-instance of gap-p-CLOGSAT in the following way:

Let

C_{y,S}(X, Y) = ∨_{i ≤ 2N_2 k log(|C_1|)} (G_i(X) ∧ Y = y_i)

with:

• G_i(X) := C_i(X) if i ≤ N_2 k log(|C_1|).
• G_i(X) := (X = X) if i = N_2 k log(|C_1|) + 1.
• G_i(X) := ¬(X = X) if i > N_2 k log(|C_1|) + 1.

It is clear that |C_{y,S}(X, Y)| ≥ |C_1|. Note that

|X|, |Y| ≤ k log(|C_1|) ≤ k log(|C_{y,S}(X, Y)|),

that is, the pair (C_{y,S}(X, Y),k) is an instance of gap-p-CLOGSAT. We will prove that (C_{y,S}(X, Y),k) is a No-instance.

We have that, if |C_1|^k ≥ 8N_2 k log(|C_1|), then

Pr_{v∈{0,1}^{k log(|C_1|)}}[(C_{y,S}(X, v),k) ∈ p-CLOGSAT] ≤ Pr_{v∈{0,1}^{k log(|C_1|)}}[v ∈ {y_1, . . . , y_{2N_2 k log(|C_1|)}}] ≤ 2N_2 k log(|C_1|) / |C_1|^k ≤ 1/4.

Thus, if |C_1|^k ≥ 8N_2 k log(|C_1|), the pair (C_{y,S}(X, Y),k) is a No-instance of the language gap-p-CLOGSAT.

Now, we note that, given y ∈ {0,1}^{N_1 k log(|C_1|)},

⊗_{i ≤ 2N_2 k log(|C_1|)} ((C_{y,S}(X, y_i),k) ∈ p-CLOGSAT)

if and only if

∧_{i ≤ N_2 k log(|C_1|)} (C_i,k) ∈ p-CLOGSAT

if and only if

(C_1, . . . , C_{N_2 k log(|C_1|)}, k, |C_1|) ∈ p-CLOGSAT_{N_2}.

Remember that, if gap-p-CLOGSAT is amplified by M, we can decide if

⊗_{i ≤ 2N_2 k log(|C_1|)} ((C_{y,S}(X, y_i),k) ∈ p-CLOGSAT)

using a W[P]-restricted Turing machine. Hence, to verify if

(C_1, . . . , C_{N_2 k log(|C_1|)}, k, |C_1|) ∈ p-CLOGSAT_{N_2},

we can use a W[P]-restricted Turing machine that verifies if

⊗_{i ≤ 2N_2 k log(|C_1|)} ((C_{y,S}(X, y_i),k) ∈ p-CLOGSAT)

and uses this information to give us the correct answer. We can conclude that, if we could amplify gap-p-CLOGSAT by means of the sampling algorithm M, then p-CLOGSAT_{N_2} ∈ W[P]. □

Theorem 5. W[P] is ∧-closed if and only if p-CLOGSAT_1 ∈ W[P].

Proof. It is clear that if W[P] is ∧-closed, then p-CLOGSAT_1 ∈ W[P], since p-CLOGSAT_1 is conjunctive reducible to p-CLOGSAT.

Let L and Σ be two languages such that L ∈ W[P] and Σ is conjunctive reducible to L. There exist an fpt algorithm M and two computable functions f and g such that:

(1) For all (x,k), the pair (x,k) ∈ Σ if and only if ∧_{i ≤ f(k) log(|x|)} (x_i,k_i) ∈ L.
(2) For all i ≤ f(k) log(|x|) we have that k_i ≤ g(k).

We can suppose, without loss of generality, that L is equal to p-CLOGSAT, since p-CLOGSAT is W[P]-complete. It is easy to show, using a standard padding argument, that there exist a computable function h and an fpt algorithm M∗ such that, on input (x_1,k_1), . . . , (x_{f(k) log(|x|)}, k_{f(k) log(|x|)}), the algorithm M∗ computes a sequence

(C_1,h(k)), . . . , (C_{h(k) log(|C_1|)}, h(k))

of boolean circuits that satisfies:

(1) ∧_{i ≤ f(k) log(|x|)} (x_i,k_i) ∈ p-CLOGSAT if and only if ∧_{i ≤ h(k) log(|C_1|)} (C_i,h(k)) ∈ p-CLOGSAT.
(2) For all i ≤ h(k) log(|C_1|) we have that C_i is a circuit of size |C_1| whose number of input gates is upperbounded by h(k) log(|C_1|).

It is clear that the composition of the algorithms M and M∗ computes an fpt many-one reduction of Σ to p-CLOGSAT_1. Hence, we have that if p-CLOGSAT_1 ∈ W[P], then Σ ∈ W[P]. Thus, we have proven that W[P] is ∧-closed. □

Corollary 3. W[P] has the democratic-pam property if and only if W[P] is ∧-closed.

What is the meaning of our results? We have proven that, if one wants to amplify probabilities of languages in BP · W[P] using a sampling algorithm, there are two possibilities.

(1) One can choose as sampling algorithm a good pseudorandom generator. In this case, if W[P] is not ∧-closed, one will be unable to fulfill condition (3) in our definition of probability amplification by means of a sampling algorithm.

(2) One can choose a sampling algorithm which only outputs unlikely sequences that can be certified using few bits. In this case, one will be unable to cheat gap-p-CLOGSAT, i.e. one will be unable to amplify gap-p-CLOGSAT.

4. Conclusions

It is usual that arguments and techniques that work in the classical setting do not work in the parameterized setting; in the remaining cases the arguments have to be modified and many technical details have to be considered. For example, while probability amplification is an easy task in the classical setting, it is an open problem, for most of the parameterized probabilistic classes defined by means of the BP operator, whether we can amplify them. Furthermore, when we can amplify, we have to use sophisticated techniques based on pseudorandom generators.

In Author’s forthcoming PhD thesis the following theo-rems were proven:

(1) (parameterized Toda's theorem) If ⊕ · FPT is maj-closed, then

A ⊆ PH[P] ⊆ 〈p-#WSAT(CIRC)〉_R,

where A is the A hierarchy of parameterized complexity theory and 〈p-#WSAT(CIRC)〉_R denotes the closure of p-#WSAT(CIRC) under parameterized random reductions.

(2) (parameterized Valiant–Vazirani theorem) If ⊕ · FPT is ∨-closed, then W[P] ⊆ BP · ⊕ · FPT.

(3) (parameterized Stockmeyer's theorem) If W[P] is ∧-closed, then approximate counting of problems in #W[P] belongs to the second level of the PH[P] hierarchy (the PH[P] hierarchy, a parameterized analogue of the polynomial hierarchy, was introduced in the author's forthcoming PhD thesis).

Note that in each of the theorems listed above we have used an additional hypothesis concerning the closure of some parameterized class with respect to some special type of truth table reductions. Why have we needed these closure assumptions? We have used these hypotheses to amplify probabilities. A natural question is the following one: to what extent does probability amplification depend on these closure assumptions? This paper is a first step into this line of research; we strongly believe that this question deserves further investigation.


Acknowledgement

Thanks to Joerg Flum, thanks to VIE-UIS project number 5153-07, thanks to an anonymous referee for helpful suggestions.

References

[1] M. Alekhnovich, A. Razborov, Resolution is not automatizable unless W[P] is tractable, in: Proceedings 42nd IEEE FOCS, 2001, pp. 210–219.

[2] V. Arvind, V. Raman, Approximation algorithms for some parameterized counting problems, in: P. Bose, P. Morin (Eds.), Proceedings of the 13th Annual International Symposium on Algorithms and Computation, in: Lecture Notes in Computer Science, vol. 2518, Springer-Verlag, 2002, pp. 453–464.

[3] R.G. Downey, M.R. Fellows, Parameterized Complexity, Springer-Verlag, 1999.

[4] R.G. Downey, M. Fellows, K. Regan, Parameterized circuit complexity and the W hierarchy, Theoretical Computer Science 191 (1998) 97–115.

[5] J. Flum, M. Grohe, Parameterized Complexity Theory, Springer-Verlag, 2006.

[6] J. Flum, M. Grohe, The parameterized complexity of counting problems, SIAM Journal on Computing 33 (4) (2004) 892–922.

[7] O. Goldreich, Randomized methods in computation, Manuscript, http://www.wisdom.weizmann.ac.il/~oded/rnd.html, 2001.

[8] C. McCartin, Parameterized counting problems, in: K. Diks, W. Rytter (Eds.), Proceedings of the 27th International Symposium on Mathematical Foundations of Computer Science, in: Lecture Notes in Computer Science, vol. 2420, Springer-Verlag, 2002, pp. 556–567.

[9] M. Muller, Randomized approximations of parameterized counting problems, in: Proceedings 2nd IWPEC, in: Lecture Notes in Computer Science, vol. 4169, Springer-Verlag, 2006, pp. 50–59.

[10] L. Stockmeyer, On approximation algorithms for #P, SIAM Journal on Computing 14 (4) (1985) 849–861.

[11] S. Toda, PP is as hard as the polynomial-time hierarchy, SIAM Journal on Computing 20 (5) (1991) 865–877.

[12] L.G. Valiant, The complexity of computing the permanent, Theoretical Computer Science 8 (1979) 189–201.

[13] L.G. Valiant, The complexity of enumeration and reliability problems, SIAM Journal on Computing 8 (3) (1979) 410–421.