
Dec 14 - R2




On the solvability of sparse and dense 3-SAT instances

directly by Packed State Stochastic Process

Kavosh Havaledarnejad [email protected]

Abstract:

In this paper we review the solvability of 3-SAT instances in sparse and dense cases directly using Packed Computation algorithms ( without reduction to 3-RSS ). The methods can perhaps be used directly for solving the k-SAT problem in general, or (p, q)-CSP in sparse or dense cases, when constraints are defined as conflicts between states of variables. In this paper we compare one randomized packed algorithm with Schöning's algorithm and two modern versions of Schöning's algorithm, and show that our algorithm beats these modern algorithms only on random generated instances. We use benchmarks that are available in SATLIB and show that the algorithm is able to solve almost all of them. It remains open to prove whether the theoretical worst-case complexity of this algorithm for 3-SAT is polynomial or exponential.

Keywords: Packed Computation, Randomized Algorithms, Packed State Stochastic Process, 3-SAT,

Complexity, NP class, Random Walk

1. Introduction

The 3-SAT problem is a well-known NP-Complete problem that has received much attention in the past four decades, because a solution for this problem yields a solution for many other problems in computer science. One way of dealing with this problem is local search, and in particular the type of local search known as Walk-Sat ( the random walk algorithm ). In 1991 Papadimitriou proved that Walk-Sat is able to solve the 2-SAT problem in polynomial ( in fact quadratic ) time [17]. In 1999 U. Schöning proved a complexity bound of poly(n) · (4/3)ⁿ for the 3-SAT problem using a multi-start random-walk algorithm [18]. Recently, many efforts have been devoted to designing new resolution algorithms, local searches and Walk-Sat variants, or to derandomizing known randomized methods into deterministic algorithms ( see [9-16] ). They obtained better, but still exponential, bounds for the 3-SAT problem.

A Walk-Sat algorithm first begins with a random truth assignment and continues, in each play, by choosing a clause that is not satisfied, then selecting one literal uniformly at random from that clause and flipping its value. A multi-start Walk-Sat continues this process for a limited time and, if it cannot find a result, begins again with a new random assignment. There are new and modern versions of the random walk algorithm, for example [19], that even beat the algorithms of the winners of SAT Competition 2011.

Our algorithm is very similar to Schöning's algorithm, except that for every variable, beside the "True" and "False" states, we have a new state: the "Packed State". In this state the variable is non-deterministic. When a variable is in the True or False state, we say it is in a single state. When a variable is in a single state its priority is 1, and when a variable is in the packed state its priority is 1/2. Fig. 1 shows these states and their priorities in a graph.


Fig. 1

Let us return to Schöning's 1999 algorithm. Consider a clause, for example of the form ( x ˅ y ˅ ¬z ). We can consider this clause as a conflict between three variable-states: ( x = F, y = F, z = T ). In this case we say we have a conflict between those states of those variables ( Fig. 2 ).

Fig. 2

In Schöning's algorithm, when the system chooses such a clause, it selects one variable uniformly at random and flips that variable so that it has a different state ( here True ). In our algorithm, when the system chooses such a clause, it first observes the priorities ( each priority may be 1 or 1/2 ). If all priorities are equal to each other, the system selects one variable uniformly at random. If two variables, or one variable, have lower priority than the others, the system chooses one of them ( those with lower priority ) uniformly at random. Fig. 3-6 show these conditions in diagrams.

Fig. 3

Fig. 4


Fig. 5

Fig. 6
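The priority-based selection rule of Fig. 3-6 can be sketched as follows ( a minimal illustration; the state names 'T', 'F', 'P' and the PRIORITY table are our own encoding, not from the paper's implementation ):

```python
import random

# Priorities as in the text: single states 'T'/'F' have priority 1,
# the packed state 'P' has priority 1/2.
PRIORITY = {"T": 1.0, "F": 1.0, "P": 0.5}

def pick_variable(clause_vars, assignment):
    """Pick, uniformly at random, one variable of the clause among those
    whose current state has the lowest priority ( ties broken uniformly )."""
    lowest = min(PRIORITY[assignment[v]] for v in clause_vars)
    candidates = [v for v in clause_vars if PRIORITY[assignment[v]] == lowest]
    return random.choice(candidates)
```

When exactly one variable has the lowest priority, the candidate list has a single entry and the choice is forced, as noted in the text.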

It remains to explain what happens when we choose a variable-state to flip. When we choose a state of a variable that is in a single state, it will convert to the packed state, and when we choose a state of a variable that is in the packed state, it will convert to a single state ( but a different one ). Fig. 7 and Fig. 8 show these conditions.

Fig. 7

Fig. 8

One question is what happens if the algorithm terminates ( when no conflict is left in the system ) while some variables are still in the packed state. In this case, the values of the variables that have a definite value, like true or false, are kept as they are, and for the variables that are in the packed state we can select any arbitrary state. This means that sometimes, from one run of the algorithm, we can extract more than one satisfying assignment from the output.

Packed Computation is a novel approach to problem solving. In a prior paper [1] ( that paper precedes the current work but will probably be published after it ), the author showed how random generated instances of 3-RSS are tractable using Packed Computation algorithms ( it is not clear that there are no scarce worst cases that are intractable ). Also, we can easily reduce a 3-SAT instance to a 3-RSS instance and then solve it. Since the reduced instance in the dense case may have more dimensions than the original problem, the question is: can we apply Packed Computation directly to 3-SAT? This paper is a reply to this question. A 3-SAT instance consists of n variables and m clauses. A 3-SAT instance is a Boolean formula that is a ˄ operator over clauses:

F = C₁ ˄ C₂ ˄ … ˄ Cₘ


Each clause is a ˅ operator over literals:

Cⱼ = lⱼ,₁ ˅ lⱼ,₂ ˅ lⱼ,₃

Every literal is a variable or its negation:

lⱼ,ₖ ∈ { xᵢ , ¬xᵢ } ,  xᵢ ∈ { T , F }

The only thing a clause does is cancel one case for 3 variables; for example, a clause of the form ( x ˅ y ˅ ¬z ) cancels the case ( x = F, y = F, z = T ). ( It shows a conflict between 3 states; note that 3-SAT is a special case of the CSP problem, see [8]. ) Thus we can model a 3-SAT instance with a conflict matrix, like an RSS instance with up to 8 · C(n, 3) = 8n(n−1)(n−2)/6 possible connections or conflicts. For this research we only use instances that we are sure have at least one solution. The methods can perhaps be used directly for solving the k-SAT problem in general, or (p, q)-CSP in sparse or dense cases, when constraints are defined as conflicts between states of variables; but research regarding these cases is outside the goal of this paper. The expected number of clauses for a 3-SAT problem containing n variables with clause density d is d · 8 · C(n, 3) = Θ(d n³), since each of the possible clauses is included with probability d. It is obvious that the number of clauses grows faster than the number of variables. If we reduce a 3-SAT instance to an RSS instance, the number of rules will be the number of clauses of the 3-SAT instance. Thus, if we can solve a 3-RSS instance within some time bound, we can solve a 3-SAT instance within the corresponding bound for the ( possibly larger ) reduced instance; hence, if the complexity of solving 3-SAT in a direct way is less than this, it has a benefit. In the rest of the paper we will introduce one algorithm based on Packed Computation and compare it with Schöning's 1999 algorithm [18]. We call this algorithm PSSP ( Packed State Stochastic Process ). We will introduce two sub-versions of this algorithm. The time limits that we considered for the algorithm are polynomial, and within these polynomial limits the algorithm is able to solve almost all benchmarks that are available in SATLIB.

2. States of variables

Whereas in the 3-SAT problem variables are Boolean, they have only two states, true and false; thus there is only one packed state, which is the combination of true and false. Hence, during the execution of the algorithms, variables only stand in the T, F or TF ( packed ) states. When a variable is in the T state the priority of this state is 1, and likewise for the F state, but when a variable is in the TF state the priority of both the T and F states is 1/2. In a classic walk-sat algorithm variables only stand in the true or false states and transform into each other directly ( Fig. 9 ), but in a packed walk-sat variables rotate in two cycles between the packed state and the single states ( Fig. 10 ).


(Fig. 9) Classic Local Search (Fig. 10) Packed Local Search

It remains to explain the treatment of clauses containing literals in the packed state. If a clause contains a true literal and the other literals are in false or packed states, we consider this clause a satisfied clause. But if a clause contains only false and packed literals, we consider this clause an unsatisfied clause. We can show these conditions with the formulas below:

satisfied( C ) ⟺ ∃ l ∈ C : value( l ) = T
unsatisfied( C ) ⟺ ∀ l ∈ C : value( l ) ∈ { F, P }
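These satisfied/unsatisfied conditions can be expressed as a small evaluator ( a sketch; the (variable, positive) literal encoding and 'T'/'F'/'P' state names are assumptions of ours ):

```python
def literal_value(lit, assignment):
    """Value of a literal given its variable's state in {'T', 'F', 'P'}.
    Negation swaps T and F; the packed state 'P' stays 'P'."""
    var, positive = lit
    state = assignment[var]
    if state == "P":
        return "P"
    if positive:
        return state
    return "T" if state == "F" else "F"

def clause_is_satisfied(clause, assignment):
    """A clause counts as satisfied iff it contains at least one true literal;
    a clause whose literals are all false or packed counts as unsatisfied."""
    return any(literal_value(l, assignment) == "T" for l in clause)
```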

2.1. Matrix Representation and generalization

We can represent the states of such a system with two-dimensional vectors of the form

[ pT , pF ]ᵀ ,  pT + pF = 1 ,

where pT and pF are the priorities of the T and F states. For the PSSP-1 algorithm these states are only:

{ [ 1, 0 ]ᵀ , [ 0, 1 ]ᵀ , [ 1/2, 1/2 ]ᵀ }

We could choose more complicated state sets, containing additional mixed states with other fractional priorities, or even real assignments like:

{ [ p , 1 − p ]ᵀ : p ∈ [ 0, 1 ] }

The researcher designed some algorithms based on these conditions, but the practical performance of these conditions was not better than our simple PSSP-1 algorithm for random generated instances ( please note that random generated instances are not the general case ). In addition, the aim of this work is only to propose Packed Computation as a powerful method. Thus in this paper we will focus on this simple algorithm.

Consider a literal l in an unsatisfied clause that we want to flip. Whereas the clause is unsatisfied, l must be only F or P. If l is F it will convert to P, and if l is P it will convert to T. Assume that we can represent the flip operator as a matrix M acting on the state vectors [ pT , pF ]ᵀ; thus we have:

M [ 0 , 1 ]ᵀ = [ 1/2 , 1/2 ]ᵀ ,  M [ 1/2 , 1/2 ]ᵀ = [ 1 , 0 ]ᵀ

Assume that:

M = [ a  b ; c  d ]

Solving these equations we have:

M = [ 3/2  1/2 ; -1/2  1/2 ]

Please note that we wrote this matrix for acting on a literal, not a variable. If l = x then the matrix for the variable is the same, but if l = ¬x then the matrix of the flip for the variable will be:

M′ = [ 1/2  -1/2 ; 1/2  3/2 ]

Please note that in the classical Schöning algorithm the states and the flip matrix are of the form:

[ 1 , 0 ]ᵀ , [ 0 , 1 ]ᵀ , [ 0  1 ; 1  0 ]
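The two flip matrices can be checked numerically. This small sketch, with states written as [pT, pF] vectors, verifies that the positive-literal matrix maps F to the packed state and the packed state to T, and that the negated-literal matrix does the mirrored work:

```python
# States as [p_T, p_F] priority vectors.
T, F, P = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]

def matvec(m, v):
    """Multiply a 2x2 matrix ( list of rows ) by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

M = [[1.5, 0.5], [-0.5, 0.5]]       # flip for a positive literal: F -> P, P -> T
M_NEG = [[0.5, -0.5], [0.5, 1.5]]   # flip for a negated literal:  T -> P, P -> F
```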

3. Schöning's Algorithm

Let us first introduce a well-known method ( see [18] ) for dealing with k-SAT problems. It is a simple local search algorithm. Schöning established a powerful mathematical proof showing that the expected number of steps for this algorithm to work out a 3-SAT instance is at most bounded by a polynomial times (4/3)ⁿ.

Algorithm 3.1. SCHÖNING'S

In this algorithm variables are always in T or F. If a clause has a conflict and the selected variable is T, it will convert to F, and vice versa.
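A minimal sketch of this multi-start random walk, assuming a (variable, positive) encoding of literals; the choice of 3n local steps per restart follows [18]:

```python
import random

def schoening(clauses, n, max_tries):
    """Minimal sketch of Schöning's multi-start random walk [18]. A clause is
    a list of (variable, positive) literals; returns a satisfying assignment
    ( list of booleans ) or None."""
    for _ in range(max_tries):
        a = [random.choice([True, False]) for _ in range(n)]
        for _ in range(3 * n):               # 3n local steps per restart, as in [18]
            unsat = [c for c in clauses
                     if not any(a[v] == pos for v, pos in c)]
            if not unsat:
                return a
            v, _ = random.choice(random.choice(unsat))
            a[v] = not a[v]                  # flip the variable of a random literal
        if all(any(a[v] == pos for v, pos in c) for c in clauses):
            return a
    return None
```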


4. Basic Action

To propose our algorithm, let us first introduce one basic action that is like the flip in Schöning's algorithm. This basic action is LocalChange.

Action 4.1. LocalChange( l ): in this action l is a literal of a variable x that is false or packed; thus l can be x or ¬x. If the literal is in a single state it converts to the packed state, and if the literal is in the packed state it converts to the true state ( Fig. 11 ).

Fig. 11
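LocalChange can be sketched directly from this description ( assuming the same 'T'/'F'/'P' state encoding as before ):

```python
def local_change(lit, assignment):
    """Action 4.1: lit = (variable, positive) is a false or packed literal.
    A single state becomes packed; a packed state becomes the single state
    that makes the literal true ( Fig. 11 )."""
    var, positive = lit
    if assignment[var] == "P":
        assignment[var] = "T" if positive else "F"   # packed -> literal true
    else:
        assignment[var] = "P"                        # single -> packed
```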

5. PSSP-1

PSSP-1 is a randomized algorithm and is very similar to Schöning's algorithm. In this algorithm, after each restart we have a bounded number of steps. In each step we choose a clause uniformly at random, and if it is an unsatisfied clause ( a clause that has a conflict and all of whose literals are false or packed, meaning all the states it cancels are on ), then we select this clause; if the priority of some literals is less than the others, we select one of the lowest-priority literals uniformly at random, and if the priorities of all are equal, we select one literal out of all of them uniformly at random ( please note that in the case that only one literal has the lowest priority, we have only one selection and we simply choose it ) ( Fig. 3-6 ).
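The steps above can be put together as a self-contained sketch of PSSP-1.1 ( our own encoding: states 'T'/'F'/'P', literals as (variable, positive) pairs; the restart and step counts are illustrative parameters, not the paper's exact limits ):

```python
import random

PRIORITY = {"T": 1.0, "F": 1.0, "P": 0.5}

def lit_value(lit, a):
    """Value of a literal: 'P' if its variable is packed, else the
    ( possibly negated ) single state."""
    v, pos = lit
    if a[v] == "P":
        return "P"
    return a[v] if pos else ("T" if a[v] == "F" else "F")

def pssp1(clauses, n, restarts, steps):
    """Sketch of PSSP-1.1: pick a clause uniformly at random; if it is
    unsatisfied, do a LocalChange on one lowest-priority literal."""
    for _ in range(restarts):
        a = ["P"] * n                        # start with every variable packed
        for _ in range(steps):
            c = random.choice(clauses)
            if any(lit_value(l, a) == "T" for l in c):
                continue                     # clause already satisfied
            low = min(PRIORITY[a[v]] for v, _ in c)
            v, pos = random.choice([l for l in c if PRIORITY[a[l[0]]] == low])
            if a[v] == "P":
                a[v] = "T" if pos else "F"   # LocalChange: packed -> single
            else:
                a[v] = "P"                   # LocalChange: single -> packed
            if all(any(lit_value(l, a) == "T" for l in cl) for cl in clauses):
                return a                     # no conflict left: converged
    return None
```

On termination, any variable still in 'P' may be fixed to either value, as discussed in Section 1.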

Algorithm 3.1's listing ( SCHÖNING'S ):

1- Repeat these commands until convergence:
2-   Select an initial assignment a ∈ { T, F }ⁿ uniformly at random
3-   Repeat n times:
4-     Let C be an unsatisfied clause
5-     Pick one literal l of the three literals in C uniformly at random
6-     Flip the value of l
7-   Check the correctness for termination

Algorithm 5.1. PSSP-1.1


In the first row of Algorithm 5.1 we considered a restart; whereas in each restart we have a polynomial number of repeats, the total time that we considered for this algorithm is polynomial, and along the research the algorithm was able to solve almost all instances within this limitation.

One issue is that this algorithm has overhead, because we first choose a clause uniformly at random and then check whether this clause is unsatisfied. We could instead select the clause from the list of unsatisfied clauses uniformly at random, but this approach also has overhead ( because we must store the list of unsatisfied clauses and maintain it somehow ), and in the case of this algorithm that overhead is bigger than the approach used in PSSP-1 here, for random generated instances. But at the end of the research it turned out that this approach is not suitable for hard benchmarks like SATLIB UF20 – UF250; thus we propose here the second algorithm:

Algorithm 5.2. PSSP-1.2

6. Two modern versions of Schöning's algorithm

Here we introduce two modern versions of Schöning's algorithm, which we call Break-Only algorithms ( [19] ), and in the next section we compare them with PSSP-1. These algorithms even beat the algorithms of the winners of SAT Competition 2011 ( see [19] ).

These algorithms are exactly Schöning's 1999 algorithm ( [18] ), i.e. multi-start random walk, but with two differences:

1- Choosing an unsatisfied clause is not arbitrary: the algorithm chooses one unsatisfied clause from the list of all unsatisfied clauses uniformly at random.
2- When the algorithm chooses a clause, the variable to flip is not selected uniformly at random: the algorithm calculates a probability distribution for this selection.

Algorithm 5.1's listing ( PSSP-1.1 ):

1- Repeat until convergence:
2-   Select an initial assignment a ∈ { P }ⁿ
3-   Repeat n times:
4-     Choose a clause C uniformly at random
5-     If C is unsatisfied then:
6-       Pick one literal l among the lowest-priority literals in C uniformly at random
7-       Do a LocalChange for l
8-   Check the correctness

Algorithm 5.2's listing ( PSSP-1.2 ):

1- Repeat until convergence:
2-   Select an initial assignment a ∈ { P }ⁿ
3-   Repeat n times:
4-     Choose a clause C from the list of unsatisfied clauses uniformly at random
5-     Pick one literal l among the lowest-priority literals in C uniformly at random
6-     Do a LocalChange for l
7-   Check the correctness


When the algorithm chooses a clause, it first calculates a function f for each variable. Assume that the variables of the clause are x₁, x₂, x₃; the algorithm calculates f(x₁), f(x₂), f(x₃). The probability distribution for selecting the variable to flip is obtained from:

p( xᵢ ) = f( xᵢ ) / ( f( x₁ ) + f( x₂ ) + f( x₃ ) )

For calculating this function for a variable x, the algorithm can use two parameters, make(x) and break(x). The parameter make(x) is simply the number of clauses that will become satisfied after flipping this variable, and break(x) is simply the number of clauses that will become unsatisfied after flipping this variable. In [19] the authors showed that the best formulas are based only on break(x), so that we omit calculating make(x), which is an overhead. In [19], algorithms based only on break(x) had better performance relative to algorithms based on both parameters, and even relative to the winners of SAT Competition 2011.

The first algorithm of [19] is Break-Only-Exp. This algorithm calculates the function for variable x by the formula:

f( x ) = c_b^( −break(x) )

The second algorithm from [19] is Break-Only-Poly. This algorithm calculates the function for variable x by the formula:

f( x ) = ( ε + break(x) )^( −c_b )

Based on [19], the best value of c_b for the first algorithm is 2.5 and for the second is 2.3 ( for random generated instances ).
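The break-only selection can be sketched as follows ( a naive implementation: break(x) is computed by a scan over all clauses, and ε = 1 is an assumed smoothing constant; [19] tunes it separately ):

```python
import random

def break_count(var, a, clauses):
    """break(var): number of clauses that become unsatisfied when var is
    flipped ( a is a list of booleans; clauses hold (variable, positive) pairs )."""
    b = 0
    for c in clauses:
        sat_now = any(a[v] == pos for v, pos in c)
        a[var] = not a[var]
        sat_after = any(a[v] == pos for v, pos in c)
        a[var] = not a[var]                  # restore the assignment
        if sat_now and not sat_after:
            b += 1
    return b

def pick_break_only_poly(clause, a, clauses, cb=2.3, eps=1.0):
    """Break-Only-Poly selection: f(x) = (eps + break(x)) ** (-cb), and the
    variable is chosen with probability proportional to f."""
    variables = [v for v, _ in clause]
    f = [(eps + break_count(v, a, clauses)) ** (-cb) for v in variables]
    return random.choices(variables, weights=f)[0]
```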

However, choosing a clause from the list of unsatisfied clauses and calculating the break(x) parameter have overhead.

The simple method for implementing these algorithms is to first find all unsatisfied clauses by a review of all clauses and then select one of them uniformly at random, and to compute the break(x) parameter for a variable, again by a review of all clauses. In another implementation we used the method of WalkSat43 [20], but for these algorithms it leads to worse overhead relative to the simple method. Thus we used the original implementation of the authors of [19]. Their method, for the case of 3-SAT, maintains the list of unsatisfied clauses along the computation, but calculates the break(x) parameter directly whenever it is needed. This method was the best.

7. Experimental Results

Before proposing the experimental results, let us first propose a hard instance, namely Worse Case 4 ( see [1] ), for the 3-SAT problem, which seems to be hard to solve by Packed Computation algorithms.

Worse Case 4:

To produce a Worse Case 4 instance, we first assume a global solution chosen uniformly at random; based on this assumption, every variable has a state G that is in the global solution and a state N that is not. Also, we select 3 of the variables uniformly at random and call them X, Y and Z. For the clauses that are exactly on X, Y and Z, we produce a conflict for every case except the case in which all three states are in the global solution ( NNN, NNG, NGG, … but not GGG ). For the other clauses, which are not exactly on X, Y and Z, we produce all the clauses in which exactly one or exactly two of the states are in the global solution, but we do not produce the clauses in which none or exactly three of the states are in the global solution ( GNN, GGN, … but not GGG and NNN ).
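A generator for such instances can be sketched as follows ( our own encoding: the hidden global solution g maps state 'G' of a variable to its value in g, and each canceled case is emitted as the unique 3-clause falsified exactly by that case ):

```python
import itertools
import random

def worse_case_4(n):
    """Sketch of the Worse Case 4 generator. By construction, the hidden
    global solution g satisfies every produced clause."""
    g = [random.choice([True, False]) for _ in range(n)]
    special = set(random.sample(range(n), 3))    # the triple called X, Y, Z
    clauses = []
    for triple in itertools.combinations(range(n), 3):
        for pattern in itertools.product("GN", repeat=3):
            globals_in_case = pattern.count("G")
            if set(triple) == special:
                forbid = globals_in_case < 3     # every case except GGG
            else:
                forbid = globals_in_case in (1, 2)
            if forbid:
                # value of v in the forbidden case, and the literal it falsifies
                clause = [(v, not (g[v] if s == "G" else not g[v]))
                          for v, s in zip(triple, pattern)]
                clauses.append(clause)
    return clauses, g
```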

7.1. An analysis on the runtime results

In this section, complexity tables of the algorithm in practice will be proposed. We measure complexity based on the number of flips. We do these experiments on an increasing sequence of sizes n₁, n₂, …, n₁₀ ( as in Tables 1-4 ). These experiments give us a sequence of numbers of flips ( average case or maximum case ) T₁, T₂, …, T₁₀.

It is not clear whether PSSP-1 has polynomial or exponential complexity, but here we assume a polynomial formula for the complexity. In [1] it was mathematically analyzed that if we assume a polynomial function for estimating an exponential function, then either the deviation must be very big or the base of the exponential must be near to 1 ( unity ).

Let us assume that the time complexity of the system is of the form c · n^α. Then for two consecutive sizes we have:

Tᵢ = c · nᵢ^(αᵢ) ,  Tᵢ₊₁ = c · nᵢ₊₁^(αᵢ)

αᵢ = ln( Tᵢ₊₁ / Tᵢ ) / ln( nᵢ₊₁ / nᵢ )

Thus we obtain a sequence of exponents α₁, …, α₉. We can find the estimated exponent by this formula:

ᾱ = ( 1 / 9 ) Σᵢ αᵢ

This gives us the estimated complexity, which we can show as n^ᾱ. But we can also calculate a deviation for it. The deviation shows how much the practical exponents deviate from this estimation:

DEV = ( 1 / 9 ) Σᵢ | αᵢ − ᾱ |
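The estimation procedure can be sketched as follows ( the mean absolute deviation used here is our assumption for the deviation formula ), using the AVG column of Table 1 as an example:

```python
import math

def estimate_exponent(sizes, times):
    """Fit T(n) ~ c * n^alpha from consecutive size pairs, then report the
    average exponent and the mean absolute deviation of the pairwise exponents."""
    alphas = [math.log(times[i + 1] / times[i]) / math.log(sizes[i + 1] / sizes[i])
              for i in range(len(sizes) - 1)]
    avg = sum(alphas) / len(alphas)
    dev = sum(abs(x - avg) for x in alphas) / len(alphas)
    return avg, dev

# The AVG column of Table 1 ( clause density 0.01 ):
sizes = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500]
times = [384.8, 680.24, 1037.02, 1357.18, 1655.02,
         1999.3, 2375.26, 2656.16, 3008.96, 3378.1]
avg, dev = estimate_exponent(sizes, times)
```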

Tables 1 to 3 show the experimental results for the practical complexity of the proposed algorithm PSSP-1.1. From the tables it is obvious that the complexity of the packed algorithm has polynomial growth for random generated instances ( note that we only tested random generated instances and do not speak about worst cases ). Table 4 shows the complexity of PSSP-1.1 on Worse Case 4.

PSSP-1 Density of clauses 0.01 Number of tests = 100

n AVG WRS Success n AVG WRS Success

50 384.8 1026 100% 300 1999.3 2932 100%

100 680.24 1452 100% 350 2375.26 3244 100%

150 1037.02 1748 100% 400 2656.16 3570 100%

200 1357.18 2134 100% 450 3008.96 4704 100%

250 1655.02 2636 100% 500 3378.1 4636 100%

AVG CPX = … , AVG DEV = 0.242 ; WRS CPX = … , WRS DEV = 0.413

(Table. 1) Practical time complexity for PSSP-1 on 3-SAT instances with clause density 0.01.


PSSP-1 Density of clauses 0.1 Number of tests = 100

n AVG WRS Success n AVG WRS Success

35 230.24 745 100% 210 1396.86 2322 100%

70 489.34 1152 100% 245 1678.36 3243 100%

105 748.38 1659 100% 280 1894.92 2884 100%

140 983.62 1870 100% 315 2128.14 3901 100%

175 1223.82 2195 100% 350 2385.66 3768 100%

AVG CPX = … , AVG DEV = 0.341 ; WRS CPX = … , WRS DEV = 0.87

(Table. 2) Practical time complexity for PSSP-1 on 3-SAT instances with clause density 0.1

PSSP-1 Density of clauses 0.5 Number of tests = 100

n AVG WRS Success n AVG WRS Success

20 141.96 488 100% 120 806.26 1560 100%

40 283.66 902 100% 140 973.9 1720 100%

60 430.12 1228 100% 160 1088.76 2316 100%

80 572.38 1514 100% 180 1254.5 2114 100%

100 676.28 1104 100% 200 1330.6 2222 100%

AVG CPX = … , AVG DEV = 0.332 ; WRS CPX = … , WRS DEV = 1.043

(Table. 3) Practical time complexity for PSSP-1 on 3-SAT instances with clause density 0.5

PSSP-1 Worse Case 4 Number of tests = 100

n AVG WRS Success n AVG WRS Success

10 94.27 428 100% 60 1173.86 6368 100%

20 271 1092 100% 70 1657.42 7898 100%

30 541.62 2472 100% 80 1853.12 10836 100%

40 834.06 3252 100% 90 2437.42 8256 100%

50 1063.12 4004 100% 100 2348.56 7676 100%

AVG CPX = … , AVG DEV = 0.831 ; WRS CPX = … , WRS DEV = 1.265

(Table. 4) Practical time complexity for PSSP-1 on random instances of type worse case 4

7.2. SATLIB benchmarks

We also tested the algorithm on the benchmarks that are available on the SATLIB website. We restricted the algorithm to a polynomial number of restarts, and within this polynomial limitation the algorithm was able to solve almost all SATLIB benchmarks. We only tested the benchmark families that are all satisfiable, and show that the algorithm is able to solve almost all of them within this polynomial limitation. The whole time of these experiments was only several hours ( Table 5 ). The number of successes of algorithm PSSP-1.1 on uf225-960 was different for the .Net and C++ implementations, even after repeating the experiments. The author can explain why this may happen. In [21], Stephen A. Cook states that it is not clear that there is a source of random numbers in nature. Thus, random functions in programming languages are only very complicated mathematical functions. In fact, the random function of .Net is more random relative to the random function of C++, as the random function of .Net depends on time and some dynamic properties of the system, while the random function of C++ is only a static sequence.

Instance ( all satisfiable )                                          Successes                  AVG            MAX
uf20-91: 20 variables, 91 clauses, 1000 instances                     1000/1000                  12042.76       141361
uf50-218: 50 variables, 218 clauses, 1000 instances                   1000/1000                  203661.4       5628801
uf75-325: 75 variables, 325 clauses, 100 instances                    100/100                    923551         16479001
uf100-430: 100 variables, 430 clauses, 1000 instances                 1000/1000                  2956897.4      135569601
uf125-538: 125 variables, 538 clauses, 100 instances                  100/100                    6394246        47516501
uf150-645: 150 variables, 645 clauses, 100 instances                  100/100                    17705617       295828801
uf175-753: 175 variables, 753 clauses, 100 instances                  100/100                    55255610.04    1058584801
uf200-860: 200 variables, 860 clauses, 100 instances                  99/100                     53437009.16    919295201
uf225-960: 225 variables, 960 clauses, 100 instances                  C#: 100/100, C++: 98/100   155308604.14   1969192801
uf250-1065: 250 variables, 1065 clauses, 100 instances                99/100                     181326891.02   1761920705
RTI_k3_n100_m429: 100 variables, 429 clauses, 500 instances           500/500                    2620933.8      120121201
BMS_k3_n100_m429: 100 variables, clause count varies, 500 instances   500/500                    15689771.4     417664001
CBS_k3_n100_m403_b10: 100 variables, 403 clauses, backbone 10, 1000 instances   1000/1000        730456.2       16711201
CBS_k3_n100_m418_b70: 100 variables, 418 clauses, backbone 70, 1000 instances   1000/1000        1957353.8      59223601
flat50-115: 50 vertices, 115 edges, 1000 instances                    1000/1000                  11241223.2     199939801
flat75-180: 75 vertices, 180 edges, 100 instances                     100/100                    110029135.12   1141353001

(Table. 5) Number of successes for solving the SATLIB benchmarks by PSSP-1 with polynomial limitations

7.3. Comparing with Schöning algorithm

In this section we compare the complexity of Schöning's algorithm and the PSSP-1 algorithm based on "number of flips before convergence" versus "number of variables". The instances used for these experiments were satisfiable random generated instances. At each size we tested 100 instances, and the instances were the same for Schöning's algorithm and the PSSP-1 algorithm. We measured the average and worst case complexity for both algorithms. In each diagram we considered a constant density, which is the probability of existence of a clause when we are generating random instances.

Fig. 12-14 show these experiments on densities 0.1, 0.5 and 0.01, respectively. The diagrams of Schöning's algorithm are magenta ( worst case and average ) and the diagrams of PSSP-1 are blue ( worst case and average ). In each pair, the upper diagram is the worst case and the lower one is the average practical complexity. Observing the diagrams, it is obvious that the performance of the PSSP-1 algorithm is much better than Schöning's algorithm in practice. These diagrams suggest that it is probable that these packed algorithms are theoretically polynomial.


Fig. 12. Comparison of PSSP-1 and Schöning algorithms in clause density 0.1

Fig. 13. Comparison of PSSP-1 and Schöning algorithms in clause density 0.5


Fig. 14. Comparison of PSSP-1 and Schöning algorithms in clause density 0.01

7.4. Comparison with Break-Only algorithms

In this section we compare PSSP-1.1 with the Break-Only algorithms from [19], which even beat the winners of SAT Competition 2011. The number of flips of these algorithms is fairly less than that of the proposed algorithm PSSP-1. However, choosing an unsatisfied clause uniformly at random and calculating the break(x) parameter have a big but polynomial overhead in these algorithms; thus we compared the algorithms based on real runtime in milliseconds. The system was a Core i5 CPU with 6 gigabytes of RAM, the platform was Visual C++ 2010 and the operating system was Windows 8. Fig. 15-17 show the results of these comparisons in diagrams, where the red square and green diamond diagrams are the two Break-Only algorithms and the blue circle diagram is PSSP-1.1. From the diagrams it is obvious that the performance of PSSP-1.1 is better on random generated instances.


Fig. 15. Comparison of PSSP-1.1 and Break-Only algorithms in clause density 0.1


Fig. 16. Comparison of PSSP-1.1 and Break-Only algorithms in clause density 0.5

Fig. 17. Comparison of PSSP-1.1 and Break-Only algorithms in clause density 0.01

We also compared algorithm PSSP-1.1 with algorithm Break-Only-Poly on UF20 to UF175 of SATLIB. ( Fig. 18 ) shows that PSSP-1.1 is very weak on these benchmarks; thus we implemented an algorithm that chooses the clause directly from the list of unsatisfied clauses uniformly at random, and changed the number of inner loop iterations. The result of comparing this new algorithm is available in ( Fig. 19 ).


Fig. 18. Comparing PSSP-1.1 with Break-Only-Poly on UF20 to UF175


Fig. 19. Comparing PSSP-1.2 with Break-Only-Poly on UF20 to UF175

Finally, we considered a timeout of 10 seconds for the algorithms and compared them on some SATLIB benchmarks. The result of this competition is available in Table 6.

Instance        Successes of PSSP-1.2    Successes of Break-Only-Poly
uf200-860:      99                       100
uf225-960:      98                       100
uf250-1065:     98                       97

(Table. 6) Comparing PSSP-1.2 and Break-Only-Poly with a timeout of 10 seconds.

8. Conclusion

The aim of this work was only to propose Packed Computation. In this article we showed in practice that 3-SAT can be solved directly using Packed Computation with good performance. It seems that solving 3-SAT directly is better than reduction to 3-RSS. The methods may be used directly for solving k-SAT and Constraint Satisfaction Problems in general. In the comparison with Schöning's algorithm, we showed that that algorithm has exponential behavior on our instances, while PSSP-1 solves these instances with a very slowly growing complexity based on the number of flips. We showed that the algorithm has much better performance on random generated instances relative to the Break-Only-Poly and Break-Only-Exp algorithms. However, proving the polynomiality or exponentiality of such Packed Computation algorithms is an open question.

9. Acknowledgements

I thank Dr. Mohammad Reza Sarshar, Dr. Shahin Seyed Mesgary and Sina Mayahi at the University of Karaj, Professor Albert R. Meyer of MIT, and the others who helped me in conducting this research.

10. References

1- An introduction to Packed Computation as a new powerful approach to dealing with NP-Complete problems, Kavosh Havaledarnejad ( under review in Theoretical Computer Science )

2- The Complexity of Theorem-Proving Procedures, Stephen A. Cook, University of Toronto 1971

3- Reducibility among Combinatorial Problems, Richard M. Karp, University of California at Berkeley, 1972

4- Probability and Computing Randomized Algorithms and Probabilistic Analysis, Michael

Mitzenmacher and Eli Upfal, Cambridge

5- Mathematics for Computer Science, Eric Lehman and F Thomson Leighton and Albert R Meyer

6- 21 NP-Hard problems, Jeff Erickson, 2009

7- Introduction to Probability Models Sixth Edition, Sheldon M. Ross, ACADEMIC PRESS San

Diego London Boston New York Sydney Tokyo Toronto


8- Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint Satisfaction, David

Eppstein

9- R. Schuler, U. Schöning, and O. Watanabe. A probabilistic 3-SAT algorithm further improved. In:

Proceedings of the 19th Annual Symposium on Theoretical Aspects of Computer Science (STACS),

192-202, 2002.

10- D. Rolf. 3-SAT ∈ RTIME(1.32971^n). Diploma thesis, Department of Computer Science, Humboldt University Berlin, Germany, 2003.

11- S. Baumer and R. Schuler. Improving a probabilistic 3-SAT algorithm by dynamic search and independent clause pairs. In: Proceedings of the 6th International Conference on Theory and Applications of Satisfiability Testing (SAT), 150-161, 2003.

12- D. Rolf. 3-SAT ∈ RTIME(O(1.32793^n)) - improving randomized local search by initializing strings of 3-clauses. Electronic Colloquium on Computational Complexity (ECCC), 2003.

13- K. Iwama and S. Tamaki. Improved upper bounds for 3-SAT. In: Proceedings of the 15th Annual

ACM-SIAM Symposium on Discrete Algorithms (SODA), 328-328, 2004.

14- R. Paturi, P. Pudlak, M.E. Saks, and F. Zane. An improved exponential-time algorithm for k-SAT.

In: Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science (FOCS),

628-637, 1998.

15- D. Rolf. Derandomization of PPSZ for Unique-k-SAT. In: Proceedings of the 8th International Conference on Theory and Applications of Satisfiability Testing (SAT), 216-225, 2005.

16- R. Paturi, P. Pudlak, M.E. Saks, and F. Zane. An improved exponential-time algorithm for k-SAT.

Journal of the Association for Computing Machinery 52(3): 337-364, 2006.

17- C. H. Papadimitriou: On selecting a satisfying truth assignment. Proc. 32nd FOCS (1991)

163–169.

18- U. Schöning: A probabilistic algorithm for k-SAT and constraint satisfaction problems. Proc. 40th FOCS (1999) 410–414.

19- Balint, Adrian and Schöning, Uwe: Choosing Probability Distributions for Stochastic

Local Search and the Role of Make versus Break, In SAT 2012, 16–29. Springer.

20- Alex Fukunaga, Efficient Implementations of SAT Local Search

21- The P versus NP Problem, Stephen Cook, April 2000. Manuscript prepared for the Clay Mathematics Institute for the Millennium Prize Problems.