KNOWLEDGE REPRESENTATION & REASONING - SAT
SAT Problem Definition
KR with SAT
Tractable Subclasses
DPLL Search Algorithm
Slides by: Florent Madelaine, Roberto Sebastiani, Edmund Clarke, Sharad Malik, Toby Walsh, Thomas Stützle, Kostas Stergiou
Material of lectures on SAT
SAT definitions
Tractable subclasses: Horn-SAT, 2-SAT
CNF
Algorithms for SAT
  DPLL-based: basic chronological backtracking algorithm, branching heuristics, look-ahead (propagation), backjumping and learning
  local search: GSAT, WalkSAT, other enhancements
Applications of SAT: planning as satisfiability, hardware verification
Local Search for SAT
Despite the success of modern DPLL-based solvers on real problems, there are still very large (and hard) instances that are out of their reach.
Some random SAT problems really are hard! Chaff's and the other complete solvers' tricks don't work so well here, especially conflict-directed backjumping and learning.
Local search methods are very successful on a number of such instances:
  they can quickly find models, if models exist
  ...but they cannot prove insolubility if no model exists
k-SAT
Subclass of the SAT problem: exactly k literals in each clause
  (P ∨ Q) ∧ (Q ∨ R) ∧ (R ∨ P) is in 2-SAT
3-SAT is NP-complete; SAT can be reduced to 3-SAT
2-SAT is in P: a linear-time algorithm exists
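The slides only state that a linear-time 2-SAT algorithm exists. As an illustration, here is a sketch of the standard implication-graph construction: each clause (a ∨ b) yields the implications ¬a → b and ¬b → a, and the formula is unsatisfiable exactly when some variable shares a strongly connected component with its negation. The representation (literals as signed integers) and function names are assumptions, not from the slides.

```python
# Sketch of the linear-time 2-SAT algorithm via implication graph + SCCs.
# Literals are signed ints: +v for variable v, -v for its negation.
import sys
from collections import defaultdict

def two_sat(n_vars, clauses):
    """Return a satisfying assignment {var: bool}, or None if UNSAT."""
    graph = defaultdict(list)
    for a, b in clauses:          # clause (a or b) gives two implications
        graph[-a].append(b)       # not a  ->  b
        graph[-b].append(a)       # not b  ->  a

    sys.setrecursionlimit(100000)
    index, low, comp = {}, {}, {}
    stack, on_stack = [], set()
    counter = [0, 0]              # [next DFS index, next SCC id]

    def strongconnect(v):         # Tarjan's SCC algorithm
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:    # v is the root of an SCC
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp[w] = counter[1]
                if w == v:
                    break
            counter[1] += 1

    for v in range(1, n_vars + 1):
        for lit in (v, -v):
            if lit not in index:
                strongconnect(lit)

    assignment = {}
    for v in range(1, n_vars + 1):
        if comp[v] == comp[-v]:
            return None           # v and not-v in one SCC: unsatisfiable
        # Tarjan emits SCCs in reverse topological order, so v is true
        # iff its SCC comes later topologically than that of -v.
        assignment[v] = comp[v] < comp[-v]
    return assignment
```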
Experiments with 3-SAT
Where are the hard 3-SAT problems?
Sample randomly generated 3-SAT instances:
  fix the number of clauses, l, and the number of variables, n
  by definition, each clause has 3 variables
  generate clauses uniformly at random from all possible clauses
Experiments with random 3-SAT
Which are the hard instances? Around l/n ≈ 4.3
What happens with larger problems?
Why are some dots red and others blue?
This is a so-called "phase transition"
Experiments with random 3-SAT
Varying problem size, n
The complexity peak appears to be largely invariant of the algorithm:
  complete algorithms like DPLL
  incomplete methods like local search
What's so special about 4.3?
Experiments with random 3-SAT
The complexity peak coincides with the satisfiability transition:
  l/n < 4.3: problems are under-constrained and SAT
  l/n > 4.3: problems are over-constrained and UNSAT
  l/n ≈ 4.3: problems are on a "knife-edge" between SAT and UNSAT
Phase Transitions
Similar "phase transitions" occur for other NP-hard problems: job shop scheduling, traveling salesperson, exam timetabling, constraint satisfaction problems
Was a "hot research topic" until recently: predict the hardness of a given instance & use hardness to control the search strategy
  but real problems with structure are much different from random ones...
Local Search Methods
Can handle "big" random SAT problems
  can go up to about 10x bigger than systematic solvers! Rather smaller for structured problems (especially if the ratio is ≈ 4.26)
Also handle big structured SAT problems, but lose here to the best systematic solvers
Try hard to find a good solution
  very useful for approximating MAX-SAT, the optimization version of SAT
Not intended to find all solutions, nor to show that there are no solutions (UNSAT)
Local vs. Complete Search on Hard Random Instances
[Figure: runtime comparison of local search and DPLL]
"Hard" means on the phase transition
Problem Formulation for Complete Search
INPUT: some initial state and a set of goal states
BASIC ACTIONS: explore a successor of a visited state
PATH COST: sum of the step costs
SOLUTION: a path from the initial state to one of the goal states through the state space
OPTIMAL SOLUTION: a solution of lowest cost
QUESTION: find an (optimal) solution
Problem Formulation for Local Search
INPUT: some initial state
BASIC ACTIONS: move from the current state to a successor state
EVALUATION FUNCTION: gives the score of a state
SOLUTION: the state with the highest score
Local Search Techniques
Typically ignore large portions of the search space; based on a complete initial formulation of the problem
  deterministic: (steepest) hill climbing (greedy local search), local beam search, tabu search
  stochastic: stochastic hill climbing, simulated annealing, genetic algorithms, ant colony optimisation
They have one important advantage: they work on large instances because they only keep track of a few states.
Steepest Hill Climbing
start from an initial state
while there is a better successor do
  move to the best successor
end while
This method has one important drawback: it may return a bad solution, because it can get stuck in a local optimum, and it is very dependent on the initial state.
  A local optimum is a state with no successor better than it, although other, better states exist elsewhere.
Problems with Hill Climbing
Foothills / local optima: no neighbor is better, but we are not at the global optimum (maze: we may have to move AWAY from the goal to find the (best) solution)
Plateaus: all neighbors look the same (8-puzzle: perhaps no action will change the # of tiles out of place)
Ridges: improvement only in a narrow direction; suppose there is no gain going south or east, but a big win going south-east
Ignorance of the peak: am I done?
Improving the Hill Climbing Procedure
Use random restarts when stuck: start again from another initial state
Allow sideways moves when stuck: move to successor states worth the same as the current one
Allow some limited memory: avoid states that will get you stuck (tabu search)
Enlarge the neighbourhood: change the successor function to generate more states
Allow transitions to worse states when stuck: allow moves to successor states worse than the current one
Local Search for SAT
Choose a random initial state I, i.e. a random assignment of 0 or 1 to all variables.
If I is not a model, repeatedly choose a variable p and change its value in I (flip the variable); flip(I,p) determines the successor state.
The flipped variables are chosen using heuristics, or randomly, or both.
  The evaluation function is usually the number of satisfied clauses in a state.
  Most algorithms use random restarts if no solution has been found after a fixed number of search steps.
Several instantiations of this general framework have been proposed.
Definitions
Positive/negative/net gain: given a candidate variable assignment T for CNF formula F, let B0 be the total # of clauses currently unsatisfied in F. Let T' be the assignment after variable V is flipped, and let B1 be the total # of clauses unsatisfied under T'.
  The net gain of V is B0 - B1.
  The negative gain of V is the # of clauses satisfied in T but unsatisfied in T'.
  The positive gain of V is the # of clauses unsatisfied in T but satisfied in T'.
Variable age: the age of a variable is the # of flips since it was last flipped.
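The three gain quantities defined above can be computed directly. The following sketch assumes clauses are lists of signed integers and an assignment is a dict from variable to True/False; this representation is an assumption, not fixed by the slides.

```python
# Illustrative computation of positive, negative, and net gain of a flip.

def is_sat(clause, assignment):
    """A clause is satisfied if at least one of its literals is true."""
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def gains(formula, assignment, v):
    """Return (positive_gain, negative_gain, net_gain) of flipping v."""
    flipped = dict(assignment)
    flipped[v] = not flipped[v]
    pos = neg = 0
    for clause in formula:
        before, after = is_sat(clause, assignment), is_sat(clause, flipped)
        if not before and after:
            pos += 1          # unsatisfied -> satisfied
        elif before and not after:
            neg += 1          # satisfied -> unsatisfied
    return pos, neg, pos - neg
```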
GSAT & GWSAT
The GSAT algorithm (Selman, Mitchell, Levesque, 1992)
  search initialisation: randomly chosen assignment
  in each search step, flip the variable which gives the maximal increase in the number of satisfied clauses (maximum net gain); ties are broken randomly
  if no model is found after maxSteps steps, restart from another randomly chosen assignment
HSAT (Gent and Walsh, 1993): same as GSAT, but break ties in favor of the maximum-age variable
The GWSAT algorithm (Selman, Kautz, Cohen, 1994)
  search initialisation: randomly chosen assignment
  Random Walk step: randomly choose a currently unsatisfied clause and a literal in that clause; flip the corresponding variable to satisfy the clause
  GWSAT step: choose probabilistically between a GSAT step and a Random Walk step, with "walk probability" (noise) π
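A minimal sketch of the GSAT loop just described: random initialisation, flip the variable that minimises the number of unsatisfied clauses (equivalently, maximal net gain), random tie-breaking, and random restarts. Function names and the formula representation are illustrative assumptions.

```python
# GSAT sketch: clauses are lists of signed ints, assignments are dicts.
import random

def n_unsat(formula, asg):
    """Number of clauses with no true literal under asg."""
    return sum(not any((l > 0) == asg[abs(l)] for l in c) for c in formula)

def gsat(formula, n_vars, max_steps=1000, max_tries=10, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):                      # random restarts
        asg = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_steps):
            if n_unsat(formula, asg) == 0:
                return asg                          # model found
            scores = []
            for v in range(1, n_vars + 1):          # score every flip
                asg[v] = not asg[v]
                scores.append((n_unsat(formula, asg), v))
                asg[v] = not asg[v]
            best = min(s for s, _ in scores)        # maximal net gain
            v = rng.choice([v for s, v in scores if s == best])
            asg[v] = not asg[v]                     # ties broken randomly
    return None
```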
WalkSAT
The WalkSAT algorithm family
  search initialisation: randomly chosen assignment
  search step:
    1) randomly select a currently unsatisfied clause
    2) select a literal from this clause according to a heuristic h
  if no model is found after maxSteps steps, restart from a randomly chosen assignment
Original WalkSAT (Selman, Kautz, Cohen, 1994)
  pick a random unsatisfied clause BC
  if any variable in BC has a negative gain of 0, randomly select one
  otherwise, with probability p select a random variable from BC to flip, and with probability (1-p) select the variable in BC with minimal negative gain (break ties randomly)
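One step of the original WalkSAT rule above can be sketched as follows: take a zero-negative-gain ("freebie") flip when one exists, otherwise flip a random variable of the clause with probability p and a minimal-breakage variable with probability 1-p. The data representation and function names are assumptions.

```python
# Sketch of one step of the original WalkSAT heuristic.
import random

def sat_clause(c, asg):
    return any((l > 0) == asg[abs(l)] for l in c)

def neg_gain(formula, asg, v):
    """# of clauses satisfied now but unsatisfied after flipping v."""
    before = [sat_clause(c, asg) for c in formula]
    asg[v] = not asg[v]
    after = [sat_clause(c, asg) for c in formula]
    asg[v] = not asg[v]                      # undo the trial flip
    return sum(b and not a for b, a in zip(before, after))

def walksat_step(formula, asg, p, rng):
    unsat = [c for c in formula if not sat_clause(c, asg)]
    if not unsat:
        return asg                           # already a model
    clause = rng.choice(unsat)
    vars_in_c = [abs(l) for l in clause]
    g = {v: neg_gain(formula, asg, v) for v in vars_in_c}
    freebies = [v for v in vars_in_c if g[v] == 0]
    if freebies:
        v = rng.choice(freebies)             # zero-damage flip
    elif rng.random() < p:
        v = rng.choice(vars_in_c)            # random walk
    else:
        best = min(g.values())               # minimal breakage
        v = rng.choice([u for u in vars_in_c if g[u] == best])
    asg[v] = not asg[v]
    return asg
```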
WalkSAT
Beating a correct, optimized implementation of WalkSAT is difficult.
Many variants of "WalkSAT" appear in the literature. Many of them are incorrect interpretations or implementations of [Selman, Kautz, Cohen] WalkSAT. Most of them perform significantly worse; none of them performs better.
Positive gain: # of clauses that were unsatisfied and become satisfied
Negative gain: # of clauses that were satisfied and become unsatisfied
Variations of WalkSAT
SKC: select the variable such that the minimal number of currently satisfied clauses become unsatisfied by flipping; if a "zero-damage" flip is possible, always take it; otherwise, with probability wp a variable is selected randomly.
Tabu: select the variable that maximises the increase in the total number of satisfied clauses when flipped; use a constant-length tabu list for flipped variables and random tie-breaking.
Tabu search prevents returning quickly to the same state. To implement: keep a fixed-length queue (tabu list); add the most recent step to the queue and drop the oldest; never make a step that is on the current tabu list.
Example, without tabu:
  flip v1, v2, v4, v2, v10, v11, v1, v10, v3, ...
with tabu (length 4), a possible sequence:
  flip v1, v2, v4, v10, v11, v1, v3, ...
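The fixed-length tabu list described above can be kept as a bounded queue; a minimal sketch (the class name is illustrative):

```python
# Fixed-length tabu list: forbids re-flipping the most recent variables.
from collections import deque

class TabuList:
    def __init__(self, length):
        self.length = length
        self.queue = deque()

    def add(self, var):
        self.queue.append(var)          # record the most recent step
        if len(self.queue) > self.length:
            self.queue.popleft()        # drop the oldest step

    def allowed(self, var):
        return var not in self.queue    # never repeat a tabu step
```

With length 4, the slide's example sequence behaves as shown: after flipping v1, v2, v4, the repeat flip of v2 is forbidden, and v1 becomes available again once four newer flips have displaced it.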
SAT Local Search: Novelty Family
Novelty [McAllester, Selman, Kautz 1997]
  pick a random unsatisfied clause BC
  select the variable v in BC with maximal net gain, unless v has the minimal age in BC
  in the latter case, select v with probability (1-p); otherwise, flip the variable with the 2nd-highest net gain
Novelty+ [Hoos and Stützle 2000]: same as Novelty, but after BC is selected, with probability pw select a random variable in BC; otherwise continue with Novelty
R-Novelty [McAllester, Selman, Kautz 1997] and R-Novelty+ [Hoos and Stützle 2000]: similar to Novelty/Novelty+, but more complex
Adaptive Novelty+ (current winner)
With probability 1% (the "+" part):
  choose randomly among all variables that appear in at least one unsatisfied clause
Else be greedier (the other 99%):
  randomly choose an unsatisfied clause C
    flipping any variable in C will at least fix C
  choose the most-improving variable in C
  ...except: with probability p, ignore the most recently flipped variable in C (the "novelty" part)
The "adaptive" part:
  if we improved the # of satisfied clauses, decrease p slightly
  if we haven't had an improvement for a while, increase p
  if we've been searching too long, restart the whole algorithm
Structure of the standard SAT local search variable selection heuristics
Score variables with respect to a gain metric
  WalkSAT uses negative gain; GSAT and Novelty use net gain
Restricted set of candidate variables
  GSAT: any variable in the formula; WalkSAT/Novelty: pick a variable from a single random unsatisfied clause
Ranking of variables with respect to the scoring metric
  greedy choices are considered by all heuristics; Novelty variants also consider the 2nd-best variable
Variable age: prevents cycles and forces exploration
  Novelty variants, HSAT, WalkSAT+tabu
Some observations on the variable selection heuristics
It is difficult to determine a priori how effective any given heuristic is. Many heuristics look similar to WalkSAT, but performance varies significantly (cf. [McAllester, Selman, Kautz 1997]). Empirical evaluation is necessary to evaluate complex heuristics.
Previous efforts to discover new heuristics involved significant experimentation. Humans are skilled at identifying primitives, but find it difficult and time-consuming to combine the primitives into effective composite heuristics.
Simulated Annealing (popular general local search technique – often very effective, rather slow)
Simulated Annealing is an SLS method that tries to avoid local optima by probabilistically accepting moves to worse solutions.
Simulated Annealing was one of the first SLS methods and is now a "mature" SLS method:
  many applications available (ca. 1,000 papers)
  (strong) convergence results
  simple to implement
  inspired by an analogy to physical annealing
Simulated Annealing description
Given a solution s, generate some neighboring solution s' ∈ N(s) by picking a variable at random and flipping it.
if f(s') ≤ f(s) then accept s'
if f(s') > f(s) then a probabilistic yes/no decision is made
  if the outcome is yes, then s' replaces s
  if the outcome is no, then s is kept
The probabilistic decision depends on
  the difference f(s') - f(s)
  a control parameter T, called temperature
Simulated Annealing description
Pick a variable at random.
If flipping it improves the assignment: do it.
Else flip anyway with probability p = e^(-Δ/T), where Δ = f(s') - f(s) (i.e. the damage to the score).
  What is p for Δ = 0? For large Δ?
T = "temperature"
  What is p as T tends to infinity?
  Higher T = more random, non-greedy exploration
  As we run, decrease T from a high temperature to near 0.
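The acceptance rule above can be written directly; `accept` is an illustrative name, and f is assumed to be minimised (e.g. the number of unsatisfied clauses). Note it answers the slide's questions: p = 1 for Δ = 0, p → 0 for large Δ, and p → 1 as T → ∞.

```python
# Metropolis-style acceptance: accept an improving flip always,
# a worsening one with probability exp(-delta/T).
import math
import random

def accept(delta, T, rng):
    """delta = f(s') - f(s); lower f is better."""
    if delta <= 0:
        return True                       # improving (or equal) move
    return rng.random() < math.exp(-delta / T)
```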
Simulated Annealing Algorithm
Simulated Annealing general issues
generation of neighboring solutions
  often: generate a random neighboring solution s' ∈ N(s)
  possibly better: systematically generate neighboring solutions => at least we are sure to sample the whole neighbourhood if no move is accepted
acceptance criterion
  often used: Metropolis acceptance criterion
  if f(s') ≤ f(s) then accept s'
  if f(s') > f(s) then accept it with probability e^(-Δ/T), where Δ = f(s') - f(s)
Simulated Annealing cooling schedule
open questions: how to define the control parameter? how to define the (inner and outer) loop criteria?
cooling schedule:
  initial temperature T0 (example: base it on some statistics about cost values, acceptance ratios, etc.)
  temperature function: how to change the temperature (example: geometric cooling, T_{n+1} = a·T_n, n = 0, 1, ..., with 0 < a < 1)
  number of steps at each temperature (inner loop criterion) (example: a multiple of the neighbourhood size)
  termination criterion (outer loop criterion) (example: no improvement of s_best for a number of temperature values and an acceptance rate below some critical value)
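The geometric cooling example above, T_{n+1} = a·T_n, as a small helper (the function name is illustrative):

```python
# Geometric cooling schedule: each temperature is a fixed fraction a
# of the previous one, starting from T0.
def geometric_schedule(T0, a, n_temps):
    T, temps = T0, []
    for _ in range(n_temps):
        temps.append(T)
        T *= a          # T_{n+1} = a * T_n, 0 < a < 1
    return temps
```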
Simulated Annealing theoretical results
If SA can be run long enough, with an infinite number of temperature values and an infinite number of steps at each temperature value, one can be sure to be at an optimal solution at the end.
  however, it is not clear what "end" means
In addition, when run at a single temperature level long enough, we can be sure to find the optimum solution.
  hence, an optimal solution can be found without annealing: one only needs to store the best solution found and return it at the end
BUT: we need n → ∞ to guarantee optimality
Simulated Annealing theoretical results
From the theoretical proofs we can also conclude that better solutions become more likely. This gives evidence that after
  a sufficient number of temperature values
  a sufficient number of steps at each temperature
chances are high to have seen a good solution; however, it is unclear what "sufficient" means.
Evolutionary algorithms (another popular general technique)
Many local searches at once:
  consider 20 random initial assignments
  each one gets to try flipping 3 different variables ("reproduction with mutation")
  now we have 60 assignments
  keep the best 20 ("natural selection"), and continue
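The population step described above can be sketched as follows, using the number of satisfied clauses as the fitness function (an assumption; the slide does not fix one). Each assignment spawns mutants differing in one flipped variable, and only the fittest survive.

```python
# One generation of the "many local searches at once" scheme:
# every assignment spawns 3 one-flip mutants, the best ones survive.
import random

def n_satisfied(formula, asg):
    return sum(any((l > 0) == asg[abs(l)] for l in c) for c in formula)

def evolve_step(formula, population, n_vars, rng):
    offspring = []
    for asg in population:
        for v in rng.sample(range(1, n_vars + 1), 3):  # 3 different vars
            child = dict(asg)
            child[v] = not child[v]        # reproduction with mutation
            offspring.append(child)
    offspring.sort(key=lambda a: n_satisfied(formula, a), reverse=True)
    return offspring[:len(population)]     # natural selection
```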
Reproduction (another popular general technique – at least for evolutionary algorithms)
[Figure: two parent bit-strings recombined by crossover into two child bit-strings, one of which is then mutated]
Derive each new assignment by somehow combining two old assignments, not just modifying one ("reproduction" or "crossover")
Good idea?
Dynamic Local Search
Dynamic local search is a collective term for a number of approaches that try to escape local optima by iteratively modifying the evaluation function value of solutions.
  a different concept for escaping local optima
  several variants available
  promising results
Dynamic Local Search
guide the local search by a dynamic evaluation function
evaluation function h(s) composed of
  the cost function f(s)
  a penalty function
the penalty function is adapted at computation time to guide the local search
penalties are associated with solution features
related approaches: long-term strategies in tabu search, the noising method, time-varying penalty functions for (strongly) constrained problems, etc.
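One way to realise such a dynamic evaluation function for SAT is to take clauses as the penalised features, as GLS for SAT does later in these slides. The additive update and the λ weighting below are illustrative assumptions, not a fixed definition from the slides.

```python
# Penalty-augmented evaluation function: h(s) = f(s) + lambda * penalties
# of violated features, with clauses as the solution features.

def unsat_clauses(formula, asg):
    return [i for i, c in enumerate(formula)
            if not any((l > 0) == asg[abs(l)] for l in c)]

def h(formula, asg, penalties, lam=1.0):
    """Dynamic evaluation: cost plus penalties of violated clauses."""
    unsat = unsat_clauses(formula, asg)
    return len(unsat) + lam * sum(penalties[i] for i in unsat)

def penalize_local_optimum(formula, asg, penalties):
    """When trapped, additively bump penalties of violated clauses."""
    for i in unsat_clauses(formula, asg):
        penalties[i] += 1
```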
Dynamic Local Search issues
timing of penalty modifications: at every local search step, or only when trapped in a local optimum; long-term strategies for weight decay
strength of penalty modifications: additive vs. multiplicative modifications; amount of penalty modification
focus of penalty modifications: choice of the solution attributes to be punished
Example: Guided Local Search (GLS)
  modifies penalties when trapped in local optima
  variants exist that use occasional penalty decay
  uses additive penalty modifications
  chooses few among the many solution components for punishment
GLS - details
GLS for SAT
uses the GSAT architecture for local search
uses in addition a special tie-breaking criterion that favors flipping the variable that was flipped longest ago
if no improved solution is found in x consecutive iterations, then modify penalties
solution attributes are clauses
when trapped in a local optimum, add penalties to the clauses of maximum utility
  which clauses are these?
GRASP
Greedy Randomized Adaptive Search Procedures (GRASP) is an SLS method that tries to construct a large variety of good initial solutions for a local search algorithm.
  predecessors: semi-greedy heuristics
  tries to combine the advantages of random and greedy solution construction
Greedy construction heuristics
Iteratively construct solutions by choosing one solution component at each construction step
  solution components are rated according to a greedy function
  the best-ranked solution component is added to the current partial solution
examples: Kruskal's algorithm for minimum spanning trees, the greedy heuristic for the TSP
advantage: generate good-quality solutions; local search runs fast and typically finds better solutions than from random initial solutions
disadvantage: do not generate many different solutions; difficult to iterate
Random vs. greedy construction
random construction: high solution-quality variance, low solution quality
greedy construction: good quality, low (or no) variance
goal: exploit the advantages of both
Semi-greedy heuristics
add at each step a solution component that is not necessarily the highest rated
repeat until a full solution is constructed:
  rate solution components according to a greedy function
  put highly rated solution components into a restricted candidate list (RCL)
  choose one element of the RCL randomly and add it to the partial solution
adaptive element: the greedy function depends on the partial solution constructed so far
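The semi-greedy loop above can be sketched with a cardinality-based RCL (keep the top-ranked components); the RCL rule and the `greedy_score(component, partial_solution)` signature are assumptions. Passing the partial solution to the scoring function is what makes the heuristic adaptive.

```python
# Semi-greedy construction: greedy ranking + random choice from the RCL.
import random

def semi_greedy_construct(components, greedy_score, rcl_size, rng):
    solution = []
    remaining = list(components)
    while remaining:
        # adaptive: scores may depend on the partial solution so far
        ranked = sorted(remaining,
                        key=lambda c: greedy_score(c, solution),
                        reverse=True)
        rcl = ranked[:rcl_size]          # restricted candidate list
        choice = rng.choice(rcl)         # randomised greedy choice
        solution.append(choice)
        remaining.remove(choice)
    return solution
```

With `rcl_size=1` this degenerates to pure greedy construction; larger RCLs trade solution quality for diversity.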
Generation of the RCL
GRASP
GRASP tries to capture the advantages of random and greedy solution construction:
  iterate through randomized solution construction, exploiting a greedy probabilistic bias, to construct feasible solutions
  apply local search to improve the constructed solution
  keep track of the best solution found so far and return it at the end
GRASP – local search
local search from random solutions:
  high variance
  best solution quality often better than greedy (if the instances are not too large)
  average solution quality worse than greedy
  local search requires many improvement steps
local search from greedy solutions:
  average solution quality better than random
  local search typically requires only a few improvement steps
  low (or no) variance
GRASP for SAT
Local Search Summary
A surprisingly efficient search technique; many variants proposed
Wide range of applications, not only SAT
Formal properties elusive, especially for randomized local search
  intuitive explanation: search spaces are too large for systematic search anyway...
The area will most likely continue to thrive
Dynamic SAT
The SAT framework is successful in modeling and solving numerous practical problems:
  AI (scheduling and planning, resource allocation, timetabling, temporal reasoning, etc.)
  hardware and software verification, model checking
Some assumptions are made: all components of the problem to solve (variables, clauses) are completely known before modeling and solving, and do not change during or after this process
  that is, the problems are static
Is this realistic?
Example: on-line planning and scheduling.
These assumptions do not hold in uncertain and dynamic environments.
Difficulties with on-line solving
Difficulties in modeling and solving in such environments are due to two facts:
  knowledge about the real world is often incomplete, imprecise, and uncertain
  the real world, and our knowledge about it, may change during or after modeling and solving
These difficulties make the use of the standard SAT framework infeasible.
Travel Management System
A TMS is embedded in a car and manages all features of a long trip: route, stops, rendezvous, refueling, maintenance
  physical system -> car
  user -> car driver
  physical environment -> road, traffic, weather, etc.
  other entities -> hotels, people, garages, other TMSs
Uncertainty may come from the car, the environment, and the other entities.
Changes may occur at any time, from the driver, the car, the environment, and the other entities.
Uncertainty and Change
Present not only in planning and scheduling:
  online computer vision, failure diagnosis, and situation tracking
  computer-aided system design and configuration
  interactive or distributed problem solving: uncertainty about the decisions of the users or of the other entities (agents)
  financial portfolio management: uncertainty about the prices of stocks
  etc.
Requirements in uncertain and dynamic situations
Limit as much as possible the need for repeated solving
  time-consuming and disturbing for the user (as new solutions keep arriving)
Limit as much as possible the changes in the produced solutions when the current one is no longer valid
  important changes are undesirable: imagine completely changing the route because it has started to rain
Requirements in uncertain and dynamic situations
Limit as much as possible the computing time and resources
  often the utility of a solution decreases with the time of its delivery: telling the driver to exit the highway after the exit has been passed is useless
Keep producing consistent and (if possible) optimal solutions
  hard constraints cannot be violated and optimality is always desirable, but this may conflict with the previous requirement...
Dynamic SAT (DSAT)
A DSAT is a sequence of SAT problems, each resulting from the previous one through changes in its definition.
These changes may affect any SAT component:
  addition or removal of variables
  addition or removal of clauses
  changes in existing clauses (addition or removal of literals)
How can we solve such problems efficiently? Best answer: local search!
DSAT Assignment
In this assignment you will implement and experimentally evaluate local search algorithms for DSAT.
The experimental evaluation will be carried out on randomly generated k-SAT problems:
  you will also run a complete solver on the generated instances
  dynamic changes will be implemented through the sequential addition of clauses
You will work in teams of 3:
  all 3 members of a team will be individually examined during the presentation of your work
  the presentation will be sometime in January/February (tbc later)
  there will be a "progress so far" discussion on December 8/9
DSAT Assignment
The assignment consists of the following steps:
1. Build a generator for random k-SAT instances (k ≥ 3)
  This part of the assignment is common to both teams. You must use the same generator so that the algorithms can be fairly compared.
  The generator will take as input the parameters <k, n, l, m>, where n is the number of variables, l is the number of clauses in the initial problem, and m is the number of clauses dynamically added at each step (m can be a percentage).
  Each clause will be "filled" with k variables chosen uniformly at random. After a variable is chosen, it is negated with probability 0.5.
  You must make sure that no variable appears in a clause more than once (in any polarity).
  You must make sure that no clause appears more than once in the set of l clauses.
  You don't need to check for subsumption between the generated clauses.
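A sketch of a generator meeting the requirements of step 1. Rejecting duplicate clauses by canonicalising each clause as a sorted tuple is an implementation choice, not part of the spec, and the sketch assumes l is well below the number of distinct possible clauses (otherwise the rejection loop cannot terminate).

```python
# Random k-SAT generator: k distinct variables per clause, each negated
# with probability 0.5, no duplicate clauses.
import random

def random_ksat(k, n, l, seed=None):
    rng = random.Random(seed)          # store the seed to reproduce runs
    clauses = set()
    while len(clauses) < l:
        variables = rng.sample(range(1, n + 1), k)   # k distinct vars
        clause = tuple(sorted(v if rng.random() < 0.5 else -v
                              for v in variables))   # canonical form
        clauses.add(clause)            # set membership rejects duplicates
    return [list(c) for c in clauses]
```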
DSAT Assignment
The assignment consists of the following steps:
2. Once a problem is generated, check whether it is satisfiable.
  This can be done by running a complete solver (MiniSat, Tinisat, zChaff, ...); if the problem is not satisfiable, throw it away and create a new one.
  NOTE: by setting the parameters n and l properly you can ensure that almost all generated instances are satisfiable. Experimentation is needed to find the proper parameter values. ATTENTION: we don't want very easy problems! But they don't have to be extremely hard.
  If the problem is satisfiable, identify and store the first solution found. You may need to make minimal changes in the solver's code if the solver does not return the solution by default.
DSAT Assignment
The assignment consists of the following steps:
3. Once a satisfiable problem has been generated and a solution has been found, you can dynamically alter it by adding m clauses at random.
  One option, which you can use to create many instances starting from the same initial problem, is to repeat the above process many times. In this way you get many DSAT instances that are all derived from the same basic problem by adding m clauses in one step.
  Another option is to continue adding m clauses for some fixed number x of steps. In this way you get a sequence 1, 2, ..., x of DSAT instances, where instance i is derived from instance i-1 by adding m clauses.
  The whole process can be repeated many times (i.e. for many randomly created initial SAT problems).
  NOTE: if you use the same random seed, then the problems created will be the same. You should store the seeds used so that the two teams can compare their algorithms on the same instances.
DSAT Assignment
The assignment consists of the following steps:
4. Each time a DSAT instance is created, run a local search algorithm using the previously found solution as its starting point.
  The algorithm runs until a new solution is found or a termination condition becomes true (e.g. it reaches the step limit).
  When it terminates, store the last assignment found, whether it is a solution or not!
  NOTE: if you are using the first option for DSAT generation, then the previous problem will always be the initial one, and therefore you will have a solution for that problem. But if you are using the second option, then the local search algorithm may not have found a solution for the previous instance in the sequence. In this case, what is the initial assignment used by the local search algorithm? HINT: you can start from the assignment reached by the previous run, ignoring the fact that it was not an actual solution.
DSAT Assignment
Which local search algorithms will you implement?
  Each team will implement one "simple" and one "complex" local search method.
  "Simple" methods are WSAT and its variants (Novelty, etc.); "complex" methods are dynamic local search (GLS) and GRASP. Simulated Annealing and Genetic Algorithms are not considered. Why?
How will you measure the performance of the algorithms?
  1. Report how many instances are solved and, for each instance that is not solved, the number of conflicting clauses (this is 0 for a solved instance).
  2. For each instance (solved or not), report the distance of the returned assignment from the initial solution. Distance is measured as the number of differing variable assignments; in the case of a sequence of DSATs, report the average distance between consecutive problems.
What about the various parameters of the algorithms?
  Read the corresponding papers carefully to find the proposed parameter values. However, tuning a local search method is largely an experimental process; you will find the "optimal" settings through experiments.
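The distance measure in point 2 is a Hamming distance between assignments; a minimal sketch (function name is illustrative, and both assignments are assumed to be over the same variables):

```python
# Distance between two assignments: the number of variables
# assigned differently (Hamming distance).
def distance(asg_a, asg_b):
    return sum(asg_a[v] != asg_b[v] for v in asg_a)
```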
DSAT Assignment
An important issue in DSAT is finding solutions "close" to the previous ones; local search algorithms for SAT are not concerned with this and thus ignore it.
Question: How will you deal with this? Will you have to modify the evaluation function in some way?
Answer: You don't have to deal with it! It is a research problem that has not been addressed yet. If one of you chooses to do a Master's thesis on SAT, then this is an interesting problem to tackle.
On December 8/9 I expect to hear the two teams' proposals on how to implement the chosen local search methods.
Conclusions
SAT is a very useful problem class.
Theoretical importance
  the prototypical NP-complete problem
  NP-completeness is usually proved by reduction from SAT
Practical value
  can be used to represent and reason with knowledge in many domains
Efficient algorithms
  complete DPLL-based solvers and incomplete local search procedures have been very successful in solving large, hard, real problems