School of Computing
Approximate Counting
Subhasree Basu, Rajiv Ratn Shah, Vu Vinh An, Kiran Yedugundla
School of Computing, National University of Singapore
Agenda
• Background
• The Class #P
• Randomized Approximation Schemes
• The DNF Counting Problem
• Approximating the Permanent
• Summary
Solving and Counting
• Suppose we have a problem ∏ and an instance I
• We check whether I is a YES-instance of ∏
• That usually involves finding a solution to ∏ and checking whether I matches it
Example: suppose we want to find all possible permutations of 3 different items.
Solving and Counting
• When we are trying to count, we instead try to find the number of solutions to instance I of ∏
• Following the same example, in the counting version we want to know how many such permutations exist
Approximate Counting
• Approximate Counting involves two terms: Approximate and Counting
• Counting is getting the number of solutions to a problem
• Approximate because we do not have an exact counting formula for a vast class of problems
Approximately counting Hamilton paths and cycles in dense graphsMartin Dyer, Alan Frieze, and Mark Jerrum, SIAM Journal on Computing 27 (1998), 1262-1272.
The Class #P
• #P is the class of counting problems associated with the NP decision problems
• Formally a problem ∏ belongs to the #P if there is a non-deterministic polynomial time Turing Machine that, for any instance I, has a number of accepting computations that is exactly equal to the number of distinct solutions to instance I
• ∏ is #P-complete if for any problem ∏’ in #P, ∏’ can be reduced to ∏ by a polynomial time Turing machine.
Need for Randomization
• #P-complete problems can be solved exactly in polynomial time only if P = NP
• Hence the need for approximate solutions
• Randomization is one such technique for finding approximate answers to counting problems
Various Applications for Approximate Counting
• DNF counting problem
• Network reliability
• Counting the number of Knapsack solutions
• Approximating the Permanent
• Estimating the volume of a convex body
Polynomial Approximation Scheme
• Let #(I) be the number of distinct solutions for instance I of problem ∏
• Let the approximation algorithm be called A
• It takes an input I and outputs an integer A(I)
• A(I) is supposed to be close to #(I)
Polynomial Approximation Scheme
DEF 1: A Polynomial Approximation Scheme (PAS) for a counting problem is a deterministic algorithm A that takes an input instance I and a real number ε > 0, and in time polynomial in n = |I| produces an output A(I) such that
(1- ε) #(I) < A(I) < (1 + ε) #(I)
DEF 2: A Fully Polynomial Approximation Scheme (FPAS) is a polynomial approximation scheme whose running time is polynomially bounded in both n and 1/ε.
The output A(I) is called an ε-approximation to #(I)
Polynomial Randomized Approximation Scheme
DEF 3: A Polynomial Randomized Approximation Scheme (PRAS) for a counting problem ∏ is a randomized algorithm A that takes an input instance I and a real number ε > 0, and in time polynomial in n = |I| produces an output A(I) such that
Pr[(1- ε)#(I) ≤ A(I) ≤ (1 + ε)#(I)] ≥ ¾
DEF 4: A Fully Polynomial Randomized Approximation Scheme (FPRAS) is a polynomial randomized approximation scheme whose running time is polynomially bounded in both n and 1/ε.
An (ε, δ)-FPRAS
DEF 5: An (ε, δ)-FPRAS for a counting problem ∏ is a fully polynomial randomized approximation scheme that takes an input instance I and computes an ε-approximation to #(I) with probability at least 1 − δ, in time polynomial in n, 1/ε and log(1/δ).
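The log(1/δ) dependence in DEF 5 is usually obtained from a PRAS (DEF 3, success probability ¾) via the "median trick": run the ¾-confidence estimator O(log(1/δ)) times and output the median. A minimal Python sketch; the helper name and exact run count are illustrative, not from the slides:

```python
import math
import statistics

def boost_confidence(estimator, delta):
    """Median trick (illustrative helper): given an estimator that lands in
    the target interval [(1-eps)#(I), (1+eps)#(I)] with probability >= 3/4,
    take the median of O(log(1/delta)) independent runs. By a Chernoff
    bound, a majority of the runs land in the interval, so the median
    does too, with probability >= 1 - delta."""
    runs = 8 * math.ceil(math.log(1.0 / delta)) + 1  # odd, O(log(1/delta))
    return statistics.median(estimator() for _ in range(runs))
```

Since the median lies in the interval whenever more than half the runs do, the failure probability decays exponentially in the number of runs.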
Monte Carlo Method
• It is a wide range of algorithms under one name
• In essence it includes all the techniques that use random numbers to simulate a problem
• It owes its name to a casino in the principality of Monte Carlo
• Events in a casino depend heavily on chance, e.g., a ball falling into a particular slot on a roulette wheel, being dealt useful cards from a randomly shuffled deck, or the dice falling the right way
DNF Counting - Terminologies
• F(X1, X2, …, Xn) is a Boolean formula in DNF
– X1, X2, …, Xn are Boolean variables
– F = C1 ∨ C2 ∨ … ∨ Cm is a disjunction of clauses
– Ci = L1 ∧ L2 ∧ … ∧ Lri is a conjunction of literals
– Li is either a variable Xk or its negation Xk'
e.g. F = (x1 ∧ x3') ∨ (x1 ∧ x2' ∧ x3) ∨ (x2 ∧ x3)
Terminologies contd …
• a = (a1, a2, …, an) is a truth assignment
• a satisfies F if F(a1, a2, …, an) evaluates to 1, i.e. TRUE
• #F is the number of distinct satisfying assignments of F

Clearly we have 0 < #F ≤ 2^n
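For small n, #F and the bound above can be checked by exhaustive enumeration. A brute-force sketch; the signed-literal clause encoding is my own, not from the slides:

```python
from itertools import product

def count_dnf(clauses, n):
    """Brute-force #F for a DNF formula over n Boolean variables.
    Each clause is a list of signed literals: k > 0 means X_k,
    k < 0 means the negation X_k'. (Encoding is illustrative;
    assumes each variable appears at most once per clause.)"""
    count = 0
    for a in product([0, 1], repeat=n):
        # F(a) = 1 iff at least one clause has all its literals true under a
        if any(all(a[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# The example formula F = (x1 ∧ x3') ∨ (x1 ∧ x2' ∧ x3) ∨ (x2 ∧ x3)
F = [[1, -3], [1, -2, 3], [2, 3]]
print(count_dnf(F, 3))  # 5 of the 2^3 = 8 assignments satisfy F
```

Of course, this enumeration is exponential in n, which is exactly why the rest of the deck develops a randomized approximation instead.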
DNF Counting Problem
• The problem at hand is now to compute the value of #F
• It is known to be #P-complete (e.g., #SAT can be reduced to it)
• We will describe an (ε, δ)-FPRAS algorithm for this problem
• The input size is at most nm
• We have to design an approximation scheme with running time polynomial in n, m, 1/ε and log(1/δ)
Some more terminologies
• U is a finite set of known size
• f: U → {0, 1} is a Boolean function over U
• Define G = {u ∈ U | f(u) = 1}, the pre-image of 1

Assume we can sample uniformly at random from U.
We now want to find the size of G, i.e., |G|.
Formulation of Our Problem
• In our formulation, let U = {0, 1}^n, the set of all 2^n truth assignments
• Let f(a) = F(a) for each a ∈ U
• Hence our G is now the set of all satisfying truth assignments for F
Our problem thus reduces to finding the size of G
Monte Carlo method
• Choose N independent samples from U, say u1, u2, …, uN
• Use the value of f on these samples to estimate the probability that a random choice will lie in G
• Define the random variable Yi = 1 if f(ui) = 1, Yi = 0 otherwise
  So Yi is 1 if and only if ui ∈ G
• The estimator random variable is
  Z = (|U| / N) · Σ_{i=1..N} Yi
We claim that with high probability Z will be an approximation to |G|.
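The naive estimator above can be sketched directly; the `satisfies` callback is an assumed interface standing in for any membership test for G:

```python
import random

def naive_estimate(n, satisfies, N):
    """Naive Monte Carlo estimator: draw N samples uniformly from
    U = {0,1}^n and return Z = |U| * (sum of Y_i) / N, where
    Y_i = 1 iff the i-th sample lies in G (i.e. satisfies F)."""
    hits = 0
    for _ in range(N):
        a = tuple(random.randint(0, 1) for _ in range(n))  # uniform in U
        if satisfies(a):
            hits += 1
    return (2 ** n) * hits / N
```

E[Z] = |U| · Pr[sample ∈ G] = |G|, so Z is unbiased; the catch is how large N must be when |G|/|U| is tiny, which is the point of the next slide.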
An Unsuccessful Attempt (cont.)
• Estimator Theorem: Let ρ = |G|/|U|. Then the Monte Carlo method yields an ε-approximation to |G| with probability at least 1 − δ provided N ≥ (4/(ε²ρ)) ln(2/δ)
• Why unsuccessful?
– We don't know the value of ρ
– We can solve this by using a successively refined lower bound on ρ to determine the number of samples to be chosen
– But the running time is at least N ≥ (4/(ε²ρ)) ln(2/δ), and here ρ = #F/2^n can be exponentially small, so N can be exponential in n
Agenda
• Background
• Randomized Approximation Schemes
• The DNF Counting Problem
– DNF Counting
– Compute #F
– An Unsuccessful Attempt
– The Coverage Algorithm
• Approximating the Permanent
• Summary
DNF Counting
• Let F = C1 ∨ C2 ∨ C3 ∨ C4 = (x1 ∧ x3') ∨ (x1 ∧ x2' ∧ x3) ∨ (x2 ∧ x3) ∨ (x3')
• Let G be the set of all satisfying truth assignments
• Let Hi be the set of all satisfying truth assignments for Ci
• H1 = {a5, a7}, H2 = {a6}, H3 = {a4, a8}, H4 = {a1, a3, a5, a7}
• H = H1 ∪ H2 ∪ H3 ∪ H4 = {a1, a3, a4, a5, a6, a7, a8}
• It is easy to see that |Hi| = 2^(n−ri)

[Venn diagram of H1, H2, H3, H4 omitted]

     x1 x2 x3
a1:  0  0  0
a2:  0  0  1
a3:  0  1  0
a4:  0  1  1
a5:  1  0  0
a6:  1  0  1
a7:  1  1  0
a8:  1  1  1
DNF Counting

|G| = #F

[Diagram omitted: f : V → {0, 1} with f(a) = F(a), mapping each truth assignment a1, a2, …, a2^n to 0 or 1]
The Coverage Algorithm
• Importance Sampling:
– We want to reduce the size of the sample space so that the ratio ρ is relatively large,
– while ensuring that the set G is still completely represented.
– Reformulate the DNF counting problem in a more abstract framework, called the union of sets problem.
The union of sets problem
• Let V be a finite universe. We are given m subsets H1, H2, …, Hm ⊆ V such that the following assumptions are valid:
1. For all i, |Hi| is computable in polynomial time
2. It is possible to sample uniformly at random from any Hi
3. For all v ∈ V, it can be determined in polynomial time whether v ∈ Hi
– The goal is to estimate the size of the union H = H1 ∪ H2 ∪ … ∪ Hm
– The brute-force approach to computing |H| is inefficient when the universe and the sets Hi are of large cardinality
– Assumptions 1–3 turn out to be sufficient to enable the design of an importance sampling algorithm
DNF Counting

[Diagram omitted: the universe V with subsets H1, H2, …, Hm and the indicator F[v] → {0, 1}]
The Coverage Algorithm
• DNF counting is a special case of the union of sets problem:
– F(X1, X2, …, Xn) is a Boolean formula in DNF
– The universe V corresponds to the space of all 2^n truth assignments
– The set Hi contains all the truth assignments that satisfy the clause Ci
– It is easy to sample from Hi by assigning appropriate values to the variables appearing in Ci and choosing the rest at random
– It is easy to see that |Hi| = 2^(n−ri)
– We can verify in linear time that some v ∈ V is a member of Hi
– The union of the sets Hi then gives the set of satisfying assignments for F
The Coverage Algorithm
• Solution to the union of sets problem:
– Define a multiset U = H1 ⊎ H2 ⊎ … ⊎ Hm
– The multiset union contains as many copies of v ∈ V as the number of Hi's that contain v
– U = {(v, i) | v ∈ Hi}
– Observe that |U| = Σi |Hi| ≥ |H|
– For all v ∈ H, cov(v) = {(v, i) | (v, i) ∈ U}
– In the DNF problem, for a truth assignment a, the set cov(a) is the set of clauses satisfied by a
The Coverage Algorithm
• The following observations are immediate:
– The number of coverage sets is exactly |H|
– U = ∪_{v∈H} cov(v)
– |U| = Σ_{v∈H} |cov(v)|
– For all v ∈ H, |cov(v)| ≤ m
– Define the function f((v, i)) = 1 if i = min{ j | v ∈ Hj }, 0 otherwise
– Define the set G = {(v, i) ∈ U | f((v, i)) = 1}
– |G| = |H|
The Coverage Algorithm
• Algorithm:
– for j = 1 to N do
  • pick (v, i) uniformly at random from U
  • set Yj = 1 if f((v, i)) = 1, else 0 // if f((v, i)) = 1, call (v, i) a special pair
– Output Z = ((1/N) Σj Yj) · |U| = Ȳ · |U|
– E[Yj] = Pr[(v, i) is special] = |G| / |U|, so E[Z] = |G| = |H|
– So the algorithm is an unbiased estimator for |H|, as desired
R.M. Karp and M. Luby, “Monte-carlo algorithms for enumeration and reliability problems,” In Proceedings of the 15th Annual ACM Symposium on Theory of Computing, 1983, pp. 56–64.
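Putting the pieces together, the coverage algorithm can be sketched in a few lines. The signed-literal clause encoding is my own; the sampling step fixes the literals of the chosen clause and randomizes the remaining variables:

```python
import random

def karp_luby(clauses, n, N):
    """Coverage-algorithm sketch for DNF counting. Each clause is a list
    of signed literals (k > 0: X_k, k < 0: X_k'); assumes each variable
    appears at most once per clause. Samples N pairs (v, i) uniformly
    from the multiset U = H_1 ⊎ ... ⊎ H_m and returns Ȳ·|U| as an
    estimate of #F = |H|."""
    m = len(clauses)
    sizes = [2 ** (n - len(c)) for c in clauses]  # |H_i| = 2^(n - r_i)
    U = sum(sizes)                                # |U| = sum of the |H_i|
    hits = 0
    for _ in range(N):
        # pick clause i with Pr[i] = |H_i| / |U|
        i = random.choices(range(m), weights=sizes)[0]
        # fix the literals of C_i, randomize the rest -> uniform v in H_i
        a = [random.randint(0, 1) for _ in range(n)]
        for lit in clauses[i]:
            a[abs(lit) - 1] = 1 if lit > 0 else 0
        # f((v, i)) = 1 iff i is the *first* clause that v satisfies
        first = next(j for j in range(m)
                     if all(a[abs(l) - 1] == (1 if l > 0 else 0)
                            for l in clauses[j]))
        if first == i:
            hits += 1
    return U * hits / N
```

For the running example F with three clauses, |U| = 2 + 1 + 2 = 5 happens to equal |H|, so every sampled pair is special and the estimate is exact.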
Analysis
• The value of N:
– E[Yj] = |G| / |U| ≥ 1/m, i.e. ρ ≥ 1/m
– By the Estimator Theorem, N = (4m/ε²) ln(2/δ) trials suffice
• Complexity:
– Computing |U| requires O(nm)
– Checking whether (v, i) is special requires O(nm)
– Generating a random pair requires O(n + m)
– Total running time per trial = O(nm)
– Total running time = O(nm · N) = O((nm²/ε²) ln(2/δ))

Karp, R., Luby, M., Madras, N., "Monte-Carlo Approximation Algorithms for Enumeration Problems", J. of Algorithms, Vol. 10, No. 3, Sept. 1989, pp. 429-448, improved this running time further.
(ε, δ)-FPRAS for DNF Counting
• Our goal is to estimate the size of G ⊆ U such that G = f⁻¹(1)
– Apply the Estimator Theorem based on the naïve Monte Carlo sampling technique

We claim that the naïve Monte Carlo sampling algorithm gives an (ε, δ)-FPRAS for estimating the size of G.
Lemma: In the union of sets problem, ρ = |G|/|U| ≥ 1/m.
Proof: This relies on the observations made above:
• |U| = Σi |Hi| ≤ m|H| = m|G|
The Coverage Algorithm
• The Monte Carlo sampling technique gives an (ε, δ)-FPRAS for |G|, hence also for |H|

Estimator Theorem: The Monte Carlo method yields an ε-approximation to |G| with probability at least 1 − δ provided N ≥ (4/(ε²ρ)) ln(2/δ). The running time is polynomial in N.
Proof: We need to show that (1) f can be calculated in polynomial time, and (2) it is possible to sample uniformly from U.
The Coverage Algorithm
Proof (1): Compute f((v, i)) in O(mn) time by checking whether the truth assignment v satisfies Ci but none of the clauses Cj for j < i.

Proof (2): It is possible to sample an element (v, i) uniformly from U:
I. Choose i such that 1 ≤ i ≤ m with Pr[i] = |Hi| / Σj |Hj| = |Hi| / |U|
II. Set the variables in Ci so that Ci is satisfied
III. Choose a truth assignment for the remaining variables uniformly at random
IV. Output i and the resulting truth assignment v
Agenda
• Background
• Randomized Approximation Schemes
• The DNF Counting Problem
• Approximating the Permanent
– Number of perfect matchings in bipartite graph
– Near-uniform generation
– The canonical path argument
• Summary
Matrix Permanent
• The permanent of a square matrix in linear algebra is a function of the matrix similar to the determinant. The permanent, like the determinant, is a polynomial in the entries of the matrix.
• Permanent formula: let A = (aij) be an n × n matrix. The permanent of the matrix is defined as:
  per(A) = Σσ Π_{i=1..n} a_{i,σ(i)}
  where σ ranges over all permutations of {1, 2, …, n}
• Best known exact running time: O(n · 2^n), via Ryser's formula
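The formula can be checked on small matrices. The sketch below uses Ryser's inclusion-exclusion formula; the plain version here is O(n² · 2^n), and with Gray-code optimizations it becomes the classical O(n · 2^n) exact method:

```python
from itertools import combinations

def permanent(A):
    """Permanent via Ryser's inclusion-exclusion formula:
    per(A) = sum over nonempty column sets S of
             (-1)^(n-|S|) * prod over rows of (row sum restricted to S)."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total
```

For the 0-1 matrix [[1, 1], [1, 0]] used in the bipartite-graph example later, this gives per = 1 · 0 + 1 · 1 = 1.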
A Z Broder. 1986. How hard is it to marry at random? (On the approximation of the permanent). In Proceedings of the eighteenth annual ACM symposium on Theory of computing (STOC '86). ACM, New York, NY, USA, 50-58. DOI=10.1145/12130.12136
Bipartite Graph
• Definition: a bipartite graph is a graph whose vertex set can be decomposed into two disjoint sets such that no two vertices within the same set are adjacent.
• Notation: G(U, V, E), where:
– U and V are the disjoint sets of vertices, with n vertices in each set
– E is the set of graph edges
• A bipartite graph G can be represented as a 0-1 matrix A(G), with entry (i, j) equal to 1 iff (ui, vj) ∈ E. For the example graph with edges (u1, v1), (u1, v2), (u2, v1):

  A(G) = ( 1 1
           1 0 )
Graph Perfect Matchings
• A matching M is a collection of edges such that each vertex occurs at most once in M
• A perfect matching is a matching of size n

[Figure omitted: a matching and a perfect matching in the example graph]
Rajeev Motwani and Prabhakar Raghavan. 1995. Randomized Algorithms. Cambridge University Press, New York, NY, USA. Chapter 11.
Graph Perfect Matchings
• Let #(G) denote the number of perfect matchings in the bipartite graph G
• Computing the number of perfect matchings in a given bipartite graph is #P-complete; since #(G) = per(A(G)), computing the permanent of a 0-1 matrix is also #P-complete
• We propose an FPRAS for the problem of counting the number of perfect matchings
• For the example graph: per(A(G)) = (1 × 0) + (1 × 1) = 1 = #(G)
Introduction to Monte Carlo Method
• A method which solves a problem by generating suitable random numbers and observing the fraction of the numbers obeying some property
• Monte Carlo methods tend to follow a particular pattern:
– Define a domain of possible inputs
– Generate inputs randomly from a probability distribution over the domain
– Perform a deterministic computation on the inputs
– Aggregate the results
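The four steps can be illustrated on a toy experiment in the casino spirit of the earlier slide; the dice example is mine, chosen only to show the pattern:

```python
import random

def mc_estimate_seven(trials):
    """Estimate Pr[two fair dice sum to 7] by Monte Carlo.
    Domain: pairs of die rolls; distribution: uniform;
    deterministic computation: check sum == 7; aggregate: the fraction."""
    hits = 0
    for _ in range(trials):
        d1, d2 = random.randint(1, 6), random.randint(1, 6)  # generate inputs
        if d1 + d2 == 7:                                     # deterministic check
            hits += 1
    return hits / trials                                     # aggregate
```

The true value is 6/36 ≈ 0.167, and the estimate converges to it as the number of trials grows; the estimator theorem on the next slide quantifies exactly this convergence.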
Estimator Theorem
• Estimator Theorem: Let ρ = |G|/|U|. Then the Monte Carlo method yields an ε-approximation to |G| with probability at least 1 − δ provided N ≥ (4/(ε²ρ)) ln(2/δ).
Uniform Generation
• Let Mk(G) denote the set of distinct matchings of size k in G
• A uniform generator for Mk(G) is a randomized polynomial time algorithm that:
– Takes G as input
– Returns a matching m ∈ Mk(G) such that m is uniformly distributed over Mk(G)
Near-uniform generation for Mn(G)
• Definition: Given a sample space Ω, a generator is said to be a near-uniform generator for Ω with error ε if, for all ω ∈ Ω, the probability of generating ω differs from 1/|Ω| by at most ε/|Ω|
• A near-uniform generator for Mn(G) is one in which the sample space is Mn(G)
Near-uniform generation for Mn(G)
• Rationale:
– Use near-uniform generation together with the Monte Carlo method to estimate the ratios |Mk(G)| / |Mk−1(G)| (estimator theorem)
– |Mn(G)| can then be recovered from the product of these ratios
– For a bipartite graph G, try to find a near-uniform generator with error ε and running time polynomially bounded in n
Design of MC
• Devise a Markov chain MC whose states are the elements of Mn(G) ∪ Mn−1(G)
• Simulate MC by executing a random walk on the underlying graph of MC. Each vertex in the graph is one element of Mn(G) ∪ Mn−1(G)
• After t steps (with t not too large), MC will approach its stationary distribution. The stationary probability of each state is equal to 1/|Mn(G) ∪ Mn−1(G)|
Structure of MC
• Underlying graph:
– Nodes: elements of Mn(G) ∪ Mn−1(G)
– Edges: let E denote the set of edges in G. In any state m of MC, with probability ½ remain at the state and do nothing. Otherwise, choose an edge e = (u, v) uniformly at random:
  • Reduce: if m ∈ Mn and e ∈ m, move to state m' = m − e
  • Augment: if m ∈ Mn−1, with u and v unmatched in m, move to m' = m + e
  • Rotate: if m ∈ Mn−1, with u matched to w and v unmatched, move to m' = (m + e) − f where f = (u, w)
  • Idle: otherwise, stay at the current state
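One step of this walk can be sketched as follows; representing matchings as sets of frozenset edges is my own choice, not the slides':

```python
import random

def mc_step(m, edges, n):
    """One transition of the chain on M_n ∪ M_(n-1): flip a fair coin to
    idle, otherwise pick a uniform edge e = (u, v) of G and try
    Reduce / Augment / Rotate as described above."""
    if random.random() < 0.5:
        return m                                    # lazy self-loop
    u, v = random.choice(edges)
    e = frozenset((u, v))
    matched = {x for f in m for x in f}
    if len(m) == n and e in m:                      # Reduce: m' = m - e
        return m - {e}
    if len(m) == n - 1:
        if u not in matched and v not in matched:   # Augment: m' = m + e
            return m | {e}
        partner = {x: (set(f) - {x}).pop() for f in m for x in f}
        if u in matched and v not in matched:       # Rotate around u
            return (m - {frozenset((u, partner[u]))}) | {e}
        if v in matched and u not in matched:       # Rotate around v
            return (m - {frozenset((v, partner[v]))}) | {e}
    return m                                        # Idle
```

Each non-idle transition is taken with probability 1/(2|E|), and every move has a reverse move of the same probability, which is what later makes the stationary distribution uniform.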
Properties of MC
• Each transition has an associated probability of 1/(2|E|)
• If a transition exists from m to m' then the reverse transition also exists and has the same probability
Approximating the Permanent. Mark Jerrum and Alistair Sinclair, SIAM Journal on Computing 18 (1989), 1149-1178.
Transition Probability Matrix
• π: the stationary distribution of MC
• Xt: the state of MC after t steps of simulation, with X0 the beginning state
• Pij: the probability of the transition i → j, i.e. the conditional probability that Xt+1 = j given that Xt = i
• P = (Pij): the transition probability matrix
Prove that MC is a near-uniform generator
• Proof sketch:
– Prove that the stationary distribution of MC is the uniform distribution
– Show that the approximate stationary distribution does not depend on the starting state when the number of simulation steps is polynomially bounded in n and the error is less than ε
Uniform stationary distribution of MC
• The underlying graph of MC is:
– Irreducible: we can go from any matching in the graph to any other matching
– Aperiodic: the self-loop probabilities are positive
⇒ MC is ergodic, and any finite, ergodic Markov chain converges to a unique stationary distribution
• The transition probability matrix P is:
– Symmetric, hence doubly stochastic: each of its rows and columns sums to 1
⇒ The stationary distribution of MC must be the uniform distribution on its states
Approximated stationary distribution
• Relative pointwise distance: Δ(t) = max over i, j of |P^t(i, j) − πj| / πj
• We need to prove that Δ(t) drops below ε for t polynomial in n
Proof that Δ(t) becomes small
• Let λ1 ≥ λ2 ≥ … ≥ λN be the eigenvalues of P, where N is the number of states in MC
• By Theorem 6.21, Δ(t) can be bounded in terms of the second eigenvalue λ2
• By Proposition B.3, λ2 can in turn be bounded
• Choosing t large enough then drives Δ(t) below ε
Proof that t is polynomially bounded in n
• wij = πi Pij: the stationary probability of the transition i → j
• Let S be a nonempty subset of the set of states of MC, and let S̄ be its complement:
– Capacity of S: CS = Σ_{i∈S} πi
– Ergodic flow out of S: FS = Σ_{i∈S, j∈S̄} wij
– Conductance of S: ΦS = FS / CS
Proof that t is polynomially bounded in n
• From Theorems 6.16, 6.17 and 6.19, t can be bounded in terms of the conductance Φ of MC
• By the canonical path argument, 1/Φ is polynomially upper-bounded in n, so t is also polynomially upper-bounded in n
Agenda
• Background
• Randomized Approximation Schemes
• The DNF Counting Problem
• Approximating the Permanent
– Approximating the number of perfect matchings in bipartite graph
– The canonical path argument
• Summary
Conductance
• Let S be a nonempty subset of the set of states of MC, and let S̄ be its complement:
– Capacity of S: CS = Σ_{i∈S} πi
– Ergodic flow out of S: FS = Σ_{i∈S, j∈S̄} πi Pij
– Conductance of S: ΦS = FS / CS
– Conductance of a Markov chain with state space Q: Φ = min { ΦS : S ⊆ Q, CS ≤ ½ }
Canonical path
For every pair of states s, t, define a path s = m0, m1, …, mk = t where the probability of going from mi to mi+1 is nonzero.
Canonical path
Symmetric difference X ⊕ Y:
• This consists of a disjoint collection of paths in H (some of which may be closed cycles), each of which has edges that belong alternately to X and to Y.
• Fix some arbitrary ordering on all simple paths in H, and designate in each of them a so-called "start vertex", which is arbitrary if the path is a closed cycle but must be an endpoint otherwise.
• This ordering induces a unique ordering P1, P2, …, Pm on the paths appearing in X ⊕ Y.
• The canonical path from X to Y involves "unwinding" each of the Pi in turn.
Canonical path
• There are two cases to consider:
– Pi is not a cycle:
  • If Pi starts with an X-edge, remove it (↓-transition, Reduce)
  • Perform a sequence of ↔-transitions (each removes an X-edge and inserts a Y-edge, Rotate)
  • If Pi's length is odd (we remain with one Y-edge), insert it (↑-transition, Augment)
– Pi is a cycle:
  • Let Pi = (v0, v1, …, v2l+1), where v0 is the start vertex, (v2j, v2j+1) ∈ X and (v2j+1, v2j+2) ∈ Y
  • Remove (v0, v1), an X-edge (↓-transition, Reduce)
  • We are left with a path with endpoints v0, v1, one of which must be the start vertex
  • Continue as above, but:
    – If v0 is the start vertex, use v1 as the start vertex
    – If v1 is the start vertex, use v0 as the start vertex
    – (This trick serves to distinguish paths from cycles)
Unwinding a path

[Figure omitted: a path in X ⊕ Y unwound by a ↓-transition followed by a sequence of ↔-transitions]
Unwinding a cycle

[Figure omitted: a cycle in X ⊕ Y unwound by a ↓-transition, a sequence of ↔-transitions, and a final ↑-transition]
Example

[Figure omitted: matchings X and Y and their symmetric difference X ⊕ Y]
Canonical path Argument
Theorem: For the Markov chain MC, the conductance Φ is at least inverse-polynomial in n.
Proof: Let H be the graph underlying MC. The transition probabilities along all the oriented edges of H are exactly 1/(2|E|), where E is the set of edges in G. We can show that, for any subset S of the vertices of H with capacity CS ≤ ½, the number of edges between S and S̄ is large.
Canonical path Argument
Construct a canonical path between every pair of vertices of H, such that no oriented edge of H occurs in more than bN of these paths. For a subset S of the vertices of H with CS ≤ ½ (so |S| ≤ N/2 and |S̄| ≥ N/2), the number of canonical paths crossing the cut from S to S̄ is |S| · |S̄| ≥ |S| · N/2.
Canonical path Argument
Since at most bN canonical paths pass through each of the edges between S and S̄, the number of such edges must be at least |S|/2b, so that the conductance of MC is at least 1/(4b|E|).

Now we have to prove that b = 3 to get the desired lower bound for the conductance.
Canonical path Argument
Let m be any matching in Mn. Define k(m) as the set of all nodes that choose m as their partner.
Lemma: For any m ∈ Mn, k(m) is small.
Proof sketch: The only perfect matching that chooses m as its partner is m itself. A single Reduce operation reaches any of the n near-perfect matchings adjacent to m, and the number of near-perfect matchings at distance exactly 2 from m is at most n(n−1). Thus, there are at most n + n(n−1) = n² different near-perfect matchings within distance two of m.
Canonical path Argument
• Associate a unique partner node with every node u, and choose a canonical path between u and its partner. If u is in Mn, then we set partner(u) = u.
• Canonical paths between nodes s, t consist of three consecutive segments: s to partner(s), partner(s) to partner(t), and partner(t) to t.
• Two types of segments appear in these paths:
– Type A: paths between a node and its partner
– Type B: paths between pairs of nodes in Mn
Canonical path argument
Lemma: Any transition T in H lies on at most 3N distinct canonical paths.
Proof sketch: Fix a transition T = (u, v) in H. The total number of Type A segments of canonical paths through T is bounded by (k(v) + k(u))N < 2N. The number of Type B segments through T is at most N. So the total number of canonical paths through T is bounded by 3N.
Summary
• Background
• Randomized Approximation Schemes
• The DNF Counting Problem
• Approximating the Permanent
– Approximating the number of perfect matchings in bipartite graph
– The canonical path argument