Graph Sparsifiers
Nick Harvey University of British Columbia
Based on joint work with Isaac Fung, and independent work of Ramesh Hariharan & Debmalya Panigrahi
Approximating Dense Objects by Sparse Objects
• Floor joists
Wood Joists Engineered Joists
Approximating Dense Objects by Sparse Objects
• Bridges
Masonry Arch Truss Arch
Approximating Dense Objects by Sparse Objects
• Bones
Human Femur Robin Bone
Approximating Dense Objects by Sparse Objects
• Graphs
Dense Graph Sparse Graph
Cut Sparsifiers (Karger ’94)
Weighted subgraphs that approximately preserve graph structure.
• Input: Undirected graph G=(V,E), weights u : E → ℝ⁺
• Output: A subgraph H=(V,F) of G with weights w : F → ℝ⁺ such that |F| is small and
  u(δ_G(U)) = (1 ± ε) · w(δ_H(U))   ∀ U ⊆ V
  (the weight of edges between U and V\U in G matches the weight of edges between U and V\U in H)
Spectral Sparsifiers (Spielman-Teng ’04)
Weighted subgraphs that approximately preserve graph structure.
• Input: Undirected graph G=(V,E), weights u : E → ℝ⁺
• Output: A subgraph H=(V,F) of G with weights w : F → ℝ⁺ such that |F| is small and
  xᵀ L_G x = (1 ± ε) · xᵀ L_H x   ∀ x ∈ ℝ^V
  (L_G and L_H are the Laplacian matrices of G and H)
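The spectral definition specializes to the cut definition: for a 0/1 indicator vector x of a set U, the quadratic form xᵀ L x equals the cut weight u(δ(U)). A minimal sketch of this standard fact (function and variable names are my own):

```python
# Sketch (not from the talk): the Laplacian quadratic form
# x^T L x = sum over edges {a,b} of u_ab * (x_a - x_b)^2.
# For a 0/1 indicator vector x of a set U this equals u(delta(U)),
# which is why every spectral sparsifier is also a cut sparsifier.

def quadratic_form(edges, x):
    """edges: list of (a, b, weight); x: dict mapping vertex -> value."""
    return sum(w * (x[a] - x[b]) ** 2 for a, b, w in edges)

def cut_weight(edges, U):
    """Total weight of edges with exactly one endpoint in U."""
    return sum(w for a, b, w in edges if (a in U) != (b in U))

# Small example: a weighted 4-cycle, U = {0, 1}.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0), (3, 0, 1.0)]
U = {0, 1}
x = {v: (1 if v in U else 0) for v in range(4)}
assert quadratic_form(edges, x) == cut_weight(edges, U)  # both equal 2.0
```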
Motivation: Faster Algorithms
• Running algorithm A for some problem P (e.g., min s-t cut, sparsest cut, max cut, …) on a dense input graph G yields an exact/approximate output.
• Instead: first run a (fast) sparsification algorithm to produce a sparse graph H that approximately preserves the solution of P. Algorithm A runs faster on the sparse input and yields an approximate output.
State of the Art

Sparsifier   Authors                   # edges             Running time
Cut          Benczur-Karger ’96        O(n log n / ε²)     O(m log³ n)
Spectral     Spielman-Teng ’04         O(n log^c n / ε²)   O(m log^c n / ε²)
Spectral     Spielman-Srivastava ’08   O(n log n / ε²)     O(m log^c n / ε²)
Spectral     Batson et al. ’09         O(n / ε²)           O(mn³)
Cut          This paper                O(n log n / ε²)     O(m + n log⁵ n / ε²) *
Spectral     Levin-Koutis-Peng ’12     O(n log n / ε²)     O(m log² n / ε²)

(n = # vertices, m = # edges, c = a large constant)
*: The best algorithm in our paper is due to Panigrahi.
Random Sampling
• We can’t sample all edges with the same probability!
• Idea [Benczur-Karger ’96]: sample low-connectivity edges with high probability (keep these), and high-connectivity edges with low probability (eliminate most of these).
Generic Algorithm
• Input: Graph G=(V,E), weights u : E → ℝ⁺
• Output: A subgraph H=(V,F) with weights w : F → ℝ⁺

  Choose ρ (= # sampling iterations)
  Choose probabilities { p_e : e ∈ E }
  For i = 1 to ρ:
    For each edge e ∈ E:
      With probability p_e:
        Add e to F
        Increase w_e by u_e / (ρ p_e)

• E[|F|] ≤ ρ · Σ_e p_e
• E[w_e] = u_e   ∀ e ∈ E
  ⇒ For every U ⊆ V, E[w(δ_H(U))] = u(δ_G(U))
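The generic scheme above can be sketched in a few lines (function and variable names are my own); the key property is that each w_e is an unbiased estimator of u_e:

```python
import random

def sample_sparsifier(edges, u, p, rho, rng=None):
    """Generic sampling scheme: rho rounds; in each round, keep edge e
    with probability p[e] and add u[e] / (rho * p[e]) to its new weight
    w[e].  Then E[w[e]] = u[e] for every edge e."""
    rng = rng or random.Random()
    w = {}
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] = w.get(e, 0.0) + u[e] / (rho * p[e])
    return w  # the sampled edge set F is the set of keys of w

# Sanity check: with p[e] = 1 the scheme is deterministic and w = u exactly.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
u = {e: 5.0 for e in edges}
p = {e: 1.0 for e in edges}
w = sample_sparsifier(edges, u, p, rho=4)
assert all(abs(w[e] - u[e]) < 1e-9 for e in edges)
```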
How should we choose these parameters? [Benczur-Karger ’96]
• Goal 1: E[|F|] = O(n log n / ε²)
• Goal 2: w(δ_H(U)) is highly concentrated
Benczur-Karger Algorithm
• Input: Graph G=(V,E), weights u : E → ℝ⁺
• Output: A subgraph H=(V,F) with weights w : F → ℝ⁺

  Choose ρ = O(log n / ε²)
  Let p_e = 1 / (“strength” of edge e)
  For i = 1 to ρ:
    For each edge e ∈ E:
      With probability p_e:
        Add e to F
        Increase w_e by u_e / (ρ p_e)

• Fact 1: E[|F|] = O(n log n / ε²)
• Fact 2: w(δ_H(U)) is very highly concentrated
  ⇒ For every U ⊆ V, w(δ_H(U)) = (1 ± ε) u(δ_G(U))
• “Strength” is a slightly unusual quantity, but…
  Fact 3: All edge strengths can be estimated in O(m log³ n) time.

Question [BK ’02]: Can we use connectivity instead of strength?
Our Algorithm
• Input: Graph G=(V,E), weights u : E → ℝ⁺
• Output: A subgraph H=(V,F) with weights w : F → ℝ⁺

  Choose ρ = O(log² n / ε²)
  Let p_e = 1 / (“connectivity” of e)
  For i = 1 to ρ:
    For each edge e ∈ E:
      With probability p_e:
        Add e to F
        Increase w_e by u_e / (ρ p_e)

• Fact 1: E[|F|] = O(n log² n / ε²)
• Fact 2: w(δ_H(U)) is very highly concentrated
  ⇒ For every U ⊆ V, w(δ_H(U)) = (1 ± ε) u(δ_G(U))
• Extra trick: |F| can be shrunk to O(n log n / ε²) by using Benczur-Karger to sparsify our sparsifier!
Motivation for Our Algorithm
• Connectivities are simpler and more natural ⇒ faster to compute.
  Fact: All edge connectivities can be estimated in O(m + n log n) time [Ibaraki-Nagamochi ’92].
• ⇒ Useful in other scenarios: our sampling method has been used to compute sparsifiers in the streaming model [Ahn-Guha-McGregor ’12].
Overview of Analysis
• Most cuts hit a huge number of edges ⇒ extremely concentrated ⇒ whp, most cuts are close to their mean.
• Most cuts are big & easy!
Overview of Analysis
• High connectivity ⇒ low sampling probability; low connectivity ⇒ high sampling probability.
• A cut that hits many red edges is reasonably concentrated; a cut that hits only one red edge is poorly concentrated.
• But the same cut also hits many green edges ⇒ highly concentrated; this masks the poor concentration above.
• There are few small cuts [Karger ’94], so probably all of them are concentrated.
• Key Question: Are there few such cuts? Key Lemma: Yes!
• Notation: k_uv = min size of a cut separating u and v
• Main ideas:
  – Partition edges into connectivity classes: E = E₁ ∪ E₂ ∪ … ∪ E_{log n}, where E_i = { e : 2^(i-1) ≤ k_e < 2^i }
  – Prove that the weight of sampled edges that each cut takes from each connectivity class is about right
  – This yields a sparsifier
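The partition into connectivity classes can be illustrated directly on a small unweighted graph. This sketch (all names are mine) computes each k_e exactly via Edmonds-Karp max-flow rather than the fast estimation the talk relies on, and buckets edge e into the class i with 2^(i-1) ≤ k_e < 2^i:

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp on unit capacities; cap is a dict-of-dicts of
    residual capacities, modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:      # BFS for an augmenting path
            v = q.popleft()
            for w in cap[v]:
                if cap[v][w] > 0 and w not in parent:
                    parent[w] = v
                    q.append(w)
        if t not in parent:
            return flow
        v = t                              # each unit-capacity path carries 1
        while parent[v] is not None:
            cap[parent[v]][v] -= 1
            cap[v][parent[v]] += 1
            v = parent[v]
        flow += 1

def connectivity_classes(edges):
    """Bucket each edge e = (a, b) into E_i = {e : 2^(i-1) <= k_e < 2^i},
    where k_e is the edge connectivity between e's endpoints."""
    classes = defaultdict(list)
    for a, b in edges:
        cap = defaultdict(dict)
        for x, y in edges:                 # unit capacity in both directions
            cap[x][y] = 1
            cap[y][x] = 1
        k = max_flow(cap, a, b)            # exact k_e by Menger's theorem
        classes[k.bit_length()].append((a, b))
    return classes

# Triangle {0,1,2} plus a pendant vertex 3: triangle edges have k_e = 2,
# the pendant edge has k_e = 1.
classes = connectivity_classes([(0, 1), (1, 2), (2, 0), (0, 3)])
assert classes[1] == [(0, 3)] and len(classes[2]) == 3
```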
Prove that the weight of sampled edges that each cut takes from each connectivity class is about right.
• Notation: C = δ(U) is a cut; C_i := δ(U) ∩ E_i is a cut-induced set.
• Chernoff bounds can analyze each cut-induced set, but…
• Key Question: Are there few small cut-induced sets?
Counting Small Cuts
• Lemma [Karger ’93]: Let G=(V,E) be a graph and let K be the edge connectivity of G (i.e., the global min cut value). Then, for every α ≥ 1,
  |{ δ(U) : |δ(U)| ≤ αK }| < n^(2α).
• Example: Let G be the n-cycle. Its edge connectivity is K = 2, and the number of cuts of size c is Θ(n^c), so |{ δ(U) : |δ(U)| ≤ αK }| = O(n^(2α)).
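The n-cycle example is small enough to verify by brute force; a sketch (names are mine) that enumerates all distinct cuts δ(U) of an 8-cycle and checks Karger’s bound for α = 1:

```python
from itertools import combinations

def cuts_of_cycle(n):
    """All distinct nonempty cuts delta(U) of the n-cycle, each as a
    frozenset of edges (i, (i+1) % n)."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    cuts = set()
    for r in range(1, n):                  # U proper and nonempty
        for U in combinations(range(n), r):
            U = set(U)
            cuts.add(frozenset(e for e in edges
                               if (e[0] in U) != (e[1] in U)))
    return cuts

n, K, alpha = 8, 2, 1
small = [C for C in cuts_of_cycle(n) if len(C) <= alpha * K]
# Every pair of cycle edges is a min cut: C(8,2) = 28 of them, and 28 < n^2 = 64.
assert len(small) == 28 and len(small) < n ** (2 * alpha)
```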
Counting Small Cut-Induced Sets
• Our Lemma: Let G=(V,E) be a graph and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (where k_uv = min size of a cut separating u and v). Then, for every α ≥ 1,
  |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α).
• Karger’s Lemma is the special case B = E and K = min cut.
When is Karger’s Lemma Weak?
• Lemma [Karger ’93]: Let G=(V,E) be a graph and let K be the edge connectivity of G (i.e., the global min cut value). Then, for every c ≥ K,
  |{ δ(U) : |δ(U)| ≤ c }| < n^(2c/K).
• Example: Let G be the n-cycle, whose edge connectivity is K = 2; then |{ cuts of size c }| < n^c. But add a single edge of weight ε (e.g., attaching a new vertex): the edge connectivity becomes K = ε and the bound degrades to n^(2c/ε).
Our Lemma Still Works
• Our Lemma: Let G=(V,E) be a graph and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (where k_uv = min size of a cut separating u and v). Then, for every α ≥ 1,
  |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α).
• Example: Let G be the n-cycle with the extra weight-ε edge, and let B = the cycle edges. We can take K = 2, so |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α); i.e., |{ cut-induced subsets of B induced by cuts of size ≤ c }| ≤ n^c.
Algorithm for Finding a Min Cut [Karger ’93]
• Input: A graph
• Output: A minimum cut (maybe)
• While the graph has more than 2 vertices:
  – Pick an edge at random
  – Contract it
• Output the remaining edges
• Claim: For any fixed min cut, this algorithm outputs it with probability ≥ 1/n².
• Corollary: There are ≤ n² min cuts.
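The contraction algorithm fits in a few lines; a minimal sketch (union-find bookkeeping and names are mine). By the claim above, repeating it O(n² log n) times finds a min cut with high probability:

```python
import random

def karger_min_cut(edges, n_vertices, rng):
    """One run of Karger's contraction algorithm on a multigraph given
    as an edge list; returns the size of the cut it finds."""
    parent = list(range(n_vertices))       # union-find over super-vertices
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    alive = n_vertices
    edges = list(edges)
    while alive > 2:
        a, b = edges[rng.randrange(len(edges))]
        ra, rb = find(a), find(b)
        if ra != rb:                       # self-loops are skipped implicitly
            parent[ra] = rb                # contract the chosen edge
            alive -= 1
    return sum(1 for a, b in edges if find(a) != find(b))

# Two triangles joined by a single bridge: the min cut has size 1.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
rng = random.Random(0)
best = min(karger_min_cut(edges, 6, rng) for _ in range(200))
assert best == 1
```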
Finding a Small Cut-Induced Set
• Input: A graph G=(V,E) and B ⊆ E
• Output: A cut-induced subset of B
• While the graph has more than 2 vertices:
  – If some vertex v has no incident edges in B: split off all edges at v and delete v
  – Pick an edge at random
  – Contract it
• Output the remaining edges in B
• Claim: For any min cut-induced subset of B, this algorithm outputs it with probability > 1/n².
• Corollary: There are < n² min cut-induced subsets of B.
Splitting Off
Replace edges {u,v} and {u’,v} with {u,u’}, while preserving the edge connectivity between all vertices other than v. [Wolfgang Mader]
Conclusions
• Sampling according to connectivities gives a sparsifier.
• We generalize Karger’s cut-counting lemma.
Questions
• Improve O(log² n) to O(log n) in the sampling analysis.
• Applications of our cut-counting lemma?