
Reduction between Transitive Closure & Boolean Matrix Multiplication

Presented by Rotem Mairon


Overview

• The speed-up of 4-Russians for matrix multiplication

• A divide & conquer approach for matrix multiplication: Strassen’s method

• Reduction between TC and BMM


The speedup of 4-Russians for matrix multiplication

A basic observation

• Consider two boolean matrices, A and B, of small dimensions (n = 4 in the example below).

• The boolean product of row A_i with column B_j is defined by:

  C_ij = ⋁_{k=1..n} ( A_ik ∧ B_kj )

• The naïve boolean multiplication is done bit by bit. This requires O(n) steps per entry.

• How can this be improved with pre-processing?

[Figure: two example 4×4 boolean matrices, A and B.]


• Each row A_i and each column B_j forms a 4-bit binary number; for example, the row 0110 corresponds to the index 6 and the column 0100 to the index 4.

• These binary numbers can be regarded as indices into a table of size 2^4 × 2^4.

• For each entry in the table, we pre-store the value of multiplying the two index vectors.

• The product of A_i with B_j can then be computed in O(1) time with a single table lookup.

• Problem: a 2^n × 2^n table is not practical for large matrix multiplication.
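To make the lookup idea concrete, here is a small Python sketch (my own illustration, not code from the slides; the names to_index, TABLE and row_times_col are made up) that precomputes the 2^4 × 2^4 table for 4-bit vectors and answers each row-column product with a single table access:

# Minimal sketch of the full-row lookup idea for n = 4 (illustrative only).
N = 4

def to_index(bits):
    # Pack a list of 0/1 values into an integer index, e.g. [0, 1, 1, 0] -> 6.
    idx = 0
    for b in bits:
        idx = (idx << 1) | b
    return idx

# Pre-store, for every pair of 4-bit vectors (u, v), whether OR_k (u_k AND v_k) equals 1.
TABLE = [[1 if (u & v) else 0 for v in range(1 << N)] for u in range(1 << N)]

def row_times_col(row_bits, col_bits):
    # Boolean product of a row of A with a column of B in O(1) via one table lookup.
    return TABLE[to_index(row_bits)][to_index(col_bits)]

print(row_times_col([0, 1, 1, 0], [0, 1, 0, 0]))   # row 0110 (index 6) times column 0100 (index 4) -> 1

Once the rows and columns are packed into machine integers, the stored value could even be replaced by the test (u & v) != 0; the explicit table is kept only to mirror the pre-processing idea described above.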


The speedup

• Instead of regarding a complete row or column as an index into the table, consider only part of it: cut each row of A and each column of B into chunks of k bits.

• Now, we pre-compute multiplication values for all pairs of binary vectors of length k in a table of size 2^k × 2^k.

[Figure: 6×6 boolean matrices A and B split into chunks of k = 3 bits, together with the corresponding 2^k × 2^k lookup table.]



• Let k = (1/2)·log_2 n. Then all pairs of k-bit binary vectors can be represented in a table of size:

  2^k × 2^k = 2^((1/2)·log_2 n) × 2^((1/2)·log_2 n) = √n × √n = n

• Time required for multiplying A_i by B_j: the row and the column are split into n/k chunks, so each product takes O(n/k) = O(n/log n) lookups.

• Total time required: O(n^3/log n) instead of O(n^3).
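The blocked scheme can be sketched in a few lines of Python (again my own illustration, under the assumptions above: chunks of roughly (1/2)·log_2 n bits, a precomputed 2^k × 2^k table, and one lookup per chunk pair):

import math

def four_russians_bmm(A, B):
    # Boolean matrix product via precomputed k-bit chunk lookups (illustrative sketch).
    n = len(A)
    k = max(1, int(math.log2(n) / 2))        # chunk length, roughly (1/2) * log2(n)

    # The 2^k x 2^k table: table[u][v] = 1 iff the chunks u and v share a set bit.
    table = [[1 if (u & v) else 0 for v in range(1 << k)] for u in range(1 << k)]

    def pack(bits):
        # Pack a chunk of 0/1 values into an integer index.
        idx = 0
        for b in bits:
            idx = (idx << 1) | b
        return idx

    def chunks(vec):
        # Cut a length-n 0/1 vector into ceil(n/k) packed chunks.
        return [pack(vec[s:s + k]) for s in range(0, n, k)]

    A_rows = [chunks(row) for row in A]
    B_cols = [chunks([B[i][j] for i in range(n)]) for j in range(n)]

    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # O(n/k) table lookups instead of O(n) bit operations per entry.
            C[i][j] = 1 if any(table[u][v] for u, v in zip(A_rows[i], B_cols[j])) else 0
    return C

Each entry of C now costs O(n/log n) lookups, so the whole product costs O(n^3/log n) as claimed; the table itself has only about n entries and is negligible to build.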




Strassen’s method for matrix multiplication

A divide and conquer approach

Can we do better with a straightforward divide and conquer approach?

• Divide each n×n matrix into four matrices of size (n/2)×(n/2):

  A = | A_11  A_12 |,   B = | B_11  B_12 |,   C = A·B = | C_11  C_12 |
      | A_21  A_22 |        | B_21  B_22 |              | C_21  C_22 |

• Then:

  C_11 = A_11·B_11 + A_12·B_21
  C_12 = A_11·B_12 + A_12·B_22
  C_21 = A_21·B_11 + A_22·B_21
  C_22 = A_21·B_12 + A_22·B_22

• Computing all of the C_ij requires 8 multiplications of (n/2)×(n/2) matrices and 4 additions.

• Therefore, the total running time is T(n) = 8·T(n/2) + Θ(n^2).

• Using the Master Theorem, this solves to Θ(n^(log_2 8)) = Θ(n^3). Still cubic!
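A direct sketch of this recursion in Python (my own illustration; numpy is used only for convenient block slicing, and n is assumed to be a power of two) makes the eight recursive multiplications explicit:

import numpy as np

def block_multiply(A, B):
    # Naive divide & conquer product: 8 recursive multiplications per level (sketch).
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C11 = block_multiply(A11, B11) + block_multiply(A12, B21)
    C12 = block_multiply(A11, B12) + block_multiply(A12, B22)
    C21 = block_multiply(A21, B11) + block_multiply(A22, B21)
    C22 = block_multiply(A21, B12) + block_multiply(A22, B22)
    return np.block([[C11, C12], [C21, C22]])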


Strassen’s algorithm

• Define seven matrices of size (n/2)×(n/2):

  M_1 = (A_11 + A_22)·(B_11 + B_22)
  M_2 = (A_21 + A_22)·B_11
  M_3 = A_11·(B_12 - B_22)
  M_4 = A_22·(B_21 - B_11)
  M_5 = (A_11 + A_12)·B_22
  M_6 = (A_21 - A_11)·(B_11 + B_12)
  M_7 = (A_12 - A_22)·(B_21 + B_22)

• The four (n/2)×(n/2) matrices C_ij can then be expressed in terms of M_1, …, M_7:

  C_11 = M_1 + M_4 - M_5 + M_7
  C_12 = M_3 + M_5
  C_21 = M_2 + M_4
  C_22 = M_1 - M_2 + M_3 + M_6


Running time? Each matrix M_i requires a constant number of (n/2)×(n/2) additions and subtractions but only one multiplication, so

  T(n) = 7·T(n/2) + Θ(n^2),

which solves to O(n^(log_2 7)) = O(n^2.807).

This was the first sub-cubic time algorithm for matrix multiplication (Volker Strassen, 1969).
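The same recursion with the seven products M_1, …, M_7 can be sketched as follows (my own illustration under the same assumptions: n a power of two, numpy for block slicing, and an arbitrary cutoff below which the ordinary product is used):

import numpy as np

def strassen(A, B, cutoff=64):
    # Strassen's recursion: 7 recursive multiplications per level (sketch).
    n = A.shape[0]
    if n <= cutoff:                     # fall back to the ordinary product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

The cutoff matters in practice: recursing all the way down to 1×1 blocks is slower than switching to the ordinary product early, which is exactly the crossover issue summarized below.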


Improvements

• 1978: V. Y. Pan, "Strassen's algorithm is not optimal": O(n^2.796)

• 1979: D. Bini et al., approximate n×n matrix multiplication: O(n^2.7799)

• 1981: A. Schönhage, "Partial and total matrix multiplication": O(n^2.522)

• 1981: Coppersmith and Winograd, "On the asymptotic complexity of matrix multiplication": O(n^2.496), the first to break the 2.5 barrier

• 1986: Volker Strassen: O(n^2.479)

• 1989: Coppersmith and Winograd, "Matrix multiplication via arithmetic progressions": O(n^2.376)

• 2011: Virginia Vassilevska Williams, "Breaking the Coppersmith-Winograd barrier": O(n^2.3727)


Best choices for matrix multiplication

Using the exact formulas for time complexity, crossover points have been found for square matrices:

• For n < 7, the naïve algorithm for matrix multiplication is preferred. As an example, a 6×6 matrix requires 482 steps with the method of 4-Russians, but only 468 steps with the naïve multiplication.

• For 6 < n < 513, the method of 4-Russians is the most efficient.

• For 512 < n, Strassen’s approach costs the least number of steps.



Realization of matrix multiplication in graphs

Let A and B be adjacency matrices of two graphs over the same set of vertices {1, 2, …, n}.

• An (A,B)-path is a path of length two whose first edge belongs to A and whose second edge belongs to B.

• Let C = A·B (boolean product). Then C_ij = 1 if and only if there is an (A,B)-path from vertex i to vertex j. Therefore, C is the adjacency matrix with respect to (A,B)-paths.

[Figure: an example on five vertices v1, …, v5 showing the adjacency matrices A and B and their boolean product C, whose 1-entries mark the (A,B)-paths.]
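In code, the definition of C reads off directly (a small illustrative helper, not code from the slides):

def boolean_multiply(A, B):
    # C[i][j] = 1 iff some k has A[i][k] = 1 and B[k][j] = 1,
    # i.e. iff there is an (A,B)-path from vertex i to vertex j.
    n = len(A)
    return [[1 if any(A[i][k] and B[k][j] for k in range(n)) else 0
             for j in range(n)]
            for i in range(n)]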


Transitive Closure by Matrix Multiplication

Definition and a cubic solution

Given a directed graph G = (V, E), the transitive closure of G is defined as the graph G* = (V, E*) where E* = {(i,j) : there is a path from vertex i to vertex j}.

• A dynamic programming (Floyd-Warshall) algorithm has been devised. Let T^k(i,j) indicate whether there is a path from i to j whose intermediate vertices all lie in {1, …, k}:

  T^0(i,j) = 1 if i = j or (i,j) ∈ E, and 0 otherwise
  T^k(i,j) = T^(k-1)(i,j) ∨ ( T^(k-1)(i,k) ∧ T^(k-1)(k,j) ),   for k = 1, …, n

  (a path from i to j with intermediate vertices in {1, …, k} either avoids k or passes through k).

• In place, this is the familiar triple loop:

  for k = 1..n
    for i = 1..n
      for j = 1..n
        T(i,j) = T(i,j) ∨ ( T(i,k) ∧ T(k,j) )

• This requires O(n^3) time. Could it be beaten?


Beating the cubic solution

By squaring the adjacency matrix, we get a matrix whose entry (i,j) is 1 iff we can get from i to j in exactly two steps. For the path graph v1 → v2 → v3 → v4:

  A:          A·A:
  0 1 0 0     0 0 1 0
  0 0 1 0     0 0 0 1
  0 0 0 1     0 0 0 0
  0 0 0 0     0 0 0 0

How could we make entry (i,j) equal 1 iff there is a path from i to j in at most 2 steps? By storing 1’s in all diagonal entries before squaring:

  I ∨ A:
  1 1 0 0
  0 1 1 0
  0 0 1 1
  0 0 0 1

What about (i,j) = 1 iff there is a path from i to j in at most 4 steps? Keep multiplying.


Squaring I ∨ A gives all pairs connected by a path of at most 2 edges:

  I ∨ A:      (I ∨ A)^2:
  1 1 0 0     1 1 1 0
  0 1 1 0     0 1 1 1
  0 0 1 1     0 0 1 1
  0 0 0 1     0 0 0 1

In total, the longest path here has 4 vertices, so 2 multiplications suffice: one more squaring gives (I ∨ A)^4, which is already the full transitive closure.

How many multiplications are required in the general case? log_2(n).

• The transitive closure can therefore be obtained in O(n^2.37 · log_2(n)) time:

  • multiply the matrix log_2(n) times;

  • each multiplication requires O(n^2.37) steps using fast matrix multiplication.

• Better still: we can get rid of the log_2(n) factor.
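A sketch of this repeated-squaring idea in Python (my own illustration; the inner bmm here is the naive cubic boolean product and merely stands in for a fast sub-cubic multiplication):

def transitive_closure_by_squaring(adj):
    # Reachability matrix via about log2(n) boolean squarings (sketch).
    n = len(adj)
    # "At most one step": put 1s on the diagonal before squaring.
    T = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]

    def bmm(X, Y):
        return [[1 if any(X[i][k] and Y[k][j] for k in range(n)) else 0
                 for j in range(n)] for i in range(n)]

    covered = 1                      # T currently covers all paths of length <= covered
    while covered < n - 1:
        T = bmm(T, T)                # squaring doubles the covered path length
        covered *= 2
    return T

For the 4-vertex example above this performs exactly two squarings; in general it performs about ceil(log2(n-1)) of them.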


Better still: getting rid of the log(n) factor

The log(n) factor can be dropped by applying the following steps:

• Determine the strongly connected components of the graph: O(n^2).

• Collapse each component to a single vertex.

• The problem is now reduced to computing the transitive closure of the new graph, which is acyclic.

[Figure: an example graph on vertices a, …, h whose strongly connected components are collapsed into single vertices.]


• Generate a topological sort for the new graph.

• Divide the vertices, in topological order, into two halves: A (first half) and B (second half), and let C hold the edges crossing from A to B.

• The adjacency matrix of the sorted graph is then upper (block) triangular:

  G = | A  C |
      | 0  B |


To find the transitive closure of G, notice that:

  G = | A  C |        G* = | A*  A*·C·B* |
      | 0  B |             | 0   B*      |

• Connections within A are independent of B.

• Similarly, connections within B are independent of A.

• Connections from A to B are found by A*·C·B*: a path from i (in A) to j (in B) consists of a path from i to u inside A (so A*(i,u) = 1), an edge (u,v) of C crossing from A to B (so A*·C(i,v) = 1), and a path from v to j inside B (so A*·C·B*(i,j) = 1).



• Hence, G* can be found by determining A* and B* and computing the product A*·C·B*.

• This requires finding the transitive closure of two (n/2)×(n/2) matrices,

• and performing two matrix multiplications: O(n^2.37).

Running time? T(n) = 2·T(n/2) + O(n^2.37), which solves to O(n^2.37).
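The recursive step can be sketched as follows (my own illustration; it assumes the SCC condensation and topological sorting described above have already been performed, so the input matrix is upper triangular, and it again uses a naive boolean product where a fast one would be plugged in):

def dag_closure(M):
    # Reflexive-transitive closure of a topologically sorted acyclic graph (sketch).
    # Recurses on the diagonal halves A and B and combines them with two products,
    # following G* = [[A*, A*·C·B*], [0, B*]].
    n = len(M)
    if n <= 1:
        return [[1] * n for _ in range(n)]      # a single vertex reaches itself
    h = n // 2

    def bmm(X, Y):
        q, r = len(Y), len(Y[0])
        return [[1 if any(X[i][k] and Y[k][j] for k in range(q)) else 0
                 for j in range(r)] for i in range(len(X))]

    A = [row[:h] for row in M[:h]]              # first half of the vertices
    B = [row[h:] for row in M[h:]]              # second half of the vertices
    C = [row[h:] for row in M[:h]]              # edges crossing from the A-part to the B-part

    A_star, B_star = dag_closure(A), dag_closure(B)
    top_right = bmm(bmm(A_star, C), B_star)     # A*·C·B*: path in A, one C-edge, path in B

    closure = [[0] * n for _ in range(n)]
    for i in range(h):
        closure[i][:h] = A_star[i]
        closure[i][h:] = top_right[i]
    for i in range(h, n):
        closure[i][h:] = B_star[i - h]
    return closure

Since every row of the result is assembled from A*, A*·C·B* and B*, the lower-left block stays all zero, matching the block form of G* above.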


Matrix Multiplication by Transitive Closure

Let A and B be two boolean matrices. To compute C = A·B, form the following 3n×3n matrix:

  H = | I  A  0 |
      | 0  I  B |
      | 0  0  I |

• Viewed as a graph on three groups of n vertices, the transitive closure adds exactly the edges from the 1st part to the 3rd part (a vertex of the 1st part reaches a vertex of the 3rd part only through the 2nd part).

• These new edges are described by the product A·B. Therefore,

  H* = | I  A  A·B |
       | 0  I  B   |
       | 0  0  I   |

and A·B can be read off the upper-right block of H*.
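A sketch of the reduction in code (my own illustration; transitive_closure is any routine that returns the reflexive-transitive closure of a 0/1 adjacency matrix, for instance either sketch given earlier):

def multiply_via_closure(A, B, transitive_closure):
    # Reduce boolean matrix product to a single transitive-closure computation.
    # Build the 3n x 3n block matrix H = [[0, A, 0], [0, 0, B], [0, 0, 0]]
    # (the identity blocks of the slide appear automatically as the reflexive part of H*).
    n = len(A)
    H = [[0] * (3 * n) for _ in range(3 * n)]
    for i in range(n):
        for j in range(n):
            H[i][n + j] = A[i][j]               # edges from part 1 to part 2
            H[n + i][2 * n + j] = B[i][j]       # edges from part 2 to part 3
    H_star = transitive_closure(H)
    # A vertex of part 1 reaches a vertex of part 3 only via one A-edge and one B-edge,
    # so the upper-right n x n block of H* is exactly A·B.
    return [[H_star[i][2 * n + j] for j in range(n)] for i in range(n)]

Because H is acyclic and already listed in topological order (all edges go from part 1 to part 2 or from part 2 to part 3), even the divide & conquer closure sketched above can be used for transitive_closure, which is what makes the two problems reducible to one another.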


Thanks