Networks
Abbas El Gamal
Stanford University
Shannon Lecture, ISIT 2012
Introduction
∙ Phenomenal growth in communication and computation networks
Mathematical theories:
  – Shannon's theory of information
  – Turing's theory of computation
Architectures:
  – von Neumann computer, application-specific processors
  – Packet-switched networks, cellular networks, layered protocols
Algorithms:
  – Signal processing and coding
  – Optimization and control
VLSI technologies (Moore's law, RF, CAD)
El Gamal (Stanford University) Introduction Shannon Lecture, ISIT 2012 2 / 57
Introduction
∙ I have been especially intrigued by problems in networks
∙ Much of my work has been in theoretical and applied areas of networks
∙ Theory work focused on performance limits and how to achieve them
∙ This will be the unifying theme of my lecture
∙ Lecture has autobiographical component, historical perspective (Verdu)
∙ More importantly, it is a tribute to the wonderful people I learned from, was inspired by, and collaborated with
Introduction
∙ Several different topics instead of one big topic (Berlekamp)
Maximizes chance each of you will find one topic interesting
∙ No new results, but some results may not be familiar to all of you
∙ Problems stated informally with only proof sketches or no proofs
∙ Focus not only on results, but also on stories behind them
Outline
∙ Network information theory:
  – Compress–forward for relay channel
  – Capacity of deterministic interference channel
∙ VLSI:
  – VLSI complexity of coding
  – Reconfiguring VLSI arrays around defects
∙ Communication complexity:
  – Computing cyclic shift
  – Computing parity in noisy broadcast network
∙ Teaching network information theory
∙ Looking ahead
First golden age of information theory
∙ Took place in the 50s and early 60s, mostly at LIDS
∙ It is a great honor to receive the Shannon award at MIT
Second golden age of information theory
∙ Took place in the mid 70s and early 80s, with major contributions by ISL
∙ I was lucky to do my PhD in the right place and at the right time
The early years
∙ My PhD was with Tom Cover on network information theory (NIT):
  – Broadcast channels
  – Relay channel
  – Multiple access channel with correlated sources
∙ After graduating, I started a course on NIT
∙ Worked on other NIT problems with great researchers:
  – Multiple descriptions
  – Relay networks
  – Channels with state
  – Interference channel
The early years
∙ Many of these early results received attention only 20+ years later
∙ Some I thought were leaves on the knowledge tree (Gallager)
∙ Turned out to be budding shoots that grew into healthy branches
Courtesy Amin El Gamal
Relay channel (RC)
[Figure: the sender encodes M ∈ [1 : 2^{nR}] into X1^n; the relay observes Y2^n and transmits x_{2i}(Y2^{i−1}); the channel is p(y2, y3 | x1, x2); the decoder estimates M from Y3^n]
∙ What is the capacity C (highest achievable transmission rate R)?
∙ What coding scheme achieves it?
∙ First studied by van der Meulen (1971)
∙ Capacity is not known in general; a key open problem in NIT
University of Hawaii
∙ In Spring 1976, Tom and I visited Norm Abramson
  – ALOHA packet radio network
∙ Of course I expected to have a great time in Hawaii
∙ David Slepian suggested I work on the relay channel for my dissertation
Cutset upper bound
∙ Motivated by the max-flow min-cut theorem of Ford–Fulkerson (1956) and Elias–Feinstein–Shannon (1956)

Theorem 4 (Cover–EG 1979)
C ≤ max_{p(x1,x2)} min{ I(X1, X2; Y3), I(X1; Y2, Y3 | X2) }

[Figure: the two cuts of the relay channel; R < I(X1, X2; Y3) is the cooperative MAC bound and R < I(X1; Y2, Y3 | X2) is the cooperative BC bound]

∙ Tight for most classes of relay channels with known capacities
∙ EG (1981) extended it to networks
∙ Tight for most networks with known capacity regions
Block Markov schemes
[Figure: b transmission blocks of n symbols each; messages M1, M2, M3, ..., M_{b−1} are sent in blocks 1 to b−1, and a known message 1 in block b]
∙ Codewords sent in a block depend on the message from the previous block
Decode–forward (DF)
[Figure: in block j, the sender transmits a codeword for M_j superimposed on M_{j−1}; the relay, having decoded M_j at the end of block j, transmits a codeword for M_{j−1}; the receiver decodes M_{j−1}]

Decode–forward lower bound (Cover–EG 1979)
C ≥ max_{p(x1,x2)} min{ I(X1, X2; Y3), I(X1; Y2 | X2) }

Theorem 1
The DF bound is tight for the physically degraded RC: X1 → (Y2, X2) → Y3
∙ Capacity coincides with the cutset bound
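Both the cutset upper bound and the DF lower bound can be evaluated numerically for any discrete memoryless relay channel by brute force over input distributions. The following is a minimal sketch (not from the lecture) on a made-up toy channel: the relay sees X1 through a BSC(0.1), and the receiver sees X1 through a BSC(0.3) together with the relay bit X2 on a separate noiseless link.

```python
import math

def H(pmf):
    """Entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 1e-12)

def marginal(joint, idx):
    """Marginal pmf over the coordinates listed in idx."""
    out = {}
    for x, p in joint.items():
        key = tuple(x[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B), coordinates selected by index tuples."""
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

def cmi(joint, a, b, c):
    """I(A;B|C) = I(A; B,C) - I(A; C)."""
    return mi(joint, a, b + c) - mi(joint, a, c)

# Toy channel (made up for illustration): outcome tuples are (x1, x2, y2, y3a, y3b),
# with Y2 = X1 xor Z2 (BSC(0.1) to the relay), Y3 = (X1 xor Z3, X2) so the receiver
# sees X1 through a BSC(0.3) plus the relay bit on a noiseless side link.
P2, P3 = 0.1, 0.3
best_cutset = best_df = 0.0
steps = [i / 20 for i in range(21)]
for q00 in steps:
    for q01 in steps:
        for q10 in steps:
            q11 = 1.0 - q00 - q01 - q10
            if q11 < -1e-9:
                continue
            joint = {}
            for (x1, x2), q in {(0, 0): q00, (0, 1): q01,
                                (1, 0): q10, (1, 1): max(q11, 0.0)}.items():
                for z2 in (0, 1):
                    for z3 in (0, 1):
                        pz = (1 - P2 if z2 == 0 else P2) * (1 - P3 if z3 == 0 else P3)
                        key = (x1, x2, x1 ^ z2, x1 ^ z3, x2)
                        joint[key] = joint.get(key, 0.0) + q * pz
            mac = mi(joint, (0, 1), (3, 4))         # I(X1,X2; Y3)
            bc = cmi(joint, (0,), (2, 3, 4), (1,))  # I(X1; Y2,Y3 | X2)
            df = cmi(joint, (0,), (2,), (1,))       # I(X1; Y2 | X2)
            best_cutset = max(best_cutset, min(mac, bc))
            best_df = max(best_df, min(mac, df))
print(f"cutset upper bound ≈ {best_cutset:.3f} bits/transmission")
print(f"decode-forward lower bound ≈ {best_df:.3f} bits/transmission")
```

For every p(x1, x2) we have I(X1; Y2 | X2) ≤ I(X1; Y2, Y3 | X2), so the computed DF rate can never exceed the computed cutset bound.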
Relay channel with feedback
[Figure: relay channel p(y2, y3 | x1, x2) with feedback; the relay input x_{2i}(Y2^{i−1}, Y3^{i−1}) may depend on both past relay and past receiver outputs]

Theorem 3
C_FB = max_{p(x1,x2)} min{ I(X1, X2; Y3), I(X1; Y2, Y3 | X2) }

∙ A rare example where capacity is not known, but is known with feedback
DF sometimes performs poorly
∙ When the channel X1 → Y2 is not better than X1 → Y3, DF performs poorly
∙ Led us to develop two other block coding schemes
Partial decode–forward
[Figure: the message is split as M_j = (M′_j, M″_j); the relay decodes and forwards only the part M′_j]

Special case of Theorem 7
C ≥ max_{p(u,x1,x2)} min{ I(X1, X2; Y3), I(U; Y2 | X2) + I(X1; Y3 | X2, U) }

∙ Optimal for several RC classes:
  – Semideterministic relay channel (EG–Aref 1982)
  – Deterministic relay networks with no interference (Aref 1980)
  – Relay channel with orthogonal sender components (EG–Zahedi 2005)
Compress–forward (CF)
[Figure: in block j, the relay conveys a compressed description Ŷ2^n of its observation Y2^n from the previous block; the receiver decodes M_{j−1}]

Theorem 6 (Cover–EG 1979)
C ≥ max_{p(x1)p(x2)p(ŷ2|y2,x2)} I(X1; Ŷ2, Y3 | X2),
such that I(X2; Y3) ≥ I(Y2; Ŷ2 | X2, Y3)

∙ We didn't prove any optimality results
∙ Never mentioned in Cover–Thomas!
20+ years later
∙ Cover–Kim (2007) showed that CF is optimal for a deterministic RC
∙ Aleksic–Razaghi–Yu (2009) found another example where CF is optimal
  Shows that the cutset bound is not tight in general
∙ EG–Mohseni–Zahedi (2006) established an equivalent characterization:

CF lower bound
C ≥ max_{p(x1)p(x2)p(ŷ2|y2,x2)} min{ I(X1, X2; Y3) − I(Y2; Ŷ2 | X1, X2, Y3), I(X1; Ŷ2, Y3 | X2) }

∙ Ahlswede–Cai–Li–Yeung (2000): Network coding for graphical networks
∙ Deterministic networks (Ratnakar–Kramer 2006, Avestimehr–Diggavi–Tse 2011)
∙ Erasure networks (Dana–Gowaikar–Palanki–Hassibi–Effros 2006)
∙ Lim–Kim–EG–Chung (2011): Noisy network coding (ISIT 2010 plenary)
  – Naturally extends the equivalent characterization of CF to networks
  – Includes network coding and its extensions as special cases!
∙ CF turned out to be the beginning of a general scheme for noisy networks
Interference channel (IC)
[Figure: encoder k maps M_k ∈ [1 : 2^{nR_k}] into X_k^n; channel p(y1, y2 | x1, x2); decoder k estimates M_k from Y_k^n, for k = 1, 2]
∙ What is the capacity region C (set of achievable rate pairs (R1, R2))?
∙ What coding scheme achieves it?
∙ First studied by Ahlswede (1974)
∙ Capacity region is not known in general; a key open problem in NIT
∙ Han–Kobayashi (1981) established the tightest known inner bound
Gaussian interference channel (G-IC)
[Figure: Gaussian IC with direct and cross gains g11, g12, g21, g22; each output Y_k is a linear combination of X1 and X2 plus additive Gaussian noise Z_k]
∙ Costa was interested in the optimality of Han–Kobayashi for the Gaussian IC
Injective deterministic interference channel (D-IC)
[Figure: interference signals T1 = t1(X1) and T2 = t2(X2); outputs Y1 = y1(X1, T2) and Y2 = y2(X2, T1), with y1, y2 injective in their first argument]
∙ Costa was interested in the optimality of Han–Kobayashi for the Gaussian IC
∙ EG–Costa (1982) introduced this Gaussian-like deterministic IC
∙ We showed that H–K is optimal for this class
  – Only known example for which general H–K is optimal
  – First to use the genie idea in a proof of converse
∙ But we didn't establish any real connection to the G-IC
25 years later, dots connected between D-IC and G-IC
∙ Etkin–Tse–Wang (2008) used it in the proof of the 1/2-bit gap theorem for the G-IC
∙ This connection was formalized further by Telatar–Tse (2007)
∙ Avestimehr–Diggavi–Tse (2011) introduced a deterministic approximation of the G-IC
∙ Our D-IC result helped spur a new direction in NIT
USC
∙ My first faculty job was at USC
∙ It was a privilege to be with coding and communication giants
∙ Biggest influence on my career came from Carver Mead (Caltech)
  – Renowned device physicist and National Medal of Technology winner
  – Father of modern VLSI (chip) design
∙ At the time, chips were designed by hand using a trial-and-error approach
∙ Algorithm and system developers knew nothing about chip design
∙ Carver realized that this approach would not scale with Moore's law
∙ Introduced the silicon compiler approach to design (Mead–Conway 1980)
∙ Enabled design of today's complex chips:
  – Dramatically improving design productivity
  – Having algorithm/system designers involved in high-level chip design
∙ Motivated me to work on applied and theoretical problems in VLSI
∙ Will describe two examples of my theoretical work
VLSI complexity
∙ Introduced by Thompson (1980)
∙ Consider a rectangular chip for computing a function f over {0, 1}^n
∙ A computation network (circuit) for f is laid out on a W × L grid:
  – Grid point: input, output, logic/memory element, or wire crossing
  – Grid line: constant number of wires
  – Each wire carries a constant number of bits per clock cycle
  – Chip area for f: A = W × L grid squares (W ≤ L)
  – Computation time for f: T clock cycles
∙ Thompson (1980) used a cutset argument to lower bound AT²:
  – Bisect the chip so that each side has n/2 inputs
  – Let I be the min # of bits exchanged across the bisection in any chip that computes f
  – I ≤ cWT ≤ c√A T ⇒ AT² = Ω(I²)
∙ Tight for sorting, DFT, matrix multiplication, ...
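The chain of inequalities in the cutset step can be spelled out as one display; this is just the slide's argument, with c the constant number of wires per grid line:

```latex
\[
  I \;\le\; cWT \;\le\; c\sqrt{A}\,T
  \qquad\Longrightarrow\qquad
  AT^2 \;\ge\; \frac{I^2}{c^2} \;=\; \Omega\!\left(I^2\right),
\]
% At most cW wires cross the bisection, so at most cW bits cross per clock
% cycle and at most cWT bits in T cycles; W <= sqrt(A) follows from A = WL
% together with W <= L.
```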
VLSI complexity of coding
∙ Consider a chip for (n, R, t) error correction coding (encoding/decoding)
∙ Arguments by Savage (1971) imply that AT² ≥ n log(t + 1)
∙ EG–Greene–Pang (1984) showed:

AT² lower bound I
Any chip for (n, R, t) error correction coding must satisfy
AT² = Ω(nR² t)
Proof sketch (for decoding)
∙ Partition the chip into 2n/t blocks, each with Rt/2 outputs
∙ On average, each block has t/2 inputs and perimeter Θ(√(At/n))
∙ There exists a block with ≤ t inputs and perimeter O(√(At/n))
∙ Suppose all ≤ t inputs in that block are set to zero by errors:
  – Then I ≥ Rt/2 bits must flow into the block in time T
  – But I ≤ cT × perimeter = O(T√(At/n)) ⇒ AT² = Ω(nR² t)
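The final implication follows by squaring; a sketch with the slide's constant c (bits crossing per unit perimeter per clock cycle):

```latex
\[
  \frac{Rt}{2} \;\le\; I \;\le\; cT\sqrt{\frac{At}{n}}
  \qquad\Longrightarrow\qquad
  \frac{R^2 t^2}{4} \;\le\; c^2 T^2 \frac{At}{n}
  \qquad\Longrightarrow\qquad
  AT^2 \;\ge\; \frac{n R^2 t}{4c^2} \;=\; \Omega\!\left(nR^2 t\right).
\]
```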
VLSI complexity of coding
∙ EG–Greene–Pang (1984) extended the argument to show:

AT² lower bound II
Any chip for (n, R, Pe) error correction coding must satisfy
AT² = Ω(nR² log(n/Pe))

∙ No related work since it was presented at a VLSI conference in this room!
∙ Rediscovered by Grover–Goldsmith–Sahai (2012)
∙ Their work will be presented tomorrow!
Configuring VLSI arrays around defects
∙ VLSI chips suffer from manufacturing defects
∙ Discarding every chip that has a defect makes yield unacceptably low
∙ Motivated much work on approaches to fault tolerance
∙ One approach: treat defects as noise and use coding
∙ Can reduce redundancy by finding defects and using this knowledge:
  – As side information for coding (Kuznetsov–Tsybakov 1974)
  – To reconfigure the system around defects, as routinely done in memory chips
∙ Greene–EG (1984) investigated the latter approach for processor arrays
Example: Configuring processor chain around defects
[Figure: a linear array of n processors, numbered 1 to n, with configurable connections, from which a chain of k good processors is formed]
∙ Build a chip with a chain of k processors
∙ Each processor is independently defective with probability p
∙ Build a linear array of n processors with configurable connections
∙ Configure k good processors into a chain
∙ By the LLN, this can be done w.h.p. if n > k/(1 − p)
∙ However, if n = Θ(k), the longest connection length is Θ(log n) w.h.p.
  Results in high interprocessor delay, a key metric in chip design
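Both effects are easy to see in simulation. A quick Monte Carlo sketch (my illustration, not from the paper); in a 1-D array, the longest connection needed is one more than the longest run of consecutive defective processors:

```python
import random

def simulate(n, k, p, trials=500, seed=1):
    """Return (fraction of chips with at least k good processors,
    worst longest-connection length seen) over Monte Carlo trials for a
    1-D array of n processors, each defective independently with prob p."""
    rng = random.Random(seed)
    ok = 0
    worst_link = 0
    for _ in range(trials):
        defective = [rng.random() < p for _ in range(n)]
        if n - sum(defective) >= k:
            ok += 1
        run = longest = 0
        for d in defective:
            run = run + 1 if d else 0       # current run of defectives
            longest = max(longest, run)
        worst_link = max(worst_link, longest + 1)
    return ok / trials, worst_link

k, p = 1000, 0.1
yield_short = simulate(int(0.95 * k / (1 - p)), k, p)[0]  # n below the LLN threshold
yield_long, link = simulate(int(1.10 * k / (1 - p)), k, p)  # n above the threshold
print(f"n slightly below k/(1-p): yield ≈ {yield_short:.2f}")
print(f"n slightly above k/(1-p): yield ≈ {yield_long:.2f}, longest link = {link}")
```

With these parameters the yield jumps from essentially 0 to essentially 1 across the n = k/(1 − p) threshold, while the longest link grows like the longest run of defectives, i.e. logarithmically in the number of positions sampled.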
Example: Configuring processor chain around defects
∙ Build a 2-D array of √n × √n processors with configurable connections
∙ Configure k good processors into a chain
∙ Greene–EG (1984) showed that w.h.p. the chain can be configured with n = Θ(k) and with longest connection length Θ(1)!
∙ Proof uses percolation theory
Field Programmable Gate Arrays (FPGAs)
Actel 1280 FPGA chip photo, 1990
Communication complexity
∙ NIT deals with information flow for communication in networks
  – Large iid block, diminishing error probability, single-letter characterization
∙ VLSI complexity lower bounds are based on information flow for computing
  – Deterministic, single-instance, zero error probability, min # of bits exchanged
∙ Yao (1979), Turing award winner, introduced communication complexity
  – Limits on information flow for distributed computing under the latter setup
Communication complexity
[Figure: Alice holds X, Bob holds Y; in each round one party sends a message, M_l(X, M^{l−1}) or M_{l+1}(Y, M^l); both wish to compute f(X, Y)]
∙ X and Y are chosen from finite sets
∙ They wish to compute f(X, Y)
∙ They communicate in rounds over a noiseless 2-way link
∙ What is the communication complexity C(f) (min # of bits exchanged)?
∙ What protocol achieves it?
∙ Pang–EG (1986), Orlitsky–EG (1990) resolved several open problems
Communication complexity of cyclic shift
∙ Example 4 in (EG–Orlitsky 1984), suggested by Tom Cover
∙ X ∈ {0, 1}^n; Y is a cyclic shift of X
∙ f(X, Y) ∈ {0, 1, ..., n − 1} is the shift amount
∙ What is the communication complexity C(f)?
∙ Trivial upper bound: C(f) ≤ n + ⌈log n⌉
∙ Lower bound: C(f) ≥ ⌈2(1 − 2^{−(n/2−1)}) log n⌉
∙ We showed that C(f) ≤ 2⌈log n⌉
∙ Our scheme: X = 0110100011101010, Y = 1010011010001110, f(X, Y) = 4
  Let Z = 1110101001101000 be the largest among all shifts of X (and Y)
  Alice sends the shift amount from Z to X
  Bob sends the shift amount from Z to Y
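The 2⌈log n⌉-bit scheme is easy to code up: each party maps its string to the canonical rotation Z (the lexicographically largest one, which is the same for X and Y) and announces its rotation index, ⌈log₂ n⌉ bits each. A sketch assuming Y is a rotation of X and the canonical rotation is attained at a unique index (e.g. X aperiodic):

```python
def rot_left(s, i):
    """Cyclic left rotation of string s by i positions."""
    return s[i:] + s[:i]

def canonical_index(s):
    """Rotation index i such that rot_left(s, i) is lexicographically largest.
    This index is the party's entire message: ceil(log2 n) bits."""
    return max(range(len(s)), key=lambda i: rot_left(s, i))

def cyclic_shift_protocol(X, Y):
    """Alice sends i_x = canonical_index(X), Bob sends i_y = canonical_index(Y);
    both rotations equal the same canonical string Z, so the difference of the
    indices recovers f(X, Y): the right-rotation taking X to Y."""
    n = len(X)
    i_x = canonical_index(X)  # Z = rot_left(X, i_x)
    i_y = canonical_index(Y)  # Z = rot_left(Y, i_y)
    return (i_y - i_x) % n

X = "0110100011101010"
Y = X[-4:] + X[:-4]  # right-rotate X by 4, the slide's example
assert Y == "1010011010001110"
print(cyclic_shift_protocol(X, Y))  # → 4
```

If X is periodic the shift is only determined modulo the period, and this sketch returns one valid representative.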
Communication complexity in presence of noise
∙ The communication complexity setup assumes error-free communication
∙ Real-world computing networks suffer from noise
∙ What is the communication complexity in the presence of noise?
∙ Related to reliable computing with unreliable components, studied by von Neumann, Moore, Shannon, Elias, Dobrushin, Winograd, ...
  – Different setups
  – Different conclusions
∙ I proposed a simple reliable distributed computing problem
Noisy broadcast network (in Cover–Gopinath (1987))
[Figure: nodes s1, s2, . . . , sn−1, sn communicating over a common noisy broadcast channel]
∙ Node j ∈ [1 : n] has a bit s_j
∙ Node 1 wishes to compute the parity function ⊕(s1, s2, . . . , sn)
∙ Nodes communicate in 1-bit rounds over the broadcast network
∙ The bit transmitted by node j depends on s_j and its past received bits
∙ Each transmitted bit is received via an independent BSC(p) at every other node
∙ What is C, the minimum number of bits exchanged to compute ⊕(s^n) with Pe < є?
∙ What protocol achieves it?
∙ Trivial lower bound: C = Ω(n)
∙ Simple upper bound: C = O(n log n) (repetition)
∙ Can we do better?
∙ Gallager (1988) showed: C = O(n log log n)
El Gamal (Stanford University) Part III: Communication complexity Shannon Lecture, ISIT 2012 41 / 57
Gallager’s O(n log log n) scheme for parity
∙ Each node broadcasts its bit Θ(log log n) times
∙ Nodes are pre-partitioned into groups of Θ(log n) nodes
∙ Each node estimates its group’s parity and broadcasts it once
∙ Node 1 receives Θ(log n) estimates of each group’s parity:
  it forms a reliable estimate of each group’s parity
  and adds them mod 2 to estimate the overall parity
∙ Gallager extended the scheme to recovering all bits with C = O(n log log n):
  each node broadcasts the parity of a different group of Θ(log n) nodes
El Gamal (Stanford University) Part III: Communication complexity Shannon Lecture, ISIT 2012 42 / 57
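The two phases can be illustrated with a toy Monte-Carlo sketch (my own simplified rendering, not the protocol's exact bookkeeping; the names `reps` and `group_size` stand in for the Θ(log log n) and Θ(log n) parameters): phase 1 is repetition plus in-group majority voting, phase 2 is one parity vote per node, majority-decoded per group at node 1.

```python
import random

def bsc(bit, p, rng):
    # binary symmetric channel with crossover probability p
    return bit ^ (rng.random() < p)

def gallager_parity(bits, p, reps, group_size, rng):
    n = len(bits)
    groups = [list(range(g, min(g + group_size, n)))
              for g in range(0, n, group_size)]

    # Phase 1: each node broadcasts its bit `reps` times; group members
    # majority-decode it (each receiver sees independent BSC noise)
    def estimate(receiver, sender):
        if receiver == sender:
            return bits[sender]            # a node knows its own bit
        ones = sum(bsc(bits[sender], p, rng) for _ in range(reps))
        return 1 if 2 * ones > reps else 0

    # Phase 2: each node broadcasts its estimate of its group's parity once
    votes = {}
    for group in groups:
        for j in group:
            parity = 0
            for k in group:
                parity ^= estimate(j, k)
            votes[j] = parity

    # Node 1 (index 0 here) majority-decodes each group's parity from the
    # noisy votes, then adds the group parities mod 2
    total = 0
    for group in groups:
        ones = sum(bsc(votes[j], p, rng) if j != 0 else votes[j] for j in group)
        total ^= 1 if 2 * ones > len(group) else 0
    return total
```

With, say, n = 64, p = 0.05, `reps = 9`, and groups of 8, the estimate is correct with high probability; growing `reps` like log log n and `group_size` like log n is what yields the O(n log log n) total bit count.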
Follow-on work
∙ The problem received no further attention from the IT community
∙ Yao (1997) popularized it in the theoretical CS community
∙ Goyal–Kindler–Saks (2008) showed that:
  – Gallager’s O(n log log n) scheme for recovering all bits is order tight
  – But for computing ⊕(s^n): C = O(n)!
∙ The Goyal–Kindler–Saks O(n) scheme for parity:
  – Computes the Hamming weight w(s^n) reliably
  – Each node broadcasts its bit a constant number of times
  – w(s^n) is estimated using O(n) rounds of binary questions
El Gamal (Stanford University) Part III: Communication complexity Shannon Lecture, ISIT 2012 43 / 57
Outline
∙ Network information theory:
  – Compress–forward for relay channel
  – Capacity of deterministic interference channel
∙ VLSI:
  – VLSI complexity of coding
  – Reconfiguring VLSI arrays around defects
∙ Communication complexity:
  – Computing cyclic shift
  – Computing parity in noisy broadcast network
∙ Teaching network information theory
∙ Looking ahead
El Gamal (Stanford University) Part IV: Teaching NIT Shannon Lecture, ISIT 2012 44 / 57
Third golden age of information theory
∙ Started in the mid-1990s
  Fueled by the Internet and cellular wireless communication
∙ With contributions by many researchers around the world
El Gamal (Stanford University) Part IV: Teaching NIT Shannon Lecture, ISIT 2012 45 / 57
Back to the future
∙ I was drawn back to NIT mainly by growing student interest
∙ Collaborated with great researchers on old and new problems
∙ Most significant project was teaching network information theory
El Gamal (Stanford University) Part IV: Teaching NIT Shannon Lecture, ISIT 2012 45 / 57
Teaching NIT
∙ I started teaching NIT again in 2002 (after 18 years!)
∙ The first class had several rising stars, including Young-Han Kim
∙ We converted the lecture notes into a book—with help from many of you
∙ The book presents NIT models and results in a simple and unified manner
[Figure courtesy of Young-Han Kim]
El Gamal (Stanford University) Part IV: Teaching NIT Shannon Lecture, ISIT 2012 46 / 57
Teaching NIT made simple
∙ There are several viable ways to teach a first course on IT (Verdu)
∙ Currently, there is only one unified way to teach NIT:
  – Random coding (Shannon)
  – Typicality (Shannon, Forney, Cover)
  – Weak converse (Fano)
∙ Robust typicality (Orlitsky–Roche 2001):

  T_є^(n)(X) = {x^n ∈ X^n : |π(x | x^n) − p(x)| ≤ є p(x) for all x ∈ X},

  where π(x | x^n) = |{i : x_i = x}| / n for x ∈ X

  – Can use it to prove achievability for all discrete memoryless (DM) systems
  – Lossless source coding becomes a corollary of lossy source coding
∙ What about achievability for Gaussian models?
  – Extend proofs for DM ⇒ DM with cost
  – Discretize signals and use appropriate limit theorems (McEliece)
∙ Where would IT be today if Shannon had considered only the Gaussian case?
El Gamal (Stanford University) Part IV: Teaching NIT Shannon Lecture, ISIT 2012 47 / 57
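The robust typicality condition is simple enough to state in code. A minimal membership test (illustrative only; `p` is assumed to be a dict mapping each alphabet symbol to its probability):

```python
from collections import Counter

def robustly_typical(xn, p, eps):
    # Membership in T_eps^(n)(X): the empirical pmf pi(x|x^n) must satisfy
    # |pi(x|x^n) - p(x)| <= eps * p(x) for every symbol x in the alphabet
    n = len(xn)
    counts = Counter(xn)
    if any(x not in p for x in counts):
        return False                       # a zero-probability symbol occurred
    return all(abs(counts.get(x, 0) / n - px) <= eps * px
               for x, px in p.items())
```

For example, with p = {"0": 0.6, "1": 0.4} and є = 0.1, a length-10 sequence with six 0s and four 1s is robustly typical, while one with nine 0s is not.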
NIT course
∙ Prerequisites: basic probability, MSE estimation, convexity
∙ Course syllabus can be customized to different audiences:
  – First or second course on IT
  – Interest in theory versus applications (e.g., communications)
  – Channel coding versus source coding
∙ My course is for EE students in IT, comm, networking, multimedia
∙ Focuses on models, coding schemes, and techniques
∙ Some achievability proofs:
  – Key lemmas (packing, covering, . . . )
  – Detailed proofs for basic building blocks (MAC, BC, IC, . . . )
  – Proof sketches for more complex results
∙ Few converse proofs:
  – Fano’s converse for the DMC (with feedback)
  – Converse for the lossy source coding theorem
  – Gallager’s converse for the degraded BC
  – Converse for the more capable BC (Csiszár sum)
∙ Final projects, many of which have led to research publications
El Gamal (Stanford University) Part IV: Teaching NIT Shannon Lecture, ISIT 2012 48 / 57
Summary
∙ Presented examples of work on limits on the performance of networks
  – Some results were unexpected, like good jokes (Cover)
  – Some were like buds on a tree—difficult to predict how they will develop
  – Others led to unintended consequences (reconfiguring VLSI arrays ⇒ FPGAs)
∙ Recurring themes (setups, cutset bound, n and log n)
∙ NIT is now ready to be taught to wider audiences
  – Should consider teaching it in the graduate comm/networking curriculum
El Gamal (Stanford University) Summary Shannon Lecture, ISIT 2012 49 / 57
Looking ahead
∙ Much remains to be done on performance limits of networks
∙ Many basic open problems in NIT (BC, IC, RC, . . . )
∙ The problems appear to be very hard, and many talks open with:
  Here is the information flow problem. We don’t know capacity
  even for simple building blocks, so let’s do something else:
  approximate, find capacity scaling, study wired networks, . . .
∙ These are interesting and potentially fruitful research directions
∙ But they shouldn’t stop us from working on the basic open problems
∙ Resolution of basic problems has had the most impact on theory and practice
∙ Remember what Shannon said in his 1956 Bandwagon paper:
  Seldom do more than a few of nature’s secrets give way at one time
El Gamal (Stanford University) Looking ahead Shannon Lecture, ISIT 2012 50 / 57
Looking ahead
∙ We should also think broadly
∙ The founder of our field worked in many areas (Sloane–Wyner 1993):
  – Logic design and reliability
  – Genetics
  – Signal processing (sampling theory)
  – Cryptography
  – Communication and information theory
  – Computer science
  – Linguistics
  – Finance (gambling)
  – Gadgetry (rocket-powered frisbee, mechanical Rubik’s cube solver, unicycles, chess-playing machines, mind-reading machine, . . . )
∙ There are exciting opportunities in new types of networks:
  – Smart grids
  – Nano and quantum computing
  – Biological networks
  – Social networks
  – Economic networks
∙ In deciding on problems to work on, we should take guidance from Shannon:
  I am very seldom interested in applications. I am more interested in the
  elegance of a problem. Is it a good problem, an interesting problem?
El Gamal (Stanford University) Looking ahead Shannon Lecture, ISIT 2012 51 / 57
Acknowledgments
∙ Young-Han, Alon, Bernd, John, Pulkit for feedback on presentation
∙ DARPA and NSF for funding
∙ IT Society for welcoming me back after many years of absence
Dedication
Suzanne, Amin, Abrahim, and Ashraf
Dr. Amin El Gamal
Tom Cover
Thank You!
References
Ahlswede, R. (1974). The capacity region of a channel with two senders and two receivers. Ann. Probability, 2(5), 805–814.
Ahlswede, R., Cai, N., Li, S.-Y. R., and Yeung, R. W. (2000). Network information flow. IEEE Trans. Inf. Theory, 46(4), 1204–1216.
Aleksic, M., Razaghi, P., and Yu, W. (2009). Capacity of a class of modulo-sum relay channels. IEEE Trans. Inf. Theory, 55(3), 921–930.
Aref, M. R. (1980). Information flow in relay networks. Ph.D. thesis, Stanford University, Stanford, CA.
Avestimehr, A. S., Diggavi, S. N., and Tse, D. N. C. (2011). Wireless network information flow: A deterministic approach. IEEE Trans. Inf. Theory, 57(4), 1872–1905.
Cover, T. and Gopinath, B. (1987). Open Problems in Communication and Computation. Springer-Verlag.
Cover, T. M. and EG, A. (1979). Capacity theorems for the relay channel. IEEE Trans. Inf. Theory, 25(5), 572–584.
Cover, T. M. and Kim, Y.-H. (2007). Capacity of a class of deterministic relay channels. In Proc. IEEE Int. Symp. Inf.
Theory, Nice, France, pp. 591–595.
Dana, A. F., Gowaikar, R., Palanki, R., Hassibi, B., and Effros, M. (2006). Capacity of wireless erasure networks. IEEE Trans.
Inf. Theory, 52(3), 789–804.
EG, A. (1981). On information flow in relay networks. In Proc. IEEE National Telecomm. Conf., vol. 2, pp. D4.1.1–D4.1.4, New Orleans, LA.
EG, A. and Aref, M. R. (1982). The capacity of the semideterministic relay channel. IEEE Trans. Inf. Theory, 28(3), 536.
EG, A. and Costa, M. H. M. (1982). The capacity region of a class of deterministic interference channels. IEEE Trans. Inf.
Theory, 28(2), 343–346.
EG, A., Greene, J., and Pang, K. (1984). VLSI complexity of coding. In Proc. of the MIT Conference on Advanced Research in VLSI, Cambridge, MA, pp. 150–158.
EG, A., Mohseni, M., and Zahedi, S. (2006). Bounds on capacity and minimum energy-per-bit for AWGN relay channels. IEEE
Trans. Inf. Theory, 52(4), 1545–1561.
El Gamal (Stanford University) Looking ahead Shannon Lecture, ISIT 2012 55 / 57
References (cont.)
EG, A. and Orlitsky, A. (1984). Interactive data compression. In Proc. 25th Ann. Symp. Found. Comput. Sci., Washington, DC, pp. 100–108.
EG, A. and Zahedi, S. (2005). Capacity of a class of relay channels with orthogonal components. IEEE Trans. Inf. Theory, 51(5), 1815–1817.
Elias, P., Feinstein, A., and Shannon, C. E. (1956). A note on the maximum flow through a network. IRE Trans. Inf. Theory, 2(4), 117–119.
Etkin, R., Tse, D. N. C., and Wang, H. (2008). Gaussian interference channel capacity to within one bit. IEEE Trans. Inf.
Theory, 54(12), 5534–5562.
Ford, L. R., Jr. and Fulkerson, D. R. (1956). Maximal flow through a network. Canad. J. Math., 8(3), 399–404.
Gallager, R. (1988). Finding parity in a simple broadcast network. IEEE Trans. Inf. Theory, 34(2), 176–180.
Goyal, N., Kindler, G., and Saks, M. (2008). Lower bounds for the noisy broadcast problem. SIAM J. Comput., 1806–1841.
Greene, J. and EG, A. (1984). Configuration of VLSI arrays in the presence of defects. Journal of the Association for Computing Machinery, 31(4), 694–717.
Grover, P., Goldsmith, A., and Sahai, A. (2012). In Proc. IEEE Int. Symp. Inf. Theory, Cambridge, MA.
Han, T. S. and Kobayashi, K. (1981). A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory, 27(1), 49–60.
Kuznetsov, A. V. and Tsybakov, B. S. (1974). Coding in a memory with defective cells. Probl. Inf. Transm., 10(2), 52–60.
Lim, S. H., Kim, Y.-H., EG, A., and Chung, S.-Y. (2011). Noisy network coding. IEEE Trans. Inf. Theory, 57(5), 3132–3152.
Mead, C. and Conway, L. (1980). Introduction to VLSI Systems. Addison-Wesley, Reading, MA.
Orlitsky, A. and EG, A. (1990). Average and randomized communication complexity. IEEE Trans. Inf. Theory, 36(1), 3–16.
Orlitsky, A. and Roche, J. R. (2001). Coding for computing. IEEE Trans. Inf. Theory, 47(3), 903–917.
El Gamal (Stanford University) Looking ahead Shannon Lecture, ISIT 2012 56 / 57
References (cont.)
Pang, K. and EG, A. (1986). Communication complexity of computing the Hamming distance. SIAM J. Comput., 15(4), 932–947.
Ratnakar, N. and Kramer, G. (2006). The multicast capacity of deterministic relay networks with no interference. IEEE Trans.
Inf. Theory, 52(6), 2425–2432.
Savage, J. E. (1971). Complexity of decoders: II – Computational work and decoding time. IEEE Trans. Inf. Theory, 17(1), 77–85.
Sloane, N. J. A. and Wyner, A. D., eds. (1993). Claude Shannon: Collected Papers. IEEE Press, New Jersey.
Telatar, I. E. and Tse, D. N. C. (2007). Bounds on the capacity region of a class of interference channels. In Proc. IEEE Int.
Symp. Inf. Theory, Nice, France, pp. 2871–2874.
Thompson, C. (1980). A Complexity Theory for VLSI. Ph.D. thesis, Carnegie–Mellon University, Pittsburgh, PA.
van der Meulen, E. C. (1971). Three-terminal communication channels. Adv. Appl. Probab., 3(1), 120–154.
Yao, A. (1997). On the complexity of communication under noise. In Proc. 5th Israel Symp. Theory of Computing and Systems (ISTCS).
Yao, A. C.-C. (1979). Some complexity questions related to distributive computing. In Proc. 11th Ann. ACM Symp. Theory
Comput., Atlanta, Georgia, pp. 209–213.
El Gamal (Stanford University) Looking ahead Shannon Lecture, ISIT 2012 57 / 57