Week 4 – Random Graphs
Dr. Anthony Bonato, Ryerson University
AM8002, Fall 2014
Complex Networks 2
Random graphs
Paul Erdős and Alfred Rényi
G(n,p) random graph model (Erdős, Rényi, 63)
• p = p(n) a real number in (0,1), n a positive integer
• G(n,p): probability space on graphs with nodes {1,…,n}, two nodes joined independently and with probability p
Formal definition
• n a positive integer, p a real number in [0,1]
• G(n,p) is a probability space on labelled graphs with vertex set V = [n] = {1,2,…,n} such that

Pr[G] = p^{|E(G)|}(1 − p)^{\binom{n}{2} − |E(G)|}

• NB: p can be a function of n; today, p is a constant
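The definition above amounts to flipping an independent p-coin for each of the \binom{n}{2} possible edges. A minimal sketch in Python (my choice of language; not part of the slides):

```python
import itertools
import random

def sample_gnp(n, p, seed=None):
    """Sample a labelled graph from G(n,p) on vertex set {1,...,n}:
    each of the C(n,2) possible edges is included independently
    with probability p."""
    rng = random.Random(seed)
    return {(i, j)
            for i, j in itertools.combinations(range(1, n + 1), 2)
            if rng.random() < p}

# p = 1 gives the complete graph, p = 0 the empty graph
print(len(sample_gnp(5, 1.0)), len(sample_gnp(5, 0.0)))  # 10 0
```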
Properties of G(n,p)
• consider some graph G in G(n,p)
• the graph G could be any n-vertex graph, so not much can be said about G with certainty
• some properties of G, however, are likely to hold
• we are interested in properties that occur with high probability when n is large
A.a.s.
• an event An happens asymptotically almost surely (a.a.s.) in G(n,p) if it holds there with probability tending to 1 as n→∞
Theorem 4.1. A.a.s. G in G(n,p) has diameter 2.

• for brevity, we just say: a.a.s. G(n,p) has diameter 2.
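The theorem can be sanity-checked by simulation (a sketch, not from the slides; the parameter values are arbitrary). A graph has diameter at most 2 exactly when every pair of non-adjacent vertices shares a common neighbour, and for constant p a pair without one exists with probability at most \binom{n}{2}(1 − p²)^{n−2} → 0:

```python
import itertools
import random

def gnp_adjacency(n, p, rng):
    """Adjacency sets of one G(n,p) sample on vertices 0..n-1."""
    adj = [set() for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def diameter_at_most_2(adj):
    """True iff every vertex pair is adjacent or has a common neighbour."""
    return all(j in adj[i] or adj[i] & adj[j]
               for i, j in itertools.combinations(range(len(adj)), 2))

rng = random.Random(42)
hits = sum(diameter_at_most_2(gnp_adjacency(100, 0.5, rng)) for _ in range(20))
print(hits, "of 20 samples have diameter at most 2")
```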
First moment method
• in G(n,p), all graph parameters:
|E(G)|, γ(G), ω(G), …
become random variables
• we focus on computing the expectation (average value) of these parameters
Discussion
Calculate the expected number of edges in G(n,p).
• use of expectation when studying random graphs is sometimes referred to as the first moment method
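By linearity of expectation, each of the \binom{n}{2} potential edges contributes p, so E[|E(G)|] = p\binom{n}{2}. A quick Monte Carlo check (a sketch, not part of the slides; parameter values arbitrary):

```python
import random

def edge_count(n, p, rng):
    # flip one p-coin per unordered vertex pair and count successes
    return sum(1
               for i in range(n) for j in range(i + 1, n)
               if rng.random() < p)

n, p, trials = 50, 0.3, 2000
rng = random.Random(0)
average = sum(edge_count(n, p, rng) for _ in range(trials)) / trials
expected = p * n * (n - 1) / 2  # p * C(n,2) = 367.5 by linearity
print(average, "vs", expected)
```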
Degrees and diameter
Theorem 4.2: A.a.s. the degree of each vertex of G in G(n,p) equals

np + O(\sqrt{np \log n}) = (1 + o(1))np

• concentration: binomial distribution
Markov’s inequality
Theorem 4.3 (Markov’s inequality)
For any non-negative random variable X and t > 0, we have that
Pr[X ≥ t] ≤ E[X]/t.
Chernoff bound
Theorem 4.4 (Chernoff bound)
Let X be a binomially distributed random variable on G(n,p) with E[X] = np. Then for ε ≤ 3/2 we have that

Pr[|X − E[X]| ≥ εE[X]] ≤ 2 exp(−ε²E[X]/3).
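The bound can be compared against simulation (a sketch, not from the slides; the values of n, p, ε are arbitrary):

```python
import math
import random

# Empirically estimate the deviation probability of a Binomial(n,p)
# variable and compare it with the Chernoff bound 2*exp(-eps^2*E[X]/3).
n, p, eps, trials = 1000, 0.3, 0.2, 2000
mean = n * p
rng = random.Random(0)
hits = sum(
    abs(sum(rng.random() < p for _ in range(n)) - mean) >= eps * mean
    for _ in range(trials)
)
empirical = hits / trials
bound = 2 * math.exp(-eps ** 2 * mean / 3)
print(empirical, "<=", bound)
```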
Martingales
• let X and Y be random variables on the same probability space
• the conditional mass function of X given Y = y is defined by

f_{X|Y}(x|y) = Pr[X = x | Y = y]

• note that for a fixed y, f_{X|Y}(x|y) is a function of x
• the conditional expectation of X when Y = y is given by

E[X | Y = y] = \sum_x x f_{X|Y}(x|y)

• let g(y) = E[X | Y = y]; then g(Y) is the conditional expectation of X on Y, written E[X|Y]
Intuition
• E[X|Y] is the expected value of X assuming Y is known
• note that E[X|Y] is a random variable: its precise value depends on the value of Y
Definition
• a martingale is a sequence (X0,X1,...,Xt) of random variables over a given probability space such that for all i > 0,

E[Xi | X0,X1,...,Xi-1] = Xi-1
Example
• a gambler starts with $100
• she flips a fair coin t times; when the coin is heads, she wins $1; when tails, she loses $1
• let Xi denote the gambler's bankroll after i flips
• then (X0,X1,...,Xt) is a martingale, since:

E[Xi | X0,X1,...,Xi-1] = 1/2(Xi-1 + 1) + 1/2(Xi-1 − 1) = Xi-1
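A quick simulation of the example (a sketch, not part of the slides): since (X0,X1,...,Xt) is a martingale, E[Xt] = E[X0] = 100 no matter how many flips are made.

```python
import random

def final_bankroll(t, rng):
    """Gambler starts at $100 and makes t fair +-$1 bets."""
    x = 100
    for _ in range(t):
        x += 1 if rng.random() < 0.5 else -1
    return x

rng = random.Random(1)
trials, t = 20000, 50
average = sum(final_bankroll(t, rng) for _ in range(trials)) / trials
print(average)  # close to 100, as the martingale property predicts
```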
Doob martingales
• let A, Z1,..., Zt be random variables
• define X0 = E[A], Xi = E[A| Z1,..., Zi ] for 1 ≤ i ≤ t
• can be shown that (X0,X1,...,Xt) is a martingale; called the Doob martingale
• Idea: A = f(Z1,...,Zt) for some function f, with X0 = E[A] and Xt = A
• each Zi is “revealed” more and more until we know everything and hence, A
Azuma-Hoeffding inequality
Theorem 4.5 Let (X0,X1,...,Xt) be a martingale such that |Xi+1 – Xi| ≤ c for all i (c-Lipschitz condition).
Then for all λ > 0,

Pr[|Xt − X0| ≥ λ] ≤ 2 exp(−λ²/(2tc²)).

• concentration inequality
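The gambler's bankroll from the earlier example is a martingale with |Xi+1 − Xi| = 1, i.e. c = 1, so the theorem bounds its deviation after t flips. An empirical check (a sketch, not from the slides; t and λ are arbitrary):

```python
import math
import random

# Bankroll after t fair +-$1 bets is X_0 plus a +-1 random walk; with c = 1,
# Azuma-Hoeffding gives Pr[|X_t - X_0| >= lam] <= 2*exp(-lam^2 / (2t)).
t, lam, trials = 100, 25, 5000
rng = random.Random(0)
hits = sum(
    abs(sum(1 if rng.random() < 0.5 else -1 for _ in range(t))) >= lam
    for _ in range(trials)
)
empirical = hits / trials
bound = 2 * math.exp(-lam ** 2 / (2 * t))
print(empirical, "<=", bound)
```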
Example: vertex colouring
• let A = χ(G(n,p)), and let Zi contain the information on the presence/absence of edges ij with j < i
• the Doob martingale here is called the vertex-exposure martingale: reveal one vertex at a time
Concentration of chromatic number
Theorem 4.6 For G in G(n,p) and all real λ > 0,

Pr[|χ(G) − E[χ(G)]| ≥ λ] ≤ 2 exp(−λ²/(2n)).

• hence, χ(G(n,p)) is concentrated around its expectation; this was proved before anyone knew E[χ(G(n,p))]!
Aside: evolution of G(n,p)
• think of G(n,p) as evolving from a co-clique to a clique as p increases from 0 to 1
• at p = 1/n, Erdős and Rényi observed that something interesting happens a.a.s.:
– with p = c/n, c < 1, the graph is disconnected, with all components trees, the largest of order Θ(log(n))
– with p = c/n, c > 1, the graph contains a giant component of order Θ(n)
• Erdős and Rényi called this the double jump
• physicists call it the phase transition: it is similar to phenomena like freezing or boiling