
Studying Different Problems from Distributed Computing

• Several of these problems are motivated by trying to use solutions from `centralized computing' in distributed computing

Mutual Exclusion

Problem statement:

Given a set of n processes and a shared resource, it is required that:

– Mutual exclusion
  • At any time, at most one process is accessing the resource

– Liveness
  • If a process requests the resource, it can eventually access the resource

Solution to mutual exclusion

• How could we do this if all processes shared a common clock?
  – Each process timestamps its request
  – The process with the lowest timestamp is allowed to access the critical section

• What are the properties of clocks that enable us to solve this problem?

Problem

• Logical clocks could assign the same value to different events
  – Need to order these events

Logical Timestamps

• The time associated with an event is a pair, the clock and the process where the event occurred.

• For event a at process j, the timestamp is ts.a = <cl.a, j>

• cl.a = clock value assigned by logical clock

Lexicographical comparison: <x1, x2> < <y1, y2> iff x1 < y1 ∨ ((x1 = y1) ∧ (x2 < y2))
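The comparison rule above matches Python's built-in tuple ordering, so a sketch of it is tiny (the name `ts_less` is illustrative, not from the slides):

```python
# Lexicographic comparison of logical timestamps <clock, process_id>.
# Python tuples already compare lexicographically, which is exactly:
# (x1, x2) < (y1, y2)  iff  x1 < y1 or (x1 == y1 and x2 < y2).

def ts_less(a, b):
    """Return True iff timestamp a = (clock, pid) precedes b."""
    return a < b  # tuple comparison is lexicographic

assert ts_less((3, 1), (4, 0))      # smaller clock value wins
assert ts_less((3, 1), (3, 2))      # equal clocks: smaller process id wins
assert not ts_less((3, 2), (3, 2))  # irreflexive, so distinct events are totally ordered
```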

Observation about Logical Clocks

• For any two distinct events a and b, either ts.a < ts.b or ts.b < ts.a

• The event timestamps form a total order.

Assumption

• Communication is FIFO

Solution to mutual exclusion, based on logical clocks

• Messages are timestamped with logical clocks

• Each process maintains a queue of pending requests

• When process j wants to access the resource, it adds its timestamp to the queue, and sends a request message containing its timestamp to all other processes

• When process k receives a request message from j, it adds j's request to its queue and sends a reply message to j

Solution to mutual exclusion, based on logical clocks (continued)

• Process j accesses the resource (enters critical section) iff
  – it has received a reply from every other process
  – its queue does not contain a request with a timestamp smaller than its own

• After a process is done accessing its critical section, it sends a release message to all processes and removes its own request from the pending queue

• When a process k receives the release message from j, it removes the entry of j from its pending queue
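The steps above can be put together as a minimal Python sketch of one process. The class name, the `send(dst, msg)` network hook, and the message format are illustrative assumptions; channels are assumed FIFO, as the slides require:

```python
import heapq

class LamportMutex:
    """One process in Lamport's mutual exclusion algorithm (a sketch).
    `send(dst, msg)` is a caller-supplied hook; channels are FIFO."""

    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send
        self.clock = 0              # logical clock
        self.queue = []             # pending requests as (clock, pid) pairs
        self.replies = set()        # peers that replied to our request

    def _tick(self, other=0):
        self.clock = max(self.clock, other) + 1
        return self.clock

    def request(self):
        ts = (self._tick(), self.pid)
        heapq.heappush(self.queue, ts)              # queue own request
        self.replies = set()
        for k in self.peers:                        # request to all others
            self.send(k, ('request', ts, self.pid))

    def on_message(self, kind, ts, src):
        self._tick(ts[0])
        if kind == 'request':                       # queue it and reply
            heapq.heappush(self.queue, ts)
            self.send(src, ('reply', (self.clock, self.pid), self.pid))
        elif kind == 'reply':
            self.replies.add(src)
        elif kind == 'release':                     # drop src's request
            self.queue = [t for t in self.queue if t[1] != src]
            heapq.heapify(self.queue)

    def can_enter(self):
        # enter CS iff every peer replied and our request heads the queue
        # (no queued request has a smaller timestamp)
        return (len(self.replies) == len(self.peers)
                and bool(self.queue) and self.queue[0][1] == self.pid)

    def release(self):
        self.queue = [t for t in self.queue if t[1] != self.pid]
        heapq.heapify(self.queue)
        for k in self.peers:
            self.send(k, ('release', (self._tick(), self.pid), self.pid))
```

Counting messages in the sketch also answers the question on the next slide: each CS entry costs (n-1) requests, (n-1) replies, and (n-1) releases, i.e. 3(n-1) messages.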

Solution to mutual exclusion, based on logical clocks (continued)

• This is called Lamport’s mutual exclusion algorithm

• What is the number of messages sent for every access to critical section?

Correctness Argument

• Consider each of these 3 situations:
  – req(j) → req(k)
  – req(k) → req(j)
  – req(j) || req(k)
• Show that in each of these cases, the process with the smaller timestamp enters the CS first.

• Suppose j and k request the CS simultaneously
  – Assume that j's request is satisfied first
  – After j releases the CS, it requests again immediately
• Show that j's second request cannot be satisfied before k's first request

Optimizations

• Should a process wait for a reply message from every other process?

• Should a process send a reply message immediately?

• Answer these questions to obtain a protocol where only 2(n-1) messages are used for each critical section access

Optimizations

• Should a process wait for a reply message from every other process?
  – Not if the timestamp of j's request is larger than k's timestamp
    • How can k learn that j's request timestamp is larger?

Optimizations

• Should a process send a reply message immediately?
  – k receives a request from j
    • k is requesting
      – timestamp of j is larger
        » No need to send a reply right away, since j has to wait until k accesses its critical section first
        » Fine to delay the reply until k finishes its critical section
      – timestamp of k is larger
    • k is not requesting

Optimization

• Release message
  – Should we send it to all?
    • No; send it only to those from whom you have pending requests

Optimizations ensure

• Either a reply message is sent or a release message is sent, but not both

Related Problem

• Atomic Broadcast
  – Assume all messages are broadcast in nature
  – If m1 is delivered before m2 at process j, then m1 is delivered before m2 at process k

Relation between Atomic Broadcast and Mutual Exclusion

• Atomic broadcast -> Mutual exclusion
  – Every process sends its request to all
  – You can access the resource when you receive your own message and you know that previous requests have been met

• Mutual Exclusion -> Atomic Broadcast
  – When you want to broadcast: request for ME
  – Upon access to CS: send the message to be broadcast and wait for acks
  – Release critical section

What other Clocks Can We Use?

• Local Counters?

• Vector Clocks?

Classification of Mutual Exclusion Algorithms

• Quorum Based
  – Each node j is associated with a quorum Qj
  – When j wants to enter the critical section, it asks for permission from all nodes in this quorum
  – What property should be met by the quorums of different processes?

• Token Based
  – A token is circulated among nodes; the node that has the token can access the critical section
  – We will look at these later

Classification of Mutual Exclusion Algorithms

• Which category would Lamport’s protocol fit in?

• What is the quorum of a process in this algorithm?

• What are the possibilities for different quorums?

Taking Quorum Based Algorithms to Extreme

• Centralized mutual exclusion
  – A single process, the `coordinator', is responsible for ensuring mutual exclusion
  – Each process requests the coordinator whenever it wishes to access the resource
  – The coordinator permits only one process to access the resource at a time
  – After a process accesses the resource, it sends a reply to the coordinator

• Quorum for all processes is {c} where c is the coordinator
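The coordinator's bookkeeping from the bullets above fits in a few lines. This is an illustrative sketch (class and method names are assumptions, and messaging is abstracted into direct calls):

```python
from collections import deque

class Coordinator:
    """Centralized mutual exclusion: the coordinator grants the single
    resource to one requester at a time and queues the rest."""

    def __init__(self):
        self.holder = None        # pid currently using the resource
        self.waiting = deque()    # FIFO queue of pending requesters

    def request(self, pid):
        """Handle a request; return True if access is granted now."""
        if self.holder is None:
            self.holder = pid
            return True
        self.waiting.append(pid)
        return False

    def release(self, pid):
        """Handle the done-notification; return the next holder (or None)."""
        assert self.holder == pid, "only the holder may release"
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder
```

Each CS entry costs one request, one grant, and one release message, but the coordinator is a single point of failure, motivating the leader-election discussion below.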

Centralized mutual exclusion

• Problem : What if the coordinator fails?

• Solution : Elect a new one – Related problem: leader election

Other Criteria for Mutual Exclusion

• Let T be the transmission delay of a message
• Let E be the time for critical section execution

• What is the minimum (maximum) delay between one process exiting the critical section and another process entering it?

• What is the maximum throughput, i.e., the number of processes that can enter the CS in a given time?

Criteria for Mutual Exclusion

• Min Delay for any protocol
• Max throughput for any protocol

• Lamport
  – Delay? T
  – Throughput? 1/(E+T)

• Centralized
  – Delay? 2T
  – Throughput? 1/(E+2T)
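Plugging hypothetical numbers into the formulas above makes the trade-off concrete (T and E values here are arbitrary, chosen only for illustration):

```python
# T = message transmission delay, E = CS execution time (arbitrary units)
T, E = 1.0, 5.0

lamport = {"delay": T, "throughput": 1.0 / (E + T)}
centralized = {"delay": 2 * T, "throughput": 1.0 / (E + 2 * T)}

# Lamport's algorithm has lower synchronization delay and higher throughput,
# at the cost of more messages per CS entry.
assert lamport["delay"] < centralized["delay"]
assert lamport["throughput"] > centralized["throughput"]
```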

Quorum Based Algorithms

• Each process j requests permission from its quorum Qj

– Requirement: ∀ j, k :: Qj ∩ Qk ≠ ∅

– Rj = set of processes that request permission from j

• Rj need not be the same as Qj

• It is desirable that the size of Rj is same/similar for all processes

For Centralized Mutual Exclusion

• |Qj| = 1

• |Rj| = 0 for j ≠ c; |Rc| = n
  – Shows the unbalanced nature of centralized mutual exclusion

• Goal: Reduce |Qj| while keeping |Rj| balanced for all nodes

Quorum Based Algorithms

• Solution for |Qj| = O(√N)
  – Grid based

Maekawa's algorithm

• Maekawa showed that the minimum quorum size is √N

• Example quorums:

– for 3 processes: Q0={P0,P1}, Q1={P1,P2}, Q2={P0,P2}
– for 7 processes: Q0={P0,P1,P2}, Q3={P0,P3,P4}, Q5={P0,P5,P6},
  Q1={P1,P3,P5}, Q4={P1,P4,P6}, Q6={P2,P3,P6}, Q2={P2,P4,P5}

• For n² - n + 1 processes, quorums of size n can be constructed
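The grid construction mentioned above is the simplest way to get quorums of size O(√n). This sketch shows the grid idea only, not Maekawa's projective-plane construction (the function name is illustrative):

```python
import math

def grid_quorum(j, n):
    """Grid-based quorum of process j among n processes (n a perfect
    square). Place processes 0..n-1 on a sqrt(n) x sqrt(n) grid; the
    quorum of j is its full row plus its full column. Any two quorums
    intersect, because every row crosses every column."""
    side = math.isqrt(n)
    assert side * side == n, "this sketch assumes n is a perfect square"
    r, c = divmod(j, side)
    row = {r * side + k for k in range(side)}
    col = {k * side + c for k in range(side)}
    return row | col

# Every pair of quorums intersects, so the requirement Qj ∩ Qk ≠ ∅ holds,
# with |Qj| = 2*sqrt(n) - 1 instead of n.
for j in range(16):
    for k in range(16):
        assert grid_quorum(j, 16) & grid_quorum(k, 16)
```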

Basic operation

• Requesting CS
  – a process requests the CS by sending a request message to the processes in its quorum
  – a process has just one permission to give; if a process receives a request, it sends back a reply unless it has granted permission to another process, in which case the request is queued

• Entering CS
  – a process may enter the CS when it receives replies from all processes in its quorum

• Releasing CS
  – after exiting the CS, a process sends release to every process in its quorum
  – when a process gets a release, it sends a reply to another request in its queue

Possible Deadlock

• Since processes do not communicate with all other processes in the system, CS requests may be granted out of timestamp order

• example:

– suppose there are processes Pi, Pj, and Pk such that: Pj ∈ Qi and Pj ∈ Qk, but Pk ∉ Qi and Pi ∉ Qk
– Pi and Pk request the CS such that tsk < tsi
– if the request from Pi reaches Pj first, then Pj sends a reply to Pi, and Pk has to wait for Pi out of timestamp order
– a wait-for cycle (hence a deadlock) may be formed

Maekawa’s algorithm, deadlock avoidance

• To avoid deadlock, a process recalls a permission if it was granted out of timestamp order
  – if Pj receives a request from Pi with a higher timestamp than the request it granted permission to, Pj sends failed to Pi
  – if Pj receives a request from Pi with a lower timestamp than the request it granted permission to (deadlock possibility), Pj sends inquire to the process to whom it had given permission before
  – when Pi receives inquire, it replies with yield if it did not succeed in getting permissions from other processes (i.e., it got failed)

Maekawa Algorithm

• Number of messages

• Min Delay: 2T

• Max Throughput 1/(E+2T)

Faults in Maekawa’s algorithm

• What will happen if faults occur in Maekawa's algorithm?

• What can a process do if some process in its quorum has failed?

• When will mutual exclusion be impossible in Maekawa algorithm?

Tree Based Mutual Exclusion

• Suppose processes are organized in a tree
  – What are possible quorums?
    • A path from the root to a leaf
    • Root is part of all quorums
    • Can we construct more quorums?

Tree Based Quorum Based Mutual Exclusion

• Number of messages

• Min Delay

• Max Throughput

Token-based algorithms

• LeLann's token ring
• Suzuki-Kasami's broadcast
• Raymond's tree

Token-ring algorithm (Le Lann)

• Processes are arranged in a logical ring
• At start, process 0 is given a token
  – Token circulates around the ring in a fixed direction via point-to-point messages
  – When a process acquires the token, it has the right to enter the critical section
    • After exiting the CS, it passes the token on

• Evaluation:
  – N-1 messages required to enter CS
  – Not difficult to add new processes to the ring
  – With a unidirectional ring, mutual exclusion is fair, and no process starves
  – Difficult to detect when the token is lost
  – Doesn't guarantee "happened-before" order of entry into the critical section
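The ring's state is simple enough to capture in a few lines. A sketch with illustrative names (token loss and recovery are not modeled):

```python
class TokenRing:
    """Le Lann's token ring: the token visits processes in a fixed cyclic
    order, and only the current holder may enter the critical section."""

    def __init__(self, n):
        self.n = n
        self.holder = 0          # process 0 starts with the token

    def may_enter(self, pid):
        return self.holder == pid

    def pass_token(self):
        # After exiting the CS (or if the CS is not needed), the holder
        # forwards the token to its successor on the ring.
        self.holder = (self.holder + 1) % self.n
        return self.holder
```

In the worst case a requesting process waits for the token to traverse N-1 links, which matches the evaluation above.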

Token-ring algorithm

• Number of messages

• Min Delay

• Max Throughput

Suzuki-Kasami's broadcast algorithm

• Overview:
  – If a process wants to enter the critical section, and it does not have the token, it broadcasts a request message to all other processes in the system
  – The process that has the token will then send it to the requesting process
    • However, if it is in the CS, it gets to finish before sending the token
  – A process holding the token can continuously enter the critical section until the token is requested

Suzuki-Kasami’s broadcast algorithm

– Request vector at process i:
  • RNi[k] contains the largest sequence number received from process k in a request message
– Token consists of a vector and a queue:
  • LN[k] contains the sequence number of the latest executed request from process k
  • Q is the queue of requesting processes

Suzuki-Kasami’s broadcast algorithm

• Requesting the critical section (CS):
  – When a process i wants to enter the CS, if it does not have the token, it:
    • Increments its sequence number RNi[i]
    • Sends a request message containing the new sequence number to all processes in the system
  – When a process k receives the request(i, sn) message, it:
    • Sets RNk[i] to max(RNk[i], sn)
      – If sn < RNk[i], the message is outdated
    • If process k has the token and is not in the CS (i.e., is not using the token), and if RNk[i] == LN[i]+1 (indicating an outstanding request), it sends the token to process i

• Releasing the CS:
  – When a process i leaves the CS, it:
    • Sets LN[i] of the token equal to RNi[i]
      – Indicates that its request RNi[i] has been executed
    • For every process k whose ID is not in the token queue Q, appends k's ID to Q if RNi[k] == LN[k]+1
      – Indicates that process k has an outstanding request
    • If the token queue Q is nonempty after this update, deletes the process ID at the head of Q and sends the token to that process
      – Gives priority to others' requests
    • Otherwise, it keeps the token

• Evaluation:
  – 0 or N messages required to enter CS
    • No messages if the process holds the token
    • Otherwise (N-1) requests, 1 reply
  – Synchronization delay: T

Suzuki-Kasami’s broadcast algorithm

• Executing the CS:
  – A process enters the CS when it acquires the token
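The RN/LN bookkeeping above can be sketched per process as follows. This is a minimal illustration under the usual assumptions (reliable delivery, the `send` hook and message format are inventions of the sketch):

```python
from collections import deque

class SuzukiKasami:
    """Sketch of one process in Suzuki-Kasami. Each process keeps RN[];
    the token carries LN[] and the queue Q. `send(dst, msg)` is supplied
    by the caller."""

    def __init__(self, pid, n, send, has_token=False):
        self.pid, self.n, self.send = pid, n, send
        self.RN = [0] * n
        self.token = {'LN': [0] * n, 'Q': deque()} if has_token else None
        self.in_cs = False

    def request(self):
        if self.token is not None:          # already holding the token
            self.in_cs = True
            return
        self.RN[self.pid] += 1              # new sequence number
        for k in range(self.n):             # broadcast request(i, sn)
            if k != self.pid:
                self.send(k, ('request', self.RN[self.pid], self.pid))

    def on_request(self, sn, src):
        self.RN[src] = max(self.RN[src], sn)   # stale messages have sn < RN
        if (self.token is not None and not self.in_cs
                and self.RN[src] == self.token['LN'][src] + 1):
            tok, self.token = self.token, None # outstanding request: hand over
            self.send(src, ('token', tok, self.pid))

    def on_token(self, tok, src):
        self.token = tok
        self.in_cs = True

    def release(self):
        self.in_cs = False
        tok = self.token
        tok['LN'][self.pid] = self.RN[self.pid]   # our request is executed
        for k in range(self.n):                   # append outstanding requests
            if (k != self.pid and k not in tok['Q']
                    and self.RN[k] == tok['LN'][k] + 1):
                tok['Q'].append(k)
        if tok['Q']:                              # give priority to others
            nxt = tok['Q'].popleft()
            self.token = None
            self.send(nxt, ('token', tok, self.pid))
```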

Raymond's tree algorithm

• Overview:
  – Processors are arranged as a logical tree
    • Edges are directed toward the processor that holds the token (called the "holder", initially the root of the tree)
  – Each processor has:
    • A variable holder that points to its neighbor on the directed path toward the holder of the token
    • A FIFO queue called request_q that holds its own requests for the token, as well as any requests from neighbors that have requested but haven't received the token
  – If request_q is non-empty, that implies the node has already sent the request at the head of its queue toward the holder

(Example tree: root T1; children T2, T3; leaves T4, T5, T6, T7)

Raymond's tree algorithm

• Requesting the critical section (CS):
  – When a process wants to enter the CS, but it does not have the token, it:
    • Adds its request to its request_q
    • If its request_q was empty before the addition, it sends a request message along the directed path toward the holder
      – If the request_q was not empty, it has already made a request and has to wait
  – When a process in the path between the requesting process and the holder receives the request message, it does the same as above
  – When the holder receives a request message, it:
    • Sends the token (in a message) toward the requesting process
    • Sets its holder variable to point toward that process (toward the new holder)

Raymond's tree algorithm

• Requesting the CS (cont.):
  – When a process in the path between the holder and the requesting process receives the token, it:
    • Deletes the top entry (the most current requesting process) from its request_q
    • Sends the token toward the process referenced by the deleted entry, and sets its holder variable to point toward that process
    • If its request_q is not empty after this deletion, it sends a request message along the directed path toward the new holder (pointed to by the updated holder variable)

• Executing the CS:
  – A process can enter the CS when it receives the token and its own entry is at the top of its request_q
    • It deletes the top entry from the request_q, and enters the CS

Raymond's tree algorithm

• Releasing the CS:
  – When a process leaves the CS:
    • If its request_q is not empty (meaning a process has requested the token from it), it:
      – Deletes the top entry from its request_q
      – Sends the token toward the process referenced by the deleted entry, and sets its holder variable to point toward that process
    • If its request_q is not empty after this deletion (meaning more than one process has requested the token from it), it sends a request message along the directed path toward the new holder (pointed to by the updated holder variable)

• Greedy variant: a process may execute the CS if it has the token, even if its entry is not at the top of the queue. How does this variant affect Raymond's algorithm?
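The holder/request_q rules above can be condensed into a per-node sketch. Names and the `send` hook are illustrative, the tree topology is fixed, and faults are not modeled:

```python
from collections import deque

class RaymondNode:
    """Sketch of one node in Raymond's tree algorithm. `holder` is the
    neighbor toward the token (self if held); request_q holds pending
    requesters (neighbor ids, or this node's own id)."""

    def __init__(self, pid, send, holder):
        self.pid, self.send = pid, send
        self.holder = holder
        self.request_q = deque()
        self.in_cs = False

    def _forward_request(self):
        # one request toward the holder, for the head of the queue
        self.send(self.holder, ('request', self.pid))

    def _pass_token(self):
        nxt = self.request_q.popleft()
        self.holder = nxt
        self.send(nxt, ('token', self.pid))
        if self.request_q:               # more pending: ask for token back
            self._forward_request()

    def request_cs(self):
        if self.holder == self.pid and not self.request_q:
            self.in_cs = True            # already hold the token and it's free
            return
        self.request_q.append(self.pid)
        if len(self.request_q) == 1:     # first entry: send one request
            self._forward_request()

    def on_request(self, src):
        self.request_q.append(src)
        if self.holder == self.pid and not self.in_cs:
            self._pass_token()
        elif len(self.request_q) == 1:
            self._forward_request()

    def on_token(self):
        nxt = self.request_q.popleft()
        if nxt == self.pid:              # our own entry is at the head
            self.holder = self.pid
            self.in_cs = True
            return
        self.holder = nxt                # relay toward the requester
        self.send(nxt, ('token', self.pid))
        if self.request_q:
            self._forward_request()

    def release_cs(self):
        self.in_cs = False
        if self.request_q:
            self._pass_token()
```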

Fault-tolerant Mutual Exclusion

• Based on Raymond’s algorithm

(Abstract) Actions of Raymond Mutual Exclusion

<Upon request>
  Request.(h.j) := Request.(h.j) ∪ {j}

h.j = k ∧ h.k = k ∧ j ∈ Request.k →
  h.k := j, h.j := j, Request.k := Request.k – {j}

Actions

h.j = j →
  Access critical section

Slight modification

h.j = k ∧ h.k = k ∧ j ∈ Request.k ∧ (P.j = k ∨ P.k = j) →
  h.k := j, h.j := j, Request.k := Request.k – {j}

Fault-Tolerant Mutual Exclusion

• What happens if the tree is broken due to faults?– A tree correction algorithm could be used

to fix the tree– Example: we considered one such

algorithm before

However,

• Even if the tree is fixed, the holder relation may not be accurate

Invariant for holder relation

• What are the conditions that are always true about holder relation?

Invariant

• h.j ∈ {j, P.j} ∪ ch.j

• P.j ≠ j ⇒ (h.j = P.j ∨ h.(P.j) = j)

• P.j ≠ j ⇒ ¬(h.j = P.j ∧ h.(P.j) = j)

• Plus all the predicates in the invariant of the tree program

Recovery from faults

h.j ∉ {j, P.j} ∪ ch.j →
  h.j := P.j

Recovery from faults

P.j ≠ j ∧ ¬(h.j = P.j ∨ h.(P.j) = j) →
  h.j := P.j

Recovery from Faults

P.j ≠ j ∧ (h.j = P.j ∧ h.(P.j) = j) →
  h.(P.j) := P.(P.j)

Notion of Superposition

Properties of This Mutual Exclusion Algorithm

• Always unique token?
• Eventually unique token?
• Level of tolerance?
  – Nonmasking
  – Ensure that eventually the program recovers to states from where there is exactly one token that is circulated
  – Some changes are necessary for masking fault-tolerance, where multiple tokens do not exist during recovery
    • We will look at a solution a little later
