
Page 1: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Chapter 3 Process Scheduling

Bernard Chen, Spring 2007

Page 2: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Schedulers

Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue

Short-term scheduler (or CPU scheduler): selects which process should be executed next and allocates the CPU

Page 3: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Schedulers

On some systems, the long-term scheduler may be absent or minimal

Such systems simply put every new process in memory for the short-term scheduler

Stability then depends on physical limitations or on the self-adjusting nature of human users

Page 4: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Schedulers

Sometimes it can be advantageous to remove a process from memory and thus decrease the degree of multiprogramming

This scheme is called swapping

Page 5: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Addition of Medium Term Scheduling

Page 6: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Shared-Memory Parallelization System Example

m_set_procs(number): prepare the given number of child processes for execution

m_fork(function): the child processes execute "function"

m_kill_procs(): terminate the child processes

Page 7: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Real Example

main(argc, argv)
{
    int nprocs = 9;
    m_set_procs(nprocs);   /* prepare to launch this many processes */
    m_fork(slaveproc);     /* fork out processes */
    m_kill_procs();        /* kill activated processes */
}

void slaveproc()
{
    int id;
    id = m_get_myid();

    m_lock();
    printf(" Hello world from process %d\n", id);
    printf(" 2nd line: Hello world from process %d\n", id);
    m_unlock();
}

Page 8: Chapter 3 Process Scheduling Bernard Chen Spring 2007

#include "stdio.h" #include "sys/types.h" #include "sys/times.h" #include "ulocks.h" #include "sys/param.h" #include "math.h"

#define MAXTHRDS 6

Page 9: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Real Example

#define ARRAY_SIZE 1000
#define NPROCS 4

int global_array[ARRAY_SIZE];

main(argc, argv)
{
    m_set_procs(NPROCS);   /* prepare to launch this many processes */
    m_fork(sum);           /* fork out processes */
    m_kill_procs();        /* kill activated processes */
}

void sum()
{
    int i, id;
    id = m_get_myid();

    /* each child adds every element of its own chunk into the first element of that chunk */
    for (i = id * (ARRAY_SIZE / NPROCS); i < (id + 1) * (ARRAY_SIZE / NPROCS); i++)
        global_array[id * (ARRAY_SIZE / NPROCS)] += global_array[i];
}

Page 10: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Cooperating Processes

Independent process: cannot affect or be affected by the execution of another process

Cooperating process: can affect or be affected by the execution of another process

Advantages of process cooperation: information sharing, computation speed-up, modularity, and convenience

Page 11: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Interprocess Communication (IPC)

Two fundamental models: (1) Shared Memory (2) Message Passing

Page 12: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Communication Models: (a) Message Passing (MPI), (b) Shared Memory

Page 13: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Shared-Memory Systems

Consider the producer-consumer problem

A producer process produces information that is consumed by a consumer process; for example, a web server produces HTML files and images, which are consumed by a client web browser

Page 14: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Shared-Memory Systems

The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.

Two types of buffer can be used:
1. Unbounded buffer
2. Bounded buffer

Page 15: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Shared-Memory Systems

Unbounded buffer: the consumer may have to wait for new items, but the producer can always produce new items.

Bounded buffer: the consumer has to wait if the buffer is empty, and the producer has to wait if the buffer is full.

Page 16: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Bounded Buffer

#define BUFFER_SIZE 6

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Page 17: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Bounded Buffer (producer view)

while (true) {
    /* produce an item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing -- no free buffers */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

Page 18: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Bounded Buffer (consumer view)

while (true) {
    while (in == out)
        ;   /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
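
Putting the two halves together, the following is a minimal runnable sketch of the same circular-buffer scheme. It assumes a single producer and a single consumer running as two pthreads inside one process (rather than two separate processes sharing memory), and it uses C11 _Atomic on the in/out indices so the busy-wait loops remain correct under optimization; the item payload and the loop bound of 20 items are illustrative only.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define BUFFER_SIZE 6

typedef struct { int value; } item;

item buffer[BUFFER_SIZE];
_Atomic int in = 0;
_Atomic int out = 0;

void *producer(void *arg)
{
    int n;
    for (n = 0; n < 20; n++) {
        item next = { n };                      /* stands in for "produce an item" */
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                                   /* buffer full: busy-wait */
        buffer[in] = next;
        in = (in + 1) % BUFFER_SIZE;
    }
    return NULL;
}

void *consumer(void *arg)
{
    int n;
    for (n = 0; n < 20; n++) {
        while (in == out)
            ;                                   /* buffer empty: busy-wait */
        item next = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        printf("consumed %d\n", next.value);    /* stands in for "consume the item" */
    }
    return NULL;
}

int main(void)
{
    pthread_t prod, cons;
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL);
    pthread_join(cons, NULL);
    return 0;
}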

Page 19: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

A message passing facility provides at least two operations: send(message), receive(message)

Page 20: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

If two processes want to communicate, a communication link must exist between them

The link can vary along several dimensions:
1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

Page 21: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Direct communication:
send(P, message)
receive(Q, message)

Properties:
A link is established automatically
A link is associated with exactly two processes
Between each pair of processes, there exists exactly one link
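
As a concrete illustration, here is a minimal sketch of direct communication between two processes, assuming a Unix pipe as the underlying link; the link is set up automatically when the child is forked, and write()/read() stand in for the abstract send() and receive() operations.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char msg[64];

    pipe(fd);                                 /* the communication link between P and Q */
    if (fork() == 0) {                        /* child process Q */
        read(fd[0], msg, sizeof(msg));        /* receive(P, message) */
        printf("Q received: %s\n", msg);
        return 0;
    }
    /* parent process P */
    strcpy(msg, "hello from P");
    write(fd[1], msg, strlen(msg) + 1);       /* send(Q, message) */
    wait(NULL);
    return 0;
}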

Page 22: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Indirect communication: messages are sent to and received from mailboxes

send(A, message)
receive(A, message)
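
As a concrete illustration, the following is a minimal sketch of mailbox-style (indirect) communication, assuming POSIX message queues as the mechanism; the queue name "/mailbox_A" and the message sizes are illustrative only. Run one copy with the argument send and another with recv (link with -lrt on Linux).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mb = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0600, &attr);  /* the mailbox */
    char msg[128];

    if (argc > 1 && strcmp(argv[1], "send") == 0) {
        strcpy(msg, "hello through mailbox A");
        mq_send(mb, msg, strlen(msg) + 1, 0);     /* send(A, message) */
    } else {
        mq_receive(mb, msg, sizeof(msg), NULL);   /* receive(A, message) */
        printf("got: %s\n", msg);
    }
    mq_close(mb);
    return 0;
}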

Page 23: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Properties:
A link is established only if both members of the pair have a shared mailbox
A link may be associated with more than two processes
Between each pair of processes, there may be a number of different links

Page 24: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Mailbox sharing:
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?

Solutions:
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

Page 25: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

If the mailbox is owned by a process, it is easy to tell which process is the owner and which is the user.

There is no confusion about who sends a message and who receives it.

When the owning process terminates, the mailbox disappears.

Page 26: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

If the mailbox is owned by the OS, the OS must provide the following operations:

Create a new mailbox
Send and receive messages through the mailbox
Delete a mailbox

Page 27: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Synchronization: synchronous and asynchronous

Blocking is considered synchronous:
A blocking send has the sender block until the message is received
A blocking receive has the receiver block until a message is available

Page 28: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Non-blocking is considered asynchronous:

A non-blocking send has the sender send the message and continue

A non-blocking receive has the receiver receive either a valid message or null
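
As a concrete illustration, here is a minimal MPI sketch contrasting the two modes, assuming the program is launched with at least two ranks: rank 0 issues a blocking MPI_Send, while rank 1 posts a non-blocking MPI_Irecv, is free to do other work, and only blocks when it calls MPI_Wait.

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int id;
    int value = 42;
    MPI_Request req;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    if (id == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);            /* blocking send */
    } else if (id == 1) {
        int received;
        MPI_Irecv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);  /* non-blocking receive */
        /* ... the receiver could do other useful work here ... */
        MPI_Wait(&req, &stat);                                         /* block until the message has arrived */
        printf("rank 1 received %d\n", received);
    }

    MPI_Finalize();
    return 0;
}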

Page 29: Chapter 3 Process Scheduling Bernard Chen Spring 2007

Message-Passing Systems

Buffering: messages exchanged over a link reside in a queue attached to that link; there are 3 variations:

1. Zero capacity – 0 messages; the sender must wait for the receiver
2. Bounded capacity – finite length of n messages; the sender must wait if the link is full
3. Unbounded capacity – infinite length; the sender never waits

Page 30: Chapter 3 Process Scheduling Bernard Chen Spring 2007

MPI Program example

#include "mpi.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define ARRAY_SIZE 100

int main(int argc, char *argv[])
{
    int id;                        /* process rank */
    int p;                         /* number of processes */
    int i;
    int array[ARRAY_SIZE];         /* or *array, then use malloc to size it at run time */
    int local_array[ARRAY_SIZE];   /* each process uses only the first ARRAY_SIZE/p slots */
    int sum = 0;
    int total = 0;                 /* reduced result, valid on rank 0 */
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

Page 31: Chapter 3 Process Scheduling Bernard Chen Spring 2007

MPI Program example

    if (id == 0) {
        for (i = 0; i < ARRAY_SIZE; i++)
            array[i] = i;                        /* initialize array */
        for (i = 1; i < p; i++)
            MPI_Send(&array[i * ARRAY_SIZE / p], /* start of this chunk */
                     ARRAY_SIZE / p,             /* message size */
                     MPI_INT,                    /* data type */
                     i,                          /* send to which process */
                     0,                          /* message tag */
                     MPI_COMM_WORLD);
        for (i = 0; i < ARRAY_SIZE / p; i++)     /* rank 0 keeps its own chunk */
            local_array[i] = array[i];
    }
    else
        MPI_Recv(&local_array[0], ARRAY_SIZE / p, MPI_INT, 0, 0,
                 MPI_COMM_WORLD, &stat);

Page 32: Chapter 3 Process Scheduling Bernard Chen Spring 2007

MPI Program example

    for (i = 0; i < ARRAY_SIZE / p; i++)         /* partial sum of this rank's chunk */
        sum += local_array[i];

    MPI_Reduce(&sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (id == 0)
        printf("%d\n", total);

    MPI_Finalize();
    return 0;
}
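
With a typical MPI installation, this example would be compiled with mpicc and launched with something like mpirun -np 4 ./a.out, though the exact commands depend on the implementation; note that ARRAY_SIZE should be divisible by the number of processes for the chunking above to cover the whole array.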