
Operating System Issues and Applications


Operating system issues for multiprocessing: Need for pre-emptive OS; Scheduling techniques; Usual OS scheduling techniques; Threads; Distributed scheduler; Multiprocessor scheduling; Gang scheduling; Communication between processes; Message boxes; Shared memory; Sharing issues and synchronization; Sharing memory and other structures; Sharing I/O devices.



Need for Pre-emptive OS

In computing, pre-emption is the act of temporarily interrupting a task being carried out by a computer system, without requiring its cooperation, and with the intention of resuming the task at a later time. Such a change is known as a context switch. It is normally carried out by a privileged task or part of the system known as the pre-emptive scheduler, which has the power to preempt, that is, to interrupt and later resume, other tasks in the system.

Preemptive Multitasking

Preemptive multitasking involves the use of an interrupt mechanism, which suspends the currently executing process and invokes a scheduler to determine which process should execute next. As a result, every process gets some amount of CPU time over any given interval.

In general, pre-emption means "prior seizure of". When a higher-priority task seizes the CPU from the currently running task, this is known as preemptive scheduling.

Preemptive multitasking allows the computer system to more reliably guarantee each process a regular slice of CPU time.
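As a rough illustration, the following C sketch simulates preemptive round-robin time slicing: a fixed quantum stands in for the timer interrupt, and the scheduler moves to the next ready task when the quantum expires. The task names and workloads are invented for the example.

```c
#include <stdio.h>

#define QUANTUM 2  /* simulated timer ticks per time slice */

/* A hypothetical task: just a name and remaining work (in ticks). */
struct task {
    const char *name;
    int remaining;
};

int main(void) {
    /* Illustrative ready queue; names and workloads are made up. */
    struct task ready[] = { {"A", 5}, {"B", 3}, {"C", 4} };
    int n = 3, done = 0, t = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (ready[i].remaining == 0)
                continue;
            /* The scheduler gives the task one quantum; the timer
             * interrupt (simulated by the loop bound) preempts it. */
            int slice = ready[i].remaining < QUANTUM ? ready[i].remaining
                                                     : QUANTUM;
            printf("t=%d: running %s for %d tick(s)\n", t, ready[i].name, slice);
            t += slice;
            ready[i].remaining -= slice;
            if (ready[i].remaining == 0) {
                printf("t=%d: %s finished\n", t, ready[i].name);
                done++;
            }
            /* Context switch: control returns to the scheduler here. */
        }
    }
    return 0;
}
```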

Threads

A thread is the basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

Thread Model

What threads add to the process model is to allow multiple executions to take place in the

same process environment, to a large degree independent of one another. Having multiple

threads running in parallel in one process is analogous to having multiple processes running

in parallel in one computer. Because threads have some of the properties of processes, they

are sometimes called lightweight processes.

The term multithreading is also used to describe the situation of allowing multiple threads

in the same process.


In Fig. 2-6(a) we see three traditional processes. Each process has its own address space and

a single thread of control. In contrast, in Fig. 2-6(b) we see a single process with three

threads of control. Although in both cases we have three threads, in Fig. 2-6(a) each of them

operates in a different address space, whereas in Fig. 2-6(b) all three of them share the

same address space.

The CPU switches rapidly back and forth among the threads providing the illusion that the

threads are running in parallel, albeit on a slower CPU than the real one. All threads have

exactly the same address space, which means that they also share the same global variables.

Since every thread can access every memory address within the process’ address space, one

thread can read, write, or even completely wipe out another thread’s stack. There is no

protection between threads because (1) it is impossible, and (2) it should not be necessary.

Like a traditional process (i.e., a process with only one thread), a thread can be in any one of

several states: running, blocked, ready, or terminated. It is important to realize that each

thread has its own stack, as shown in Fig.

A thread library typically provides calls such as thread_create, thread_wait, and thread_yield; thread_yield allows a thread to voluntarily give up the CPU to let another thread run.
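As a hedged illustration of these operations, the sketch below uses POSIX threads; pthread_create, pthread_join, and sched_yield play roughly the roles of thread_create, thread_wait, and thread_yield, and the worker function and shared counter are invented for the example. All three threads read and write the same global variable because they share one address space (compile with -pthread).

```c
#include <pthread.h>
#include <stdio.h>
#include <sched.h>

/* Shared by all threads: they live in the same address space. */
static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);
        counter++;                      /* update the shared variable */
        printf("thread %d sees counter=%d\n", id, counter);
        pthread_mutex_unlock(&lock);
        sched_yield();                  /* analogous to thread_yield */
    }
    return NULL;
}

int main(void) {
    pthread_t t[3];
    int ids[3] = {0, 1, 2};
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);   /* thread_create */
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);                       /* thread_wait */
    printf("final counter=%d\n", counter);
    return 0;
}
```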


Threads Usage

Need for Threads

1. Instead of thinking about interrupts, timers, and context switches, we can think in terms of parallel processes. With threads we add a new element: the ability for the parallel entities to share an address space and all of its data among themselves.

2. Because they have no resources attached to them, threads are easier to create and destroy than processes.

3. Threads yield no performance gain when all of them are CPU-bound.

Threads being used in a word processor

A multithreaded web server


(a) A user-level threads package. (b) A threads package managed by the kernel.

Gang Scheduling

In Computer science, Gang scheduling is a scheduling algorithm for parallel systems that

schedules related threads or processes to run simultaneously on different processors.

Usually these will be threads all belonging to the same process, but they may also be from

different processes, for example when the processes have a producer-consumer

relationship, or when they all come from the same MPI program.

Gang scheduling is used so that if two or more threads or processes communicate with each other, they will all be ready to communicate at the same time. If they were not gang-scheduled, one process could end up waiting to send a message to, or receive a message from, another process that is currently descheduled, and vice versa. When processors are over-subscribed and gang scheduling is not used within a group of processes or threads that communicate with each other, each communication event can suffer the overhead of a context switch.

Technically, gang scheduling is based on a data structure called the Ousterhout matrix. In

this matrix each row represents a time slice, and each column a processor. The threads or

processes of each job are packed into a single row of the matrix. During execution,

coordinated context switching is performed across all nodes to switch from the processes in

one row to those in the next row.
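A rough sketch of the Ousterhout matrix as a data structure (the dimensions and job numbers are invented): each row is a time slice, each column a processor, and a global tick switches every processor to the next row at the same moment.

```c
#include <stdio.h>

#define SLICES 3   /* rows: time slices */
#define CPUS   4   /* columns: processors */

int main(void) {
    /* Ousterhout matrix: entry [s][c] is the job whose thread runs on
     * processor c during slice s (0 = idle). Jobs are packed row-wise. */
    int matrix[SLICES][CPUS] = {
        {1, 1, 1, 1},   /* job 1 uses all four CPUs in slice 0 */
        {2, 2, 3, 3},   /* jobs 2 and 3 share slice 1          */
        {4, 4, 4, 0},   /* job 4 uses three CPUs in slice 2    */
    };

    /* Coordinated context switching: at each tick, ALL processors switch
     * together from the jobs in one row to the jobs in the next row. */
    for (int tick = 0; tick < 2 * SLICES; tick++) {
        int s = tick % SLICES;
        printf("tick %d:", tick);
        for (int c = 0; c < CPUS; c++)
            printf(" cpu%d->job%d", c, matrix[s][c]);
        printf("\n");
    }
    return 0;
}
```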

Need for Gang Scheduling

Barrier Synchronization: Barrier synchronization means waiting for the completion of all

processes that have been set up. In Fig. 1, parallelization is performed for one loop and a

barrier is used to wait for its completion.


Fig 1.
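A hedged sketch of barrier synchronization using the POSIX pthread_barrier API (the worker count and loop are invented): each worker finishes its share of a parallelized loop and then waits at the barrier until all workers have arrived.

```c
#include <pthread.h>
#include <stdio.h>

#define WORKERS 4

static pthread_barrier_t barrier;

static void *worker(void *arg) {
    int id = *(int *)arg;

    /* Each worker does its share of the parallelized loop. */
    long sum = 0;
    for (int i = id; i < 1000; i += WORKERS)
        sum += i;
    printf("worker %d done, waiting at barrier (partial sum %ld)\n", id, sum);

    /* No worker proceeds past this point until all have arrived.
     * A single delayed worker holds up everyone, as described above. */
    pthread_barrier_wait(&barrier);

    printf("worker %d past the barrier\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[WORKERS];
    int ids[WORKERS];
    pthread_barrier_init(&barrier, NULL, WORKERS);
    for (int i = 0; i < WORKERS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}
```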

As shown in Fig. 2, if there exists even one process that causes a CPU’s allocated timing to

be delayed, the time needed for the whole processing job is lengthened exactly to the

extent that the other processes wait at the barrier for the delayed process.

Fig 2.

The gang scheduling function simultaneously allocates the required number of CPUs when scheduling parallel programs, and allows multiple parallel programs executing at the same time to achieve almost the same performance as if each program were running alone.

Characteristics of Gang Scheduling

Overhead

As shown in Fig. 3, the method used is to switch between the executing parallel programs at fixed time intervals (each second), so gang scheduling imposes extremely low overhead.

Operability

Settings are made for each resource block (RB) to determine whether gang scheduling is to be performed for it, and the number of CPUs (parallelism) to be allocated to a parallel program is set as an NQS queue parameter. Thus gang scheduling can be introduced without requiring big changes in existing settings. To keep tuning straightforward, the meanings of RB parameters such as Icpu, Max, priority, and time slice are designed to be the same as for an ordinary program.

Multi-node operation

In a multi-node system, simply making gang scheduling settings in the RBs that execute distributed parallel programs causes inter-node synchronization to take place. Node shutdowns and restarts are also taken care of automatically, so no commands need to be entered.

CONCLUSION

As explained above, introducing the gang scheduling function significantly increases operational freedom and makes efficient operation of parallel programs possible.

Scheduling Techniques (Refer to page 157 of Galvin)

Multiple Processor Scheduling

If multiple CPUs are available, the scheduling problem is correspondingly more complex.

Consider a system with an I/O device attached to a private bus of one processor. Processes wishing to use that device must be scheduled to run on that processor; otherwise the device would not be available.

If several identical processors are available, then load sharing can occur. In this case,

however, one processor could be idle, with an empty queue, while another processor was

very busy. To prevent this situation we use a common ready queue. All processes go into

one queue and are scheduled onto any available processor.
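A minimal sketch of the common-ready-queue idea, with the processors simulated by POSIX threads and the queue protected by a lock (everything here is invented for illustration, not taken from a real scheduler): no processor sits idle while the shared queue still holds work.

```c
#include <pthread.h>
#include <stdio.h>

#define TASKS 8
#define CPUS  3

/* One common ready queue shared by all "processors". */
static int next_task = 0;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void *processor(void *arg) {
    int cpu = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        if (next_task >= TASKS) {            /* queue empty: stop */
            pthread_mutex_unlock(&queue_lock);
            return NULL;
        }
        int task = next_task++;              /* dequeue the next process */
        pthread_mutex_unlock(&queue_lock);

        printf("cpu %d runs task %d\n", cpu, task);
    }
}

int main(void) {
    pthread_t t[CPUS];
    int ids[CPUS];
    for (int i = 0; i < CPUS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, processor, &ids[i]);
    }
    for (int i = 0; i < CPUS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```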

The other approach avoids the problem by appointing one processor as scheduler for the other processors, thus creating a master-slave structure.

Some systems carry this structure one step further, by having all scheduling decisions, I/O processing, and other system activities handled by a single processor, the master server. The other processors execute only user code. This asymmetric multiprocessing is far simpler than symmetric multiprocessing, because only one processor accesses the system data structures, alleviating the need for data sharing.


Real-Time Scheduling

Real time computing is divided into two types:

1. Hard real-time systems

2. Soft real-time systems

Hard real-time systems are required to complete a critical task within a guaranteed amount of time. A process is submitted together with a statement of how much time it needs in order to complete; the scheduler then either admits the process, guaranteeing that it will complete on time, or rejects the request as impossible. This is known as resource reservation.

Soft real-time systems: First, the system must have priority scheduling, and real-time processes must have the highest priority. Second, the dispatch latency must be small. To keep dispatch latency low, we need to allow system calls to be preemptible. Preemption points can be placed only at "safe" locations in the kernel, that is, only where kernel data structures are not being modified. Another method for dealing with preemption is to make the entire kernel preemptible.

Q1. But what happens if a high-priority process needs to read or modify kernel data currently being accessed by another, lower-priority process?

Ans. The high-priority process would be left waiting for a lower-priority one to finish. This situation is known as priority inversion.
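A common remedy, not detailed in these notes, is the priority-inheritance protocol: the lower-priority process temporarily inherits the higher priority while it holds the contested resource. A hedged sketch using the POSIX mutex protocol attribute, where supported (the lock name is invented):

```c
#include <pthread.h>

/* A mutex configured for priority inheritance: if a high-priority thread
 * blocks on kernel_data_lock, the current holder temporarily runs at the
 * blocked thread's priority, bounding the priority-inversion window. */
static pthread_mutex_t kernel_data_lock;

int init_priority_inheriting_lock(void) {
    pthread_mutexattr_t attr;
    int err;

    if ((err = pthread_mutexattr_init(&attr)) != 0)
        return err;
    /* Request the priority-inheritance protocol for this mutex. */
    if ((err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT)) != 0)
        return err;
    err = pthread_mutex_init(&kernel_data_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}
```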

Communication between Processes or Interprocess Communication (IPC) (Refer to page 110 of Galvin)

IPC provides a mechanism that allows processes to communicate and to synchronize their actions without sharing the same address space. An example is a chat program used on the World Wide Web.

Message Passing System

In a message-passing system, these services operate outside the kernel. Communication among user processes is accomplished through the passing of messages. An IPC facility provides two operations: send(message) and receive(message).

If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them.

There are several methods for logically implementing a link and the send/receive operations:

• Direct or Indirect communication


• Symmetric or asymmetric communication

• Automatic or explicit buffering

• Send by copy or send by reference.

• Fixed size or variable size messages.

Naming

Processes use either direct or indirect communication.

Direct Communication

• send(P, message): Send a message to process P.

• receive(Q, message): Receive a message from process Q.

A communication link in this scheme has the following properties:

• A link is associated with exactly two processes.

• Exactly one link exists between each pair of processes.

Indirect Communication

With indirect communication, the messages are sent to and received from mailboxes, or

ports.

• send(A, message): Send a message to mailbox A.

• receive(A, message): Receive a message from mailbox A.

In this scheme, a communication link has the following properties:

• A link is established between a pair of processes only if both have a shared mailbox.

• A link can be associated with more than two processes.

• Different links may exist between each pair of communicating processes, with each

link corresponding to one mailbox.

If the mailbox is owned by a process (that is, the mailbox is part of the address space of the process), then the owner can only receive messages through the mailbox, other processes can only send to it, and the mailbox disappears when its owner terminates.

If the mailbox is owned by the OS, then the OS provides mechanisms, illustrated in the sketch after this list, that allow a process to:

• Create a new mailbox

• Send and receive messages through the mailbox

• Delete a mailbox
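A hedged sketch of these mailbox operations using POSIX message queues, where available: mq_open creates the mailbox, mq_send and mq_receive exchange messages through it, and mq_unlink deletes it. The queue name and attributes are chosen only for illustration (on Linux this typically links with -lrt).

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* Create a new mailbox (message queue) named "/demo_box". */
    mqd_t box = mq_open("/demo_box", O_CREAT | O_RDWR, 0600, &attr);
    if (box == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Send a message to the mailbox... */
    mq_send(box, "hello", strlen("hello") + 1, 0 /* priority */);

    /* ...and receive it back (normally another process would do this). */
    if (mq_receive(box, buf, sizeof(buf), NULL) >= 0)
        printf("received: %s\n", buf);

    /* Delete the mailbox. */
    mq_close(box);
    mq_unlink("/demo_box");
    return 0;
}
```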


SYNCHRONIZATION

Message passing may be either blocking (synchronous) or non-blocking (asynchronous); both the send and the receive operations can be implemented in either form.

Buffering

Messages exchanged by communicating processes reside in a temporary queue. Basically, such a queue can be implemented in three ways:

• Zero capacity: the queue has maximum length zero, so the sender must block until the recipient receives the message.

• Bounded capacity: the queue has a finite length n; if the queue is full, the sender must block until space is available.

• Unbounded capacity: the queue has potentially infinite length, so the sender never blocks.


Shared Memory Systems
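In the shared-memory model, cooperating processes establish a region of memory that both can read and write, and then exchange data through that region directly rather than through the kernel. A minimal sketch, assuming the POSIX shm_open/mmap interface (the object name and message are invented): a parent writes into a shared region and its child reads from it.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"   /* illustrative object name */
#define SHM_SIZE 4096

int main(void) {
    /* Create a shared-memory object and map it into this address space. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, SHM_SIZE);
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    if (fork() == 0) {
        /* Child: inherits the mapping and reads what the parent wrote.
         * (A real program would synchronize; here we just wait briefly.) */
        sleep(1);
        printf("child read: %s\n", region);
        _exit(0);
    }

    /* Parent: write directly into the shared region. */
    strcpy(region, "hello from shared memory");
    wait(NULL);

    munmap(region, SHM_SIZE);
    shm_unlink(SHM_NAME);
    return 0;
}
```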


Bibliography

1. Silberschatz, Galvin, and Gagne, Operating System Concepts.

2. Tanenbaum, Modern Operating Systems.