
BT0070 Operating Systems 1




1. Explain multi-programmed Batched Operating System.

Computer multiprogramming is the allocation of a computer system and its resources to more than one concurrent application, job or user. In multiprogramming, concurrent running (sharing of the processor) is achieved when the operating system identifies opportunities to interrupt the handling of one program between tasks (e.g., when it is waiting for input/output) and to transfer process control to another program (application, job or user). To a great extent, the ability of a system to share its resources equitably, or according to certain priorities, depends on the design of the programs being handled and how frequently they may be interrupted.

The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, non-existent and invisible to them.

In this context, the root word "program" does not necessarily refer to a compiled application but to any set of commands submitted for execution by a user or operator. This could include a script or job control stream and any included calls to macro-instructions, system utilities or application program modules. An entire interactive, logged-in user session can be thought of as a "program" in this sense.

2. Discuss different types of schedulers.

Operating systems may feature up to three distinct types of schedulers: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which these functions are performed.

The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates which processes are to run on a system and the degree of concurrency to be supported at any one time, i.e., whether a high or low number of processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled.

The mid-term scheduler temporarily removes processes from main memory and places them on secondary memory (such as a disk drive) or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also incorrectly as "paging out" or "paging in").


The mid-term scheduler may decide to swap out a process that has not been active for some time, has a low priority, is page-faulting frequently, or is taking up a large amount of memory, in order to free main memory for other processes. It swaps the process back in later, when more memory is available or when the process has been unblocked and is no longer waiting for a resource.

The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers: a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, meaning that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.
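
As an illustration, here is a minimal C sketch of a preemptive short-term scheduling decision. The `pcb` structure, the priority policy, and the sample processes are hypothetical, chosen only to show where the preemption choice is made:

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative process control block; the fields are hypothetical. */
struct pcb { int pid; int priority; };        /* larger value = higher priority */

/*
 * A preemptive short-term scheduling decision: on every clock or I/O
 * interrupt, pick the highest-priority ready process.  If it outranks
 * the currently running process, that process is preempted.  A
 * non-preemptive scheduler would simply keep returning `running`.
 */
struct pcb *pick_next(struct pcb *ready[], size_t n, struct pcb *running)
{
    struct pcb *best = running;
    for (size_t i = 0; i < n; i++)
        if (best == NULL || ready[i]->priority > best->priority)
            best = ready[i];
    return best;              /* the dispatcher context-switches if best != running */
}

int main(void)
{
    struct pcb a = { 1, 3 }, b = { 2, 7 }, running = { 3, 5 };
    struct pcb *ready[] = { &a, &b };
    struct pcb *next = pick_next(ready, 2, &running);
    printf("dispatch pid %d\n", next->pid);   /* pid 2 preempts pid 3 */
    return 0;
}
```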

3. Discuss First Come First Served scheduling algorithm.

First-Come-First-Served (FCFS) is the simplest scheduling algorithm. Processes are dispatched according to their arrival time on the ready queue. Being a non-preemptive discipline, once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait. FCFS is more predictable than most other schemes, since jobs are served strictly in arrival order. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand. One of the major drawbacks of this scheme is that the average waiting time is often quite long, as the sketch below illustrates.

The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems but it is often embedded within other schemes.
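
A small C sketch of how the average waiting time grows under FCFS; the burst times are hypothetical, with one long job placed first:

```c
#include <stdio.h>

/*
 * FCFS waiting-time illustration (hypothetical burst times, in ms).
 * Each process waits for the sum of the bursts of all earlier arrivals,
 * which is why one long job at the front inflates the average.
 */
int main(void)
{
    int burst[] = { 24, 3, 3 };               /* arrival order = dispatch order */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                     /* the next process waits behind this one */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```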

4. What are semaphores? Explain.

In computer science, a semaphore is a protected variable or abstract data type that constitutes a classic method of controlling access by several processes to a common resource in a parallel programming environment. A semaphore generally takes one of two forms: binary and counting. A binary semaphore is a simple "true/false" (locked/unlocked) flag that controls access to a single resource. A counting semaphore is a counter for a set of available resources.


Either semaphore type may be employed to prevent a race condition. On the other hand, a semaphore is of no value in preventing resource deadlock, as illustrated by the dining philosophers problem.

Counting semaphores are accessed using two operations, classically written as procedures P and V: procedure V increments the semaphore S, whereas procedure P decrements it.
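
A minimal C sketch of the same P and V semantics, using POSIX counting semaphores, where sem_wait plays the role of P and sem_post the role of V; the pool size of three resources and the thread count are arbitrary illustrations:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t slots;                          /* counting semaphore: a pool of 3 identical resources */

void *worker(void *arg)
{
    sem_wait(&slots);                 /* P: decrement, blocking while the count is zero */
    printf("thread %ld holds a resource\n", (long)arg);
    sem_post(&slots);                 /* V: increment, possibly waking a waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&slots, 0, 3);           /* initial count = number of available resources */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```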

Semaphores remain in common use in programming languages that do not intrinsically support other forms of synchronization. They are the primitive synchronization mechanism in many operating systems. The trend in programming language development, though, is towards more structured forms of synchronization, such as monitors (though these advanced structures typically employ semaphores behind the scenes). In addition to their inadequacy in dealing with (multi-resource) deadlocks, semaphores do not protect the programmer from the easy mistakes of taking a semaphore that is already held by the same process, or of forgetting to release a semaphore that has been taken.

5. Write a note on Resource Allocation Graph.

A resource allocation graph is a very useful tool that helps in characterizing the allocation of resources. It was introduced by Holt (HOLT72). A resource allocation graph is a directed graph that describes a state of the system's resources and processes; each resource and each process is represented by a node.

Deadlock detection, in the case where the full resource allocation graph is available, is straightforward: a cycle in the graph is a necessary condition for deadlock and, when every resource has a single instance, also a sufficient one.

In the case that the multiplicity of resources is 1, we can simplify the detection of deadlocks by building a wait-for graph, i.e., a graph where the nodes represent processes and there is an arc from Pi to Pj if there is a resource R that is held by Pj and requested by Pi. The wait-for graph of a system is always smaller than the resource allocation graph of that same system. There is a deadlock in a system if and only if there is a cycle in the wait-for graph of that system.
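
A C sketch of this detection: a depth-first search for a cycle in a wait-for graph. The adjacency matrix below encodes a hypothetical P0 -> P1 -> P2 -> P0 wait chain:

```c
#include <stdbool.h>
#include <stdio.h>

#define NPROC 4

/*
 * Deadlock detection on a wait-for graph (single-instance resources):
 * wait_for[i][j] is true when process i waits for a resource held by
 * process j.  A deadlock exists iff the graph contains a cycle.
 */
bool wait_for[NPROC][NPROC] = {
    [0][1] = true, [1][2] = true, [2][0] = true,   /* hypothetical example */
};

static bool dfs(int p, bool visited[], bool on_stack[])
{
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < NPROC; q++)
        if (wait_for[p][q]) {
            if (on_stack[q]) return true;          /* back edge => cycle */
            if (!visited[q] && dfs(q, visited, on_stack)) return true;
        }
    on_stack[p] = false;
    return false;
}

int main(void)
{
    bool visited[NPROC] = { false }, on_stack[NPROC] = { false };
    for (int p = 0; p < NPROC; p++)
        if (!visited[p] && dfs(p, visited, on_stack)) {
            puts("deadlock: cycle found in wait-for graph");
            return 0;
        }
    puts("no deadlock");
    return 0;
}
```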

6. What are the reasons for building distributed systems?

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. Distributed computing also refers to the use of distributed systems to solve computational problems: a problem is divided into many tasks, each of which is solved by one computer.

Parallel or distributed computing?

The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particular tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:

In parallel computing, all processors have access to a shared memory. Shared memory can be used to exchange information between processors.

In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
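
A minimal C sketch of the message-passing style on a single machine, assuming a POSIX environment: two processes with private memories exchange data through a pipe, standing in for a network link. Threads within one process could instead simply read a shared variable (the shared-memory model):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                             /* the "network link" between the two processes */

    if (fork() == 0) {                    /* child: the "remote" node with its own memory */
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));       /* parent sends a message */
    wait(NULL);                           /* wait for the child to finish */
    return 0;
}
```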

The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm, which do not quite match the above definitions of parallel and distributed systems. Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms, while the coordination of a large-scale distributed system uses distributed algorithms.

7. Explain with a neat diagram all possible states a process visits during the course of its execution.

In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.

An operating system kernel that allows multi-tasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.[1]

First, the process is "created": it is loaded from a secondary storage device (a hard disk, CD-ROM, etc.) into main memory. After that, the process scheduler assigns it the state "waiting".

While the process is "waiting", it waits for the scheduler to perform a so-called context switch and load the process into the processor. The process state then becomes "running", and the processor executes the process instructions.

If a process needs to wait for a resource (for example, for user input or for a file to open), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait.

Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is removed instantly or is moved to the "terminated" state, in which it waits to be removed from main memory.
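
The transitions described above can be summarized as a small state machine. This C sketch uses the state names from the text; the event names are hypothetical labels for the triggers just described:

```c
#include <stdio.h>

/* Process states as described above; names vary between kernels. */
enum pstate { CREATED, WAITING, RUNNING, BLOCKED, TERMINATED };

static const char *name[] = {
    "created", "waiting", "running", "blocked", "terminated"
};

/* Events that drive the transitions in the text. */
enum event { ADMIT, DISPATCH, NEED_RESOURCE, RESOURCE_READY, EXIT_OR_KILL };

enum pstate next_state(enum pstate s, enum event e)
{
    switch (e) {
    case ADMIT:          return s == CREATED ? WAITING : s;   /* loaded into memory */
    case DISPATCH:       return s == WAITING ? RUNNING : s;   /* context switch */
    case NEED_RESOURCE:  return s == RUNNING ? BLOCKED : s;   /* waits for a resource */
    case RESOURCE_READY: return s == BLOCKED ? WAITING : s;   /* no longer needs to wait */
    case EXIT_OR_KILL:   return s == RUNNING ? TERMINATED : s;
    }
    return s;
}

int main(void)
{
    enum pstate s = CREATED;
    enum event trace[] = { ADMIT, DISPATCH, NEED_RESOURCE,
                           RESOURCE_READY, DISPATCH, EXIT_OR_KILL };
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        s = next_state(s, trace[i]);
        printf("-> %s\n", name[s]);
    }
    return 0;
}
```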

8. Why is the Round-Robin scheduling algorithm designed for time-sharing systems? Explain.

Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system: it assigns time slices to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is both simple and easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks. Round-robin job scheduling may not be desirable if the sizes of the jobs or tasks are highly variable: a process that produces large jobs would be favoured over other processes. This problem may be solved by time-sharing, i.e., by giving each job a time slot or quantum (its allowance of CPU time) and interrupting the job if it is not completed by then. The job is resumed the next time a time slot is assigned to that process. This is exactly what a time-sharing system needs: every user receives a regular share of the CPU and so sees acceptable response times.


Example: The time slot could be 100 milliseconds. If job1 takes a total time of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.
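
A C sketch simulating this example: a 100 ms quantum, with job1 needing 250 ms in total; the lengths of the other two jobs are hypothetical:

```c
#include <stdio.h>

int main(void)
{
    int remaining[] = { 250, 100, 180 };      /* job1 = 250 ms, as in the text */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 100, clock = 0, left = n;

    while (left > 0)
        for (int i = 0; i < n; i++) {         /* visit the jobs in circular order */
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                   /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("job%d finishes at t = %d ms\n", i + 1, clock);
                left--;
            }
        }
    return 0;
}
```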

9. Define the following terms.

a. Mutual Exclusion

Mutual exclusion mechanisms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code in which a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion: a program, process, or thread can contain a critical section without any mechanism or algorithm that implements mutual exclusion.

b. Busy Waiting

Busy waiting (or spinning) is a technique in which a process repeatedly checks whether a condition is true, such as whether keyboard input is available or a lock is free. Spinning can also be used to generate an arbitrary time delay, a technique that was necessary on systems that lacked a method of waiting a specific length of time. On modern computers with widely differing processor speeds, spinning as a time-delay technique often produces unpredictable results unless code is implemented to determine how quickly the processor can execute a "do nothing" loop.
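
A minimal C sketch of busy waiting, assuming POSIX threads: the main thread spins on an atomic flag until a second thread sets it:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

atomic_int ready = 0;                     /* condition the spinning thread polls */

void *producer(void *arg)
{
    (void)arg;
    sleep(1);                             /* simulate slow work elsewhere */
    atomic_store(&ready, 1);              /* make the condition true */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    while (!atomic_load(&ready))
        ;                                 /* busy wait: burn CPU cycles re-checking */
    puts("condition became true");
    pthread_join(t, NULL);
    return 0;
}
```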

c. Critical Section

In concurrent programming, a critical section is a piece of code that accesses a shared resource (a data structure or device) that must not be concurrently accessed by more than one thread of execution. A critical section will usually terminate in fixed time, and a thread, task or process will have to wait only a fixed time to enter it (bounded waiting). Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore.
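A C sketch of a critical section guarded at entry and exit, here using a POSIX mutex rather than a semaphore; the shared counter is an arbitrary stand-in for any shared resource:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;                         /* the shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);        /* entry protocol: acquire exclusive use */
        counter++;                        /* critical section */
        pthread_mutex_unlock(&lock);      /* exit protocol: release */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* 200000, thanks to mutual exclusion */
    return 0;
}
```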

10. Explain how Banker’s algorithm is used to check safe state of a system.

The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm prevents deadlock by denying or postponing the request if it determines that accepting the request could put the system in an unsafe state (one where deadlock could occur). When a new process enters the system, it must declare the maximum number of instances of each resource type it may need, and this maximum may not exceed the total number of resources in the system. Also, when a process gets all its requested resources, it must return them in a finite amount of time.

A state is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward. This is a reasonable assumption in most cases, since the system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance perspective). Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.
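
A C sketch of the safety check itself; the matrices below describe a hypothetical three-process, two-resource state (Need = Max minus Allocation):

```c
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes (hypothetical example) */
#define R 2   /* resource types */

/*
 * Banker's safety check: the state is safe iff some ordering lets every
 * process acquire its remaining need, terminate, and return its resources.
 */
int available[R]     = { 3, 2 };
int allocation[P][R] = { {1, 0}, {2, 1}, {0, 2} };
int need[P][R]       = { {2, 2}, {1, 1}, {3, 1} };   /* max - allocation */

bool is_safe(void)
{
    int work[R];
    bool finished[P] = { false };
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                           /* assume p runs to completion... */
                for (int r = 0; r < R; r++)
                    work[r] += allocation[p][r];     /* ...and frees its resources */
                finished[p] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;                 /* nobody can finish: unsafe */
    }
    return true;
}

int main(void)
{
    puts(is_safe() ? "state is safe" : "state is unsafe");
    return 0;
}
```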