
SURESH KUMAR SUTHAR

Roll No: 520776763

BCA 3rd Semester

BC0042 – 01

OPERATING SYSTEMS

Part-A

1. List out salient Operating System functions.

Ans:

Modern operating systems generally have the following three major goals. Operating systems generally accomplish these goals by running processes at low privilege and providing service calls that invoke the operating system kernel in a high-privilege state.

Operating System Functions:

i) To hide details of the hardware by creating abstractions

ii) To manage (allocate) resources among processes

iii) To provide a pleasant and effective user interface

2. List out the major components of an Operating System.

Ans:

Modern operating systems share the goal of supporting the following types of system components:

i) Process Management

ii) Main-Memory Management

iii) File Management

iv) I/O System Management

v) Secondary-Storage Management

vi) Networking


vii) Protection System

viii) Command Interpreter System

3. Define Process.

Ans:

A process is created to execute a command. The code file for the command is used to initialize the virtual memory containing the user code and global variables. The user stack for the initial thread is cleared, and the parameters to the command are passed as parameters to the main function of the program. Files are opened corresponding to the standard input and output (keyboard and screen, unless file redirection is used).

4. Define Threads.

Ans:

A thread is an instance of execution (the entity that executes). All the threads that make up a process share access to the same user program, virtual memory, open files, and other operating system resources. Each thread has its own program counter, general-purpose registers, and user and kernel stacks. The program counter and general-purpose registers for a thread are stored in the CPU when the thread is executing.
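A minimal Python sketch of this sharing (my illustration, not part of the original answer): four threads update one process-wide counter while each runs on its own stack.

```python
import threading

counter = 0                      # shared: lives in the process's memory
lock = threading.Lock()

def increment(times):
    # Each thread runs this function with its own stack and program
    # counter, but all of them see the same global `counter`.
    global counter
    for _ in range(times):
        with lock:               # serialize access to the shared state
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 40000: all threads updated one shared variable
```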

5. Mention the necessary conditions for Deadlock occurrence.

Ans:

In order for deadlock to occur, four conditions must be true.

i) Mutual Exclusion- Each resource is either currently allocated to exactly one process or it is available.

ii) Hold and Wait- Processes currently holding resources can request new resources.

iii) No Preemption- Once a process holds a resource, it cannot be taken away by another process or the kernel.

iv) Circular Wait- There is a circular chain of processes, each waiting for a resource held by the next process in the chain.
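As a small sketch of how the four conditions combine (my illustration; the lock names are hypothetical), the two Python threads below acquire the same two locks in opposite orders and will typically hang:

```python
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:                      # hold one resource (mutual exclusion)...
        time.sleep(0.1)              # ...and keep it while waiting (hold and wait)
        print(f"{name} waiting for second lock")
        with second:                 # request a resource the peer holds
            print(f"{name} got both locks")

# Opposite acquisition orders create a circular wait: t1 holds A and wants B,
# while t2 holds B and wants A. Locks are not preemptible, so all four
# conditions hold and both threads hang (this demo deadlocks by design).
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
```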


Part-B

1. Write a note on:

Real Time Operating Systems (RTOS)

Distributed Operating Systems

Ans:

Real Time Operating Systems (RTOS):

The advent of timesharing provided good response times to computer users. However, timesharing could not satisfy the requirements of some applications. Real-time (RT) operating systems were developed to meet the response requirements of such applications.

There are two flavors of real-time systems. A hard real-time system guarantees that critical tasks complete at a specified time. A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks and retains that priority until it completes. The several areas in which this type is useful are multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers. Because of the expanded uses for soft real-time functionality, it is finding its way into most current operating systems, including major versions of Unix and Windows NT.

A real-time operating system is one which helps to fulfill the worst-case response-time requirements of an application. An RTOS provides the following facilities for this purpose:

1. Multitasking within an application.

2. The ability to define the priorities of tasks.

3. Priority-driven or deadline-oriented scheduling.

4. Programmer-defined interrupts.

A task is a sub-computation in an application program which can be executed concurrently with other sub-computations in the program, except at specific places in its execution called synchronization points. Multitasking, which permits the existence of many tasks within the application program, provides the possibility of overlapping the CPU and I/O activities of the application with one another. This helps in reducing its elapsed time. The ability to specify priorities for the tasks provides additional controls to a designer while structuring an application to meet its response-time requirements.

Real-time operating systems (RTOS) are specifically designed to respond to events that happen in real time. This can include computer systems that run factory floors, computer systems for emergency room or intensive care unit equipment (or even the entire ICU), computer systems for air traffic control, or embedded systems. RTOSs are grouped according to the response time that is acceptable (seconds, milliseconds, microseconds) and according to whether or not they involve systems where failure can result in loss of life.

Distributed Operating Systems:

A recent trend in computer systems is to distribute computation among several processors. In loosely coupled systems, the processors do not share memory or a clock; instead, each processor has its own local memory. The processors communicate with one another using a communication network.

The processors in a distributed system may vary in size and function, and are referred to by a number of different names, such as sites, nodes, and computers, depending on the context. The major reasons for building distributed systems are:

Resource sharing: If a number of different sites are connected to one another, then a user at one site may be able to use the resources available at another.

Computation speed-up: If a particular computation can be partitioned into a number of sub-computations that can run concurrently, then a distributed system may allow a user to distribute the computation among the various sites, to run them concurrently.

Reliability: If one site fails in a distributed system, the remaining sites can potentially continue operating.

Communication: There are many instances in which programs need to exchange data with one another. A distributed database system is an example of this.


2. Write a note on:

Micro Kernel Architecture

Ans:

We have already seen that as UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system- and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility.

The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication is provided by message passing: the client program and a service never interact directly; rather, they communicate indirectly by exchanging messages with the microkernel.

One benefit of the microkernel approach is ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user processes rather than kernel processes: if a service fails, the rest of the operating system remains untouched.

Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services.
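A toy sketch of this message-based structure (my illustration; names like Microkernel and file_server are hypothetical), in which the "kernel" does nothing but route messages between a client and a user-space file server:

```python
import queue

class Microkernel:
    def __init__(self):
        self.ports = {}                      # service name -> mailbox

    def register(self, name):
        self.ports[name] = queue.Queue()
        return self.ports[name]

    def send(self, dest, message):
        self.ports[dest].put(message)        # the only kernel "service": IPC

kernel = Microkernel()
fs_mailbox = kernel.register("file_server")
client_mailbox = kernel.register("client")

# The client asks the file server for a file via the kernel, never directly.
kernel.send("file_server", {"op": "read", "path": "/etc/motd", "reply_to": "client"})

request = fs_mailbox.get()                   # file server handles the request
kernel.send(request["reply_to"], {"data": f"contents of {request['path']}"})

print(client_mailbox.get())                  # client receives the reply
```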

[Figure: Microkernel architecture. Client processes, device drivers, the file server, and virtual memory run above the microkernel, which sits directly on the hardware.]


Unix Architecture

Ans:

The following figure shows the UNIX operating system architecture. At the center is the hardware, covered by the kernel. Above that are the UNIX utilities and the command interface, such as the shell (sh).

UNIX kernel Components

The UNIX kernel has components as depicted in the figure. The figure is divided into three modes: user mode, kernel mode, and hardware.

The user mode contains user programs, which can access the services of the kernel components using the system call interface.

The kernel mode has four major components: system calls, the file subsystem, the process control subsystem, and hardware control. The system calls are the interface between user programs and the file and process control subsystems.

The file subsystem is responsible for file and I/O management through device drivers.

The process control subsystem contains the scheduler, inter-process communication, and memory management. Finally, hardware control is the interface between these two subsystems and the hardware.
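As a hedged illustration of the system call interface (the file path is a placeholder): a user program obtains kernel services only through system calls such as open, read, and write, which Python's os module exposes almost directly.

```python
import os

# This user-mode program never touches the disk directly: os.open,
# os.write, and os.read trap into the kernel's file subsystem, which
# drives the hardware through device drivers.
fd = os.open("/tmp/demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # open(2)
os.write(fd, b"written via a system call\n")                    # write(2)
os.close(fd)                                                    # close(2)

fd = os.open("/tmp/demo.txt", os.O_RDONLY)
print(os.read(fd, 64))                                          # read(2)
os.close(fd)
```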


3. Bring out the features of Best Fit, Worst Fit and First Fit, with block diagrams illustrating them.

Ans:

In addition to the responsibility of managing processes, the operating system

must efficiently manage the primary memory of the computer. The part of the

operating system which handles this responsibility is called the memory

manager. Since every process must have some amount of primary memory

in order to execute, the performance of the memory manager is crucial to the

performance of the entire system. Nutt explains: “The Memory Manager is

responsible for allocating primary memory to processes and for assisting the

programmer in loading and storing the contents of the primary memory.

Managing the sharing of primary memory and minimizing memory access

time are the basic goals of the memory manager.”

The real challenge of efficiently managing memory is seen in the case of a system which has multiple processes running at the same time. Since primary memory can be space-multiplexed, the memory manager can allocate a portion of primary memory to each process for its own use. However, the memory manager must keep track of which processes are running in which memory locations, and it must also determine how to allocate and de-allocate available memory when new processes are created and when old processes complete execution. While various strategies are used to allocate space to processes competing for memory, three of the most popular are Best fit, Worst fit, and First fit. Each of these strategies is described below:

• Best fit: The allocator places a process in the smallest block of unallocated memory in which it will fit. For example, suppose a process requests 12KB of memory and the memory manager currently has a list of unallocated blocks of 6KB, 14KB, 19KB, 11KB, and 13KB. The best-fit strategy will allocate 12KB of the 13KB block to the process.

• Worst fit: The memory manager places a process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the remaining space. Using the same example as above, worst fit will allocate 12KB of the 19KB block to the process, leaving a 7KB block for future use.

• First fit: There may be many holes in the memory, so the operating system, to reduce the amount of time it spends analyzing the available spaces, begins at the start of primary memory and allocates memory from the first hole it encounters that is large enough to satisfy the request. Using the same example as above, first fit will allocate 12KB of the 14KB block to the process.

A short simulation of the three strategies follows.
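A minimal sketch (added for illustration, not from the original answer) of the three placement strategies over the example free list; block sizes are in KB.

```python
def place(free_blocks, request, strategy):
    """Return the chosen block size, or None if nothing fits."""
    candidates = [b for b in free_blocks if b >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0]          # first hole big enough, in list order
    if strategy == "best":
        return min(candidates)        # smallest hole that still fits
    if strategy == "worst":
        return max(candidates)        # largest hole, to leave a big remainder

free = [6, 14, 19, 11, 13]
for s in ("best", "worst", "first"):
    block = place(free, 12, s)
    print(f"{s:5s} fit -> {block}KB block, leaving {block - 12}KB")
# best  fit -> 13KB block, leaving 1KB
# worst fit -> 19KB block, leaving 7KB
# first fit -> 14KB block, leaving 2KB
```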


4. Bring out the salient features of:

Paging

Segmentation

Ans:

Paging

For paged memory management, each process is associated with a page table. Each entry in the table contains the frame number of the corresponding page in the virtual address space of the process. This same page table is also the central data structure for the virtual-memory mechanism based on paging, although more facilities are needed.

Control bits

Since only some pages of a process may be in main memory, a bit in the

page table entry, P in Figure 1(a), is used to indicate whether the

corresponding page is present in main memory or not. Another control bit

needed in the page table entry is a modified bit, M, indicators whether the

content of the corresponding page have been altered or not since the page

was last loaded into main memory. We often say swapping in and swapping

out, suggesting that a process is typically separated into two parts, one

residing in main memory and the other in secondary memory, and some

pages may be removed from one part and join the other. They together

make up of the whole process image. Actually the secondary memory

contains the whole image of the process and part of it may have been

loaded into main memory. When swapping out is to be performed, typically

the page to• be swapped out may be simply overwritten by the new page,

since the corresponding page is already on secondary memory. However

sometimes the content of a page may have been altered at runtime, say a

page containing data. In this case, the alteration should be reflected in

secondary memory. So when the M bit is 1, then the page to be swapped

out should be written out. Other bits may also be used for sharing or

protection.
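A hedged sketch (the field layout is assumed for illustration) of how the P and M bits drive the swap-out decision:

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame: int              # frame number in main memory (valid when present)
    present: bool = False   # P bit: page resident in main memory?
    modified: bool = False  # M bit: altered since it was last loaded?

def store(entry, write_word):
    write_word(entry.frame)  # any write to the page...
    entry.modified = True    # ...sets the M bit

def swap_out(entry, write_back):
    # Only a dirty (modified) page must be written back to secondary
    # memory; a clean page already has an identical copy on disk.
    if entry.modified:
        write_back(entry.frame)
    entry.present = False
    entry.modified = False

entry = PageTableEntry(frame=7, present=True)
store(entry, lambda f: None)                               # page becomes dirty
swap_out(entry, lambda f: print(f"writing frame {f} to disk"))
```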

Segmentation

Segmentation is another popular method for both memory management and virtual memory. It has a number of advantages:

1. It fits the programmers’ need to organize their programs. With segmentation, a program may be divided into several segments, each for a specific function or for instructions relating to each other in some way.


2. Segmentation allows different pieces of a program to be compiled independently. Thus the whole program need not be recompiled and relinked after a single revision is made.

3. Segmentation eases the control of sharing and protection.

5. Define Cache Memory:

Bring out the concepts of Basic Cache Memory structure

Concept of Associative mapping

Fully Associative mapping

Ans:

Cache Memory:

Cache memory is random access memory (RAM) that a computer

microprocessor can access more quickly than it can access regular RAM.

As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of the data), it does not have to perform the more time-consuming read from the larger main memory.

Basic Cache Memory structure:

Processors are generally able to perform operations on operands faster than the access time of large-capacity main memory. Though semiconductor memory which can operate at speeds comparable with the operation of the processor exists, it is not economical to provide all the main memory with very high-speed semiconductor memory. The problem can be alleviated by introducing a small block of high-speed memory called a cache between the main memory and the processor.

The idea of cache memory is similar to virtual memory in that some active

portion of a low-speed memory is stored in duplicate in a higher-speed

cache memory. When a memory request is generated, the request is first

presented to the cache memory, and if the cache cannot respond, the

request is then presented to main memory.

The difference between cache and virtual memory is a matter of

implementation; the two notions are conceptually the same because they

both rely on the correlation properties observed in sequences of address

references. Cache implementations are totally different from virtual memory implementations because of the speed requirements of the cache.

We define a cache miss to be a reference to an item that is not resident in the cache but is resident in main memory. The corresponding concept for virtual memory is the page fault, which is defined to be a reference to a page in virtual memory that is not resident in main memory. For cache misses, the fast memory is the cache and the slow memory is main memory. For page faults, the fast memory is main memory and the slow memory is auxiliary memory.

[Figure: A cache-memory reference. The tag 0117X matches address 01173, so the cache returns the item in position X = 3 of the matched block.]

The address of a cell in memory is presented to the cache. The cache searches its directory of address tags, shown in the figure, to see if the item is in the cache. If the item is not in the cache, a miss occurs.

For READ operations that cause a cache miss, the item is retrieved from main memory and copied into the cache. During the short period available before the main-memory operation is complete, some other item in the cache is removed from the cache to make room for the new item.

The cache-replacement decision is critical; a good replacement algorithm can yield somewhat higher performance than can a bad replacement algorithm. The effective cycle time of a cache memory (t_eff) is the average of the cache-memory cycle time (t_cache) and the main-memory cycle time (t_main), where the probabilities in the averaging process are the probabilities of hits and misses. If we consider only READ operations, then a formula for the average cycle time is:

t_eff = t_cache + (1 - h) * t_main

where h is the hit ratio.
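As a worked illustration with assumed values (not from the original text): if t_cache = 2 ns, t_main = 50 ns, and h = 0.95, then t_eff = 2 + (1 - 0.95) * 50 = 4.5 ns, so a 95% hit rate keeps the average access time close to the cache's own speed.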

Fully Associative Mapping:

Perhaps the most obvious way of relating cached data to the main-memory address is to store both the memory address and the data together in the cache. This is the fully associative mapping approach. A fully associative cache requires the cache to be composed of associative memory holding both the memory address and the data for each cached line. The incoming memory address is simultaneously compared with all stored addresses using the internal logic of the associative memory, as shown in Fig. 13. If a match is found, the corresponding data is read out. Single words from anywhere within the main memory can be held in the cache, if the associative part of the cache is capable of holding a full address.

The fully associative mapping cache gives the greatest flexibility in holding combinations of blocks in the cache, and minimum conflict for a given size of cache, but it is also the most expensive, because of the cost of the associative memory. It also requires that a replacement algorithm be implemented in hardware to maintain a high speed of operation. The fully associative cache can be formed economically only with a moderate capacity. Microprocessors with small internal caches often employ the fully associative mechanism. A small sketch of the lookup follows.
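A minimal sketch of a fully associative lookup (added for illustration; real associative memory compares all tags in parallel in hardware, and the replacement policy here is deliberately naive):

```python
class FullyAssociativeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}              # address tag -> data

    def read(self, address, main_memory):
        if address in self.lines:    # "simultaneous" compare of all tags
            return self.lines[address], "hit"
        data = main_memory[address]  # miss: fetch from the slow memory
        if len(self.lines) >= self.capacity:
            self.lines.pop(next(iter(self.lines)))  # naive replacement
        self.lines[address] = data   # any line can hold any address
        return data, "miss"

memory = {0x01173: 42, 0x02000: 7}
cache = FullyAssociativeCache(capacity=2)
print(cache.read(0x01173, memory))   # (42, 'miss') on first reference
print(cache.read(0x01173, memory))   # (42, 'hit') thereafter
```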


6. For the following reference string with three page memory frame:

6, 0, 1, 2, 3, 4, 2, 3, 6, 0, 3, 2, 1, 2, 0, 1, 6, 1, 0, 3

a. Apply FIFO algorithm

b. Apply LRU algorithm

c. Apply Optimal algorithm

d. Apply Clock algorithm

Ans:

FIFO

The FIFO policy treats the page frames allocated to a process as a circular buffer, and pages are removed in round-robin style. It may be viewed as a modified version of the LRU policy: instead of the least recently used page, the earliest loaded page is replaced, on the assumption that the page that has resided in main memory for the longest time is also the least likely to be used in the future. This logic may be wrong if some part of the program is constantly used, which can lead to more page faults. The advantage of this policy is that it is one of the simplest page replacement policies to implement, since all that is needed is a pointer that circles through the page frames of the process.

Least recently used (LRU)

Although we do not know the future exactly, we can predict it to some extent based on history. By the principle of locality, the page that has not been used for the longest time is also the least likely to be referenced. The LRU algorithm thus selects that page to be replaced, and experience shows that the LRU policy does nearly as well as the optimal policy. However, since the decision making is based on history, the system has to keep track of the references made all the way from the beginning of execution, and the overhead would be tremendous.

Optimal


If we know which page is least likely to be referenced later on, then we simply select that page, which is undoubtedly the optimal solution. Unfortunately, this is not realistic, because we cannot know exactly what is going to happen in the future. The value of discussing this algorithm is that it may serve as a benchmark for evaluating the performance of other algorithms. Suppose we have a process which is made up of 5 pages, 3 frames in main memory are allocated to it, and we already know that the references to those pages occur in the order:

2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2

Clock

The clock policy is basically a variant of the FIFO policy, except that it also considers, to some extent, the last access times of pages, by associating an additional bit with each frame, referred to as the use bit. When a page is referenced, its use bit is set to 1. The set of frames that might be selected for replacement is viewed as a circular buffer, with which a pointer is associated. When a free frame is needed but not available, the system scans the buffer to find a frame with a use bit of 0, and the first such frame is selected for replacement. During the scan, whenever a frame with a use bit of 1 is met, the bit is reset to 0. Thus, if all the frames have a use bit of 1, the pointer makes a complete cycle through the buffer, setting all the use bits to 0, and stops at its original position, replacing the page in that frame. After a replacement is made, the pointer is set to point to the next frame in the buffer.

Applying the four policies to the reference string given in the question is sketched below.
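Since the question asks to actually apply each policy to the given string with three frames, here is a minimal Python simulation (my sketch, not part of the original answer; the fault counts in the comment are my own and worth verifying by running it):

```python
# By my count this prints: fifo 14, lru 13, opt 11, clock 14 page faults.
def simulate(refs, nframes, policy):
    mem, faults, hand = [], 0, 0         # hand and use bits are for clock
    use = {}
    for i, page in enumerate(refs):
        use[page] = 1                    # a reference sets the use bit
        if page in mem:
            if policy == "lru":          # keep the list in recency order
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1
        if len(mem) < nframes:           # frames still free: just load
            mem.append(page)
            continue
        if policy in ("fifo", "lru"):    # evict oldest loaded / least recent
            mem.pop(0)
        elif policy == "opt":            # evict the page used farthest ahead
            future = refs[i + 1:]
            victim = max(mem, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            mem.remove(victim)
        elif policy == "clock":          # second-chance scan of the frames
            while use[mem[hand]] == 1:
                use[mem[hand]] = 0
                hand = (hand + 1) % nframes
            mem[hand] = page             # replace in place
            hand = (hand + 1) % nframes
            continue
        mem.append(page)
    return faults

refs = [6, 0, 1, 2, 3, 4, 2, 3, 6, 0, 3, 2, 1, 2, 0, 1, 6, 1, 0, 3]
for policy in ("fifo", "lru", "opt", "clock"):
    print(policy, simulate(refs, 3, policy))
```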
