I/O Management and Disk Scheduling
Chapter 11
I/O Devices
I/O devices can be grouped into three categories:
1. Human readable
Used to communicate with the computer user
E.g.:
– Printers
– Video display terminals
» Display
» Keyboard
» Mouse
2. Machine readable
Used to communicate with electronic equipment
E.g.:
– Disk and tape drives
– Sensors
– Controllers
– Actuators
3. Communication
Used to communicate with remote devices
E.g.:
– Digital line drivers
– Modems
I/O devices differ in several respects:
1. Data rate
There may be differences of several orders of magnitude between data transfer rates, as shown in Figure 11.1.
2. Application
The use to which a device is put influences the software and policies of the OS and its supporting facilities.
– A disk used to store files requires file management software
– A disk used to store virtual memory pages needs special hardware and software to support it
– A terminal used by a system administrator may have a higher priority
3. Complexity of control
A printer requires a simple control interface compared with a disk, which requires a much more complex interface.
4. Unit of transfer
Data may be transferred as a stream of bytes for a terminal OR in larger blocks for a disk
5. Data representation
Different encoding schemes are used by different devices.
6. Error conditions
Devices respond to errors differently
Organization of the I/O Function
Recall that there are three techniques for performing I/O:
1. Programmed I/O
The processor issues an I/O command to an I/O module on behalf of a process
The process busy-waits for the operation to complete before proceeding
2. Interrupt-driven I/O
An I/O command is issued by the processor
The processor continues executing instructions
The I/O module sends an interrupt when done
3. Direct Memory Access (DMA)
A DMA module controls the exchange of data between main memory and the I/O device
The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred
Table 11.1 indicates the relationship among these three techniques.
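The busy-waiting that distinguishes programmed I/O can be illustrated with a toy Python model. This is not a real driver: the simulated device, its status register, and the tick count below are invented for the sketch.

```python
# Toy model of programmed I/O: the "processor" busy-waits on a
# simulated device status register. Device and register names are
# invented for illustration only.

class SimulatedDevice:
    def __init__(self, ticks_until_ready):
        self.ticks = ticks_until_ready
        self.data = None

    def issue_read(self):
        # The I/O command is issued; the device starts working.
        self.data = None

    def poll_status(self):
        # Each poll models one check of the device status register.
        self.ticks -= 1
        if self.ticks <= 0:
            self.data = b"sector-data"
            return "READY"
        return "BUSY"

def programmed_io_read(device):
    """Busy-wait: the process makes no progress until the device is ready."""
    device.issue_read()
    polls = 0
    while device.poll_status() == "BUSY":
        polls += 1          # wasted processor cycles
    return device.data, polls

data, wasted_polls = programmed_io_read(SimulatedDevice(ticks_until_ready=5))
```

Interrupt-driven I/O and DMA remove exactly this polling loop: the processor does useful work and is notified (once per transfer, or once per block) instead of checking.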
Organization of the I/O Function - Evolution of the I/O Function
Processor directly controls a peripheral device
A controller or I/O module is added:
Processor uses programmed I/O without interrupts
Processor does not need to handle the details of external devices
Controller or I/O module with interrupts:
Processor does not spend time waiting for an I/O operation to be performed
Increases efficiency
Direct Memory Access:
Blocks of data are moved into memory without involving the processor
Processor is involved at the beginning and end only
I/O module is a separate processor:
The CPU directs the I/O processor to execute an I/O program in main memory
The I/O processor fetches and executes these instructions without processor intervention
Allows the processor to specify a sequence of I/O activities and to be interrupted only when the entire sequence has been performed
I/O processor:
The I/O module has its own local memory
It is a computer in its own right
Organization of the I/O Function - Direct Memory Access
Figure 11.2 shows a typical DMA block diagram.
The processor delegates an I/O operation to the DMA module
The DMA module transfers data directly to or from memory
When complete, the DMA module sends an interrupt signal to the processor
The DMA mechanism can be configured in many ways.
Some possibilities are shown in Figure 11.3.
Operating System Design Issues – Design Objectives
Efficiency
Most I/O devices are extremely slow compared with main memory
Multiprogramming allows some processes to wait on I/O while another process executes
Even so, I/O cannot keep up with processor speed
Swapping is used to bring in additional Ready processes, but swapping is itself an I/O operation
Generality
Desirable to handle all I/O devices in a uniform manner
Applies both to the way processes view I/O devices and to the way the OS manages I/O devices and operations
Hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as read, write, open, close, lock, unlock
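The generality objective can be sketched as a uniform interface that upper-level code programs against without knowing the device type. The two toy device classes and the `copy` helper below are hypothetical stand-ins, not a real OS driver API.

```python
# Sketch of the "generality" objective: every device is seen through
# the same operations (open, close, read, write). Device classes here
# are invented for illustration.

from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def open(self): ...
    @abstractmethod
    def close(self): ...
    @abstractmethod
    def read(self, n): ...
    @abstractmethod
    def write(self, data): ...

class NullPrinter(Device):
    """Toy stream device: write-only, discards output but counts bytes."""
    def __init__(self): self.written = 0
    def open(self): pass
    def close(self): pass
    def read(self, n): return b""
    def write(self, data): self.written += len(data)

class RamDisk(Device):
    """Toy block device backed by a bytearray."""
    def __init__(self, size):
        self.store = bytearray(size)
        self.pos = 0
    def open(self): self.pos = 0
    def close(self): pass
    def read(self, n):
        chunk = bytes(self.store[self.pos:self.pos + n])
        self.pos += n
        return chunk
    def write(self, data):
        self.store[self.pos:self.pos + len(data)] = data
        self.pos += len(data)

def copy(src: Device, dst: Device, n: int):
    # Upper-level code need not know which kind of device it is using.
    src.open(); dst.open()
    dst.write(src.read(n))
    src.close(); dst.close()

disk = RamDisk(16)
disk.open(); disk.write(b"hello"); disk.close()
printer = NullPrinter()
copy(disk, printer, 5)
```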
Operating System Design Issues – Logical Structure of the I/O Function
The I/O facility leads to the type of organization suggested by Figure 11.4
The details of the organization depend on the type of device and the application
Three of the most important logical structures are shown:
– Local peripheral device
– Communication port
– File system
Consider the simplest case: a local peripheral device that communicates in a simple fashion, such as a stream of bytes or records.
For Figure 11.4a, the following layers are involved:
Logical I/O
Deals with the device as a logical resource and is not concerned with the details of actually controlling the device
Concerned with managing general I/O functions on behalf of user processes
– Open, close, read and write
Device I/O
The requested operations and data are converted into appropriate sequences of I/O instructions, channel commands and controller orders
Buffering techniques may be used to improve utilization
Scheduling and control
The actual queuing and scheduling of I/O operations occurs here, together with control of the operations
Interacts with the I/O module and the device hardware
Figure 11.4b is the same as Figure 11.4a.
The principal difference is that the logical I/O module is replaced by a communication architecture
Consists of a number of layers
– E.g. TCP/IP
Figure 11.4c shows a representative structure for managing I/O on a secondary storage device that supports a file system.
Three layers not previously discussed:
Directory management
Symbolic file names are converted to identifiers that reference the file either directly or indirectly through a file descriptor or index table
Concerned with user operations that affect the directory of files
– Add, delete
File system
Deals with the logical structure of files and with the operations that can be specified by the user
– Open, close, read, write
Access rights are managed at this layer
Physical organization
Translation of virtual addresses to physical addresses using segmentation or paging
Allocation of secondary storage space and main storage buffers
I/O Buffering
Suppose a user process wishes to read blocks of data from a tape one at a time
Each block has a length of 512 bytes
Data are to be read into a data area within the address space of the user process at virtual addresses 1000 to 1511
The process executes an I/O command to the tape unit and waits for the data to become available
There are two problems with this approach:
1. The user program is hung up waiting for the I/O to complete
2. It interferes with swapping decisions by the OS
For problem 2:
Virtual locations 1000–1511 must remain in main memory during the course of the block transfer; otherwise some data may be lost
There is also a risk of single-process deadlock:
The process is suspended while waiting for the I/O command and is swapped out
The process is blocked waiting for the I/O event, and the I/O operation is blocked waiting for the process to be swapped in
To avoid this problem, a locking facility is used
The same problem applies to an output operation:
If a block is being transferred from a user process area directly to an I/O module, the process is blocked during the transfer and may not be swapped out.
To avoid these overheads and inefficiencies: buffering
Perform input transfers in advance of requests being made
Perform output transfers some time after the request is made
Reasons for buffering:
Processes must wait for I/O to complete before proceeding
Certain pages must remain in main memory during I/O
There are two types of I/O devices:
1. Block oriented
2. Stream oriented
Block-oriented
Information is stored in fixed-sized blocks
Transfers are made a block at a time
Used for disks and tapes
Stream-oriented
Transfers information as a stream of bytes
Used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage
I/O Buffering – Single Buffer
The operating system assigns a buffer in main memory for an I/O request
Block-oriented:
Input transfers are made to the buffer
When the transfer is complete, the process moves the block into user space and requests another block
This is reading ahead
Speeds things up compared with no buffering:
The user process can work on one block of data while the next block is read in
Swapping can occur, since input is taking place in system memory, not user memory
The operating system keeps track of the assignment of system buffers to user processes
For output, when data are being transmitted to a device, they are first copied from user space to the buffer, then written to disk
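The read-ahead pattern above can be sketched in a few lines of Python. The block source, block size, and function names below are invented for the sketch; real buffering happens in OS memory, concurrently with the process.

```python
# Sketch of single buffering with read-ahead: the next block is fetched
# into the system buffer before the process asks for it. Data and block
# size are invented for the illustration.

BLOCK = 4

def device_blocks(data, block=BLOCK):
    # Stand-in for the device delivering fixed-size blocks.
    for i in range(0, len(data), block):
        yield data[i:i + block]

def single_buffered_reads(data):
    """Yield blocks to the 'user process', always holding the next
    block in the system buffer ahead of the current request."""
    blocks = device_blocks(data)
    system_buffer = next(blocks, None)      # read ahead of first request
    while system_buffer is not None:
        current = system_buffer             # move block into user space
        system_buffer = next(blocks, None)  # start reading the next block
        yield current

received = list(single_buffered_reads(b"ABCDEFGHIJ"))
```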
Stream-oriented
Used a line at a time OR a byte at a time
User input from a terminal is one line at a time, with a carriage return signaling the end of the line
Output to the terminal is one line at a time
The buffer is used to hold a single line
The user process is suspended during input, waiting for the arrival of the entire line
For output, the user process can place a line of output in the buffer and continue processing
I/O Buffering – Double Buffer
Use two system buffers instead of one
A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
This technique is known as double buffering or buffer swapping
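The buffer-swapping idea can be modeled sequentially in Python; real double buffering overlaps the fill and the drain in time, which this sketch abstracts away. All names and data are illustrative.

```python
# Minimal sequential model of double buffering (buffer swapping): the
# OS fills one buffer while the process drains the other, then the two
# swap roles. Timing is abstracted away.

def double_buffered_transfer(blocks):
    buffers = [None, None]   # two system buffers
    fill, drain = 0, 1       # indices: buffer being filled / drained
    consumed = []
    it = iter(blocks)
    buffers[fill] = next(it, None)            # prime the first buffer
    while buffers[fill] is not None:
        fill, drain = drain, fill             # swap roles
        buffers[fill] = next(it, None)        # OS fills one buffer...
        consumed.append(buffers[drain])       # ...while the process drains the other
        buffers[drain] = None
    return consumed

out = double_buffered_transfer([b"b0", b"b1", b"b2"])
```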
I/O Buffering – Circular Buffer
More than two buffers are used
Each individual buffer is one unit in a circular buffer
Used when the I/O operation must keep up with the process
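A minimal fixed-size circular buffer looks like the sketch below; the capacity and method names are chosen for illustration. Producer and consumer indices wrap around the array, and each `get` frees a slot for the next `put`.

```python
# A minimal circular (ring) buffer: each slot holds one unit, and the
# read/write indices wrap around. Capacity and API names are invented.

class CircularBuffer:
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0      # next slot to read
        self.tail = 0      # next slot to write
        self.count = 0

    def put(self, item):
        if self.count == len(self.slots):
            raise BufferError("buffer full; producer must wait")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty; consumer must wait")
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item

ring = CircularBuffer(3)
for block in ("b0", "b1", "b2"):
    ring.put(block)
first = ring.get()       # frees a slot, so the producer can continue
ring.put("b3")
```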
I/O Buffering
Buffering is a technique that smooths out peaks in I/O demand.
However, no amount of buffering will allow an I/O device to keep pace with a process indefinitely.
Even with multiple buffers, all of the buffers will eventually fill up and the process will have to wait after processing each chunk of data.
In a multiprogramming environment, buffering is a tool that can increase the efficiency of the operating system and the performance of individual processes.
Disk Scheduling – Disk Performance Parameters
When the disk drive is operating, the disk rotates at a constant speed
To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector
Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system
The time it takes to position the head at the desired track is the seek time
Once the track is selected, the disk controller waits until the appropriate sector rotates to line up with the head
The time it takes for the beginning of the sector to reach the head is the rotational delay
The sum of the seek time and the rotational delay is the access time: the time it takes to get into position to read or write
Once the head is in position, the read or write operation is performed as the sector moves under the head; this is the data transfer operation
The time required for the transfer is the transfer time.
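These parameters can be combined in a worked example. The usual textbook formulation takes the average rotational delay as half a revolution and the transfer time as b / (r × N) for b bytes on a track of N bytes rotating at r revolutions per second; the drive numbers below are invented for the calculation.

```python
# Worked example of the timing parameters just defined. Formulas:
# average rotational delay = half a revolution; transfer time =
# b / (r * N). All drive numbers are assumed, not from a real disk.

seek_ms = 4.0                 # average seek time (ms), assumed
rpm = 7500                    # rotational speed, assumed
bytes_per_track = 500_000     # track capacity N, assumed
bytes_to_read = 2_500         # transfer size b, assumed

rev_per_sec = rpm / 60.0                        # r = 125 rev/s
rotational_delay_ms = 1000 * 0.5 / rev_per_sec  # half a revolution
transfer_ms = 1000 * bytes_to_read / (rev_per_sec * bytes_per_track)

access_ms = seek_ms + rotational_delay_ms       # time to get into position
total_ms = access_ms + transfer_ms              # plus the data transfer
```

With these numbers the positioning time (8 ms) dwarfs the transfer itself (0.04 ms), which is why reducing seeks matters so much in the scheduling policies that follow.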
Disk Scheduling – Policies
Seek time is the reason for differences in performance
If sector access requests involve selecting tracks at random, the performance of disk I/O will be as poor as possible
To improve performance, reduce the time spent on seeks
For a single disk there will be a number of I/O requests (reads and writes) from various processes in a queue
If requests are selected randomly: poor performance
First-in, first-out (FIFO)
Requests are processed sequentially, in arrival order
Fair to all processes
Approaches random scheduling in performance if there are many processes
Priority
The goal is not to optimize disk use but to meet other objectives
Short batch jobs may have higher priority
Provides good interactive response time
Last-in, first-out
Good for transaction processing systems
The device is given to the most recent user so there should be little arm movement
Possibility of starvation since a job may never regain the head of the line
Shortest Service Time First (SSTF)
Select the disk I/O request that requires the least movement of the disk arm from its current position
Always choose the minimum seek time
SCAN
The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction
The direction is then reversed
C-SCAN
Restricts scanning to one direction only
When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
N-step-SCAN
Segments the disk request queue into subqueues of length N
Subqueues are processed one at a time, using SCAN
New requests are added to the other queue while a queue is being processed
FSCAN
Two queues
One queue is left empty to receive new requests
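The policies above can be compared by totaling the head movement each one produces on the same request queue. The sketch below uses a queue of nine requests with the head starting at track 100 (the common textbook example); the simulation itself is a simplification, ignoring rotational delay and arrival times.

```python
# Total head movement (in tracks) for FIFO, SSTF, and SCAN over the
# same request queue. Start track and queue follow the usual textbook
# example; arrival times and rotation are ignored.

def fifo(start, requests):
    moved, pos = 0, start
    for t in requests:                 # serve in arrival order
        moved += abs(t - pos)
        pos = t
    return moved

def sstf(start, requests):
    pending, moved, pos = list(requests), 0, start
    while pending:
        t = min(pending, key=lambda r: abs(r - pos))  # closest track next
        moved += abs(t - pos)
        pos = t
        pending.remove(t)
    return moved

def scan(start, requests, direction=+1):
    # Serve all requests in the current direction, then reverse.
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    order = up + down if direction > 0 else down + up
    return fifo(start, order)          # movement along a fixed order

queue = [55, 58, 39, 18, 90, 160, 150, 38, 184]
moves = {name: f(100, queue) for name, f in
         [("FIFO", fifo), ("SSTF", sstf), ("SCAN", scan)]}
```

On this queue FIFO moves the arm 498 tracks, SSTF 248, and SCAN 250: both seek-aware policies roughly halve the head movement of FIFO.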
RAID
Redundant Array of Independent Disks
A set of physical disk drives viewed by the operating system as a single logical drive
Data are distributed across the physical drives of an array
Redundant disk capacity is used to store parity information
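How parity provides redundancy in the block-parity RAID levels can be shown with bytewise XOR: the parity strip is the XOR of the data strips, so any single lost strip can be rebuilt from the survivors. Strip contents below are toy data.

```python
# Sketch of parity-based redundancy (as in block-parity RAID levels):
# parity strip = bytewise XOR of the data strips, so any one lost
# strip is recoverable from the rest. Strip contents are toy data.

def xor_strips(strips):
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

data_strips = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_strips(data_strips)

# Simulate losing one disk and rebuilding its strip from the others:
survivors = [data_strips[0], data_strips[2], parity]
rebuilt = xor_strips(survivors)
```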
RAID 0 (non-redundant)
RAID 1 (mirrored)
RAID 2 (redundancy through Hamming code)
RAID 3 (bit-interleaved parity)
RAID 4 (block-level parity)
RAID 5 (block-level distributed parity)
RAID 6 (dual redundancy)
Disk Cache
A buffer in main memory for disk sectors
Contains a copy of some of the sectors on the disk
When an I/O request is made for a particular sector, a check is made to determine whether the sector is in the disk cache:
If yes, the request is satisfied via the cache
If no, the requested sector is read into the disk cache from the disk
Disk Cache – Design Considerations
Several design issues arise:
When an I/O request is satisfied from the disk cache, the data in the disk cache must be delivered to the requesting process.
This can be done either by:
– Transferring the block of data within main memory from the disk cache to memory assigned to the user process
– Using a shared memory capability and passing a pointer to the appropriate slot in the disk cache
» Saves time
When a new sector is brought into the disk cache, one of the existing blocks must be replaced.
Least Recently Used (LRU)
Least Frequently Used (LFU)
Least Recently Used (LRU)
Replace the block that has been in the cache the longest with no reference to it
The cache consists of a stack of blocks
The most recently referenced block is on the top of the stack
When a block is referenced or brought into the cache, it is placed on the top of the stack
The block on the bottom of the stack is removed when a new block is brought in
Blocks don't actually move around in main memory; a stack of pointers is used
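The stack-of-pointers scheme can be modeled with a Python list of block ids standing in for the pointers; the capacity and reference string below are invented for the sketch.

```python
# LRU replacement as described above: the cache is a stack of block
# ids (pointers in a real system). A referenced block moves to the
# top; the bottom block is evicted when the cache overflows.

def lru_reference(stack, block, capacity):
    """Reference `block`; return the evicted block id, or None."""
    if block in stack:
        stack.remove(block)       # already cached: move to top
        stack.insert(0, block)
        return None
    stack.insert(0, block)        # brought into the cache: placed on top
    if len(stack) > capacity:
        return stack.pop()        # bottom of the stack is replaced
    return None

stack = []
evicted = [lru_reference(stack, b, capacity=3) for b in [1, 2, 3, 1, 4, 5]]
```

Re-referencing block 1 saves it: the evictions fall on blocks 2 and 3, which sank to the bottom of the stack.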
Least Frequently Used (LFU)
The block that has experienced the fewest references is replaced
A counter is associated with each block
The counter is incremented each time the block is accessed
The block with the smallest count is selected for replacement
However, some blocks may be referenced many times in a short period, making the reference count misleading
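A counter-based LFU sketch, including the pitfall just noted: a short burst of references inflates a block's count so it survives eviction long after it stops being useful. Capacity and the reference string are toy values.

```python
# LFU replacement with a per-block reference counter. Demonstrates the
# misleading-count pitfall: a burst of references to A keeps A cached
# even though A is no longer being used.

def lfu_touch(counts, block, capacity):
    """Reference `block`; return the evicted block id, or None."""
    if block in counts:
        counts[block] += 1
        return None
    victim = None
    if len(counts) >= capacity:
        victim = min(counts, key=counts.get)  # fewest references
        del counts[victim]
    counts[block] = 1
    return victim

counts = {}
for b in ["A", "A", "A", "B", "C"]:   # burst of references to A
    lfu_touch(counts, b, capacity=3)
evicted = lfu_touch(counts, "D", 3)   # cache full: someone must go
```

Block B is evicted while A, whose burst is over, survives on its inflated count.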