Concurrency
CS-502 Operating Systems, Fall 2007
(Slides include materials from Operating System Concepts, 7th ed., by Silberschatz, Galvin, & Gagne and from Modern Operating Systems, 2nd ed., by Tanenbaum)
Concurrency

• Since the beginning of computing, management of concurrent activity has been the central issue
• Concurrency between computation and input or output
• Concurrency between computation and user
• Concurrency between essentially independent computations that take place at the same time
• Concurrency between parts of large computations that are divided up to improve performance
• …
Example – 1960s

• Programmers tried to write programs that would read from input devices and write to output devices in parallel with computing
  • Card readers, paper tape, line printers, etc.
• Challenges
  • Keeping the buffers straight
  • Synchronizing between read activity and computing
  • Computing getting ahead of I/O activity
  • I/O activity getting ahead of computing
Shared Computing Services

• Multiple simultaneous, independent users of large computing facilities
  • E.g., time-sharing systems of university computing centers
• Data centers of large enterprises
  • Multiple accounting, administrative, and data processing activities over common databases
• …
Modern PCs and Workstations

• Multiple windows on a personal computer doing completely independent things
  • Word, Excel, Photoshop, E-mail, etc.
• Multiple activities within one application
  – E.g., in Microsoft Word
    • Reading and interpreting keystrokes
    • Formatting line and page breaks
    • Displaying what you typed
    • Spell checking
    • Hyphenation
    • …
Modern Game Implementations

• Multiple characters in a game
  • Concurrently & independently active
  • Multiple constraints, challenges, and interactions among characters
• Multiple players
Technological Pressure

• From the early 1950s to the early 2000s, single-processor computers increased in speed by a factor of 2 every 18 months or so
  • Moore's Law
• Multiprocessing was somewhat of a niche problem
  • Specialized computing centers and techniques
Technological Pressure (continued)

• No longer!
• Modern microprocessor clock speeds are no longer increasing with Moore's Law
• Microprocessor density on chips still is!
  ⇒ multi-core processors are now the de facto standard
  • Even on low-end PCs!
Traditional Challenge for OS

• A useful set of abstractions that help to
  – Manage concurrency
  – Manage synchronization among concurrent activities
  – Communicate information in a useful way among concurrent activities
  – Do it all efficiently
Modern Challenge

• Methods and abstractions to help software engineers and application designers
  – Take advantage of the inherent concurrency in modern systems
  – Do so with relative ease
Fundamental Abstraction

• Process
  • … aka Task
  • … aka Thread
  • … aka [other terms]
Process (a generic term)

• Definition: A particular execution of a program
• Requires time, space, and (perhaps) other resources
• Separate from all other executions of the same program
  • Even those at the same time!
• Separate from executions of other programs
Process (a generic term – continued)

• Can be
  • Interrupted
  • Suspended
  • Blocked
  • Unblocked
  • Started or continued
• The fundamental abstraction of all modern operating systems
• Also known as thread (of control), task, job, etc.
• Note: a Unix/Windows "Process" is a heavyweight concept with more implications than this simple definition
Process (a generic term – continued)

• The concept emerged in the 1960s
• Intended to make sense out of the mish-mash of difficult programming techniques that bedeviled software engineers
• Analogous to a police or taxi dispatcher!
Background – Interrupts

• A mechanism in (nearly) all computers by which a running program can be suspended in order to cause the processor to do something else
• Two kinds:
  – Traps – synchronous, caused by the running program
    • Deliberate: e.g., system call
    • Error: e.g., divide by zero
  – Interrupts – asynchronous, spawned by some other concurrent activity or device
• Essential to the usefulness of computing systems
Hardware Interrupt Mechanism

• Upon receipt of an electronic signal, the processor
  • Saves the current PSW to a fixed location
  • Loads a new PSW from another fixed location
• PSW – Program Status Word
  • Program counter
  • Condition code bits (comparison results)
  • Interrupt enable/disable bits
  • Other control and mode information
    – E.g., privilege level, access to special instructions, etc.
• Occurs between machine instructions
• An abstraction in modern processors (see Silberschatz, §1.2.1 and §13.2.2)
Interrupt Handler

/* Enter with interrupts disabled */
Save registers of interrupted computation
Load registers needed by handler
Examine cause of interrupt
Take appropriate action (brief)
Reload registers of interrupted computation
Reload interrupted PSW and re-enable interrupts
    or
Load registers of another computation
Load its PSW and re-enable interrupts
Requirements of interrupt handlers

• Fast
• Avoid possibilities of interminable waits
• Must not count on correctness of the interrupted computation
• Must not get confused by multiple interrupts in close succession
• …
• More challenging on multiprocessor systems
Result

• Interrupts make it possible to have multiple concurrent activities at the same time
• But they don't provide an orderly way of thinking about those activities
• Need something more
• Hence, the emergence of the generic concept of process
  – (or whatever it is called in a particular operating system and environment)
Information the system needs to maintain about a typical process

• PSW (program status word)
  • Program counter
  • Condition codes
  • Control information – e.g., privilege level, priority, etc.
• Registers, stack pointer, etc.
  • Whatever hardware resources are needed to compute
• Administrative information for the OS
  • Owner, restrictions, resources, etc.
• Other stuff …
Process Control Block (PCB)
(example data structure in an OS)
Switching from process to process
Result

• A very clean way of thinking about separate computations
• Processes can appear to be executing in parallel
  • Even on a single-processor machine
• Processes really can execute in parallel
  • Multiprocessor or multi-core machine
• …
Process States
The Fundamental Abstraction

• Each process has its own "virtual" CPU
• Each process can be thought of as an independent computation
• On a fast enough CPU, processes can look like they are really running concurrently
Questions?
Challenge

• How can we help processes synchronize with each other?
  • E.g., how does one process "know" that another has completed a particular action?
  • E.g., how do separate processes "keep out of each others' way" with respect to some shared resource?
Digression (thought experiment)

static int y = 0;

int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}

Upon completion of main, y == 1
Thought experiment (continued)

static int y = 0;

Process 1:
int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}

Process 2:
int main2(int argc, char **argv)
{
    extern int y;
    y = y - 1;
    return y;
}

Assuming the processes run "at the same time," what are the possible values of y after both terminate?
Another Abstraction – Atomic Operation

• An operation that either happens entirely or not at all
  • No partial result is visible or apparent
  • Non-interruptible
• If two atomic operations happen "at the same time"
  • The effect is as if one is first and the other is second
  • (Usually) don't know which is which
Hardware Atomic Operations

• On (nearly) all computers, read and write operations on machine words can be considered atomic
  • Non-interruptible
  • Each either happens or it doesn't
  • Not in conflict with any other operation
• When two attempts are made to read or write the same data, one is first and the other is second
  • Don't know which is which!
• No other guarantees
  • (unless we take extraordinary measures)
Definitions

• Definition: race condition
  • When two or more concurrent activities operate on the same variable, so that the final value depends on the order in which they execute
  • Random outcome
• Critical Region (aka critical section)
  • One or more fragments of code that operate on the same data, such that at most one activity at a time may be permitted to execute anywhere in that set of fragments
Synchronization – Critical Regions
Class Discussion

• How do we keep multiple computations from being in a critical region at the same time?
  • Especially when the number of computations is > 2
  • Remembering that read and write operations are atomic
Possible ways to protect a critical section

• Without OS assistance — locking variables & busy waiting
  – Peterson's solution (pp. 195–197)
  – Atomic read-modify-write – e.g., Test & Set
• With OS assistance — abstract synchronization operations
  • Single processor
  • Multiple processors
Requirements – Controlling Access to a Critical Section

– Only one computation in the critical section at a time
– Symmetrical among n computations
– No assumption about relative speeds
– A stoppage outside the critical section does not lead to potential blocking of others
– No starvation, i.e., no combination of timings that could cause a computation to wait forever to enter its critical section
Non-solution

static int turn = 0;

Process 1:
while (TRUE) {
    while (turn != 0)
        /* loop */ ;
    critical_region();
    turn = 1;
    noncritical_region1();
}

Process 2:
while (TRUE) {
    while (turn != 1)
        /* loop */ ;
    critical_region();
    turn = 0;
    noncritical_region2();
}
What is wrong with this approach?
Peterson's solution (2 processes)

static int turn = 0;
static int interested[2];

void enter_region(int process) {
    int other = 1 - process;
    interested[process] = TRUE;
    turn = process;
    while (turn == process && interested[other] == TRUE)
        /* loop */ ;
}

void leave_region(int process) {
    interested[process] = FALSE;
}
Another approach: Test & Set instruction
(built into CPU hardware)

static int lock = 0;

extern int TestAndSet(int &i);
/* Sets the value of i to 1 and returns the previous value of i */

void enter_region(int &lock) {
    while (TestAndSet(lock) == 1)
        /* loop */ ;
}

void leave_region(int &lock) {
    lock = 0;
}
What about this solution?
Variations

• Compare & Swap(&a, old, new)
  • Atomically: if (a == old) { a = new; return TRUE; } else return FALSE;
• …
• A whole mathematical theory exists about the efficacy of these operations
• All require extraordinary circuitry in the processor, memory, and bus to implement atomically
With OS assistance

• Implement an abstraction:
  – A data type called semaphore
    • Non-negative integer values
  – An operation wait_s(semaphore &s) such that
    • if s > 0, atomically decrement s and proceed
    • if s == 0, block the computation until some other computation executes post_s(s)
  – An operation post_s(semaphore &s):
    • Atomically increment s; if one or more computations are blocked on s, allow precisely one of them to unblock and proceed
Critical Section control with Semaphore

static semaphore mutex = 1;

Process 1:
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region1();
}

Process 2:
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region2();
}
Does this meet the requirements for controlling access to critical sections?
Semaphores – History

• Introduced by E. Dijkstra in 1965
• wait_s() was called P()
  • Initial letter of a Dutch word meaning "test"
• post_s() was called V()
  • Initial letter of a Dutch word meaning "increase"
Abstractions

• The semaphore is an example of a new and powerful abstraction defined by the OS
  • I.e., a data type and some operations that add a capability that was not in the underlying hardware or system
• Any program can use this abstraction to control critical sections and to create more powerful forms of synchronization among computations
Process and semaphore data structures

class State {
    long int PSW;
    long int regs[R];
    /* other stuff */
};

class PCB {
    PCB *next, *prev, *queue;
    State s;
    PCB(…);    /* constructor */
    ~PCB();    /* destructor */
};

class Semaphore {
    int count;
    PCB *queue;
    friend void wait_s(Semaphore &s);
    friend void post_s(Semaphore &s);
    Semaphore(int initial);  /* constructor */
    ~Semaphore();            /* destructor */
};
Implementation

[Diagram: a ReadyQueue holding four PCBs; Semaphore A (count = 0) with two PCBs blocked on its queue; Semaphore B (count = 2) with an empty queue]
Implementation

• Action – dispatch a process to the CPU
  • Remove the first PCB from the ReadyQueue
  • Load registers and PSW
  • Return from interrupt or trap
• Action – interrupt a process
  • Save PSW and registers in the PCB
  • If not blocked, insert the PCB into the ReadyQueue (in some order)
  • Take appropriate action
  • Dispatch the same or another process from the ReadyQueue
Implementation – Semaphore actions

• Action – wait_s(Semaphore &s)
  • Implemented as a trap (with interrupts disabled)
    if (s.count == 0)
      – Save registers and PSW in the PCB
      – Queue the PCB on s.queue
      – Dispatch the next process on the ReadyQueue
    else
      – s.count = s.count – 1
      – Re-dispatch the current process
Implementation – Semaphore actions

• Action – post_s(Semaphore &s)
  • Implemented as a trap (with interrupts disabled)
    if (s.queue != null)
      – Save the current process in the ReadyQueue
      – Move the first process on s.queue to the ReadyQueue
      – Dispatch some process on the ReadyQueue
    else
      – s.count = s.count + 1
      – Re-dispatch the current process
Interrupt Handling

• (Quickly) analyze the reason for the interrupt
• Execute the equivalent of post_s on the appropriate semaphore as necessary
• Implemented in device-specific routines
  • Critical sections
  • Buffers and variables shared with device managers
• More about interrupt handling later in the course
Timer interrupts

• Can be used to enforce "fair sharing"
• The current process goes back to the ReadyQueue
  • After other processes of equal or higher priority
• Simulates concurrent execution of multiple processes on the same processor
Definition – Context Switch

• The act of switching from one process to another
  • E.g., upon an interrupt or some kind of wait for an event
• Not a big deal in simple systems and processors
• A very big deal in large systems such as Linux and Windows
  • On a Pentium 4: many microseconds!
Complications for Multiple Processors

• Disabling interrupts is not sufficient for atomic operations
  • Semaphore operations must themselves be implemented in critical sections
  • Queuing and dequeuing PCBs must also be implemented in critical sections
  • Other control operations need protection
• These problems all have solutions but need deeper thought!
Summary

• Interrupts are transparent to processes
• wait_s() and post_s() behave as if they are atomic
• Device handlers and all other OS services can be embedded in processes
• All processes behave as if they are executing concurrently
• The fundamental underpinning of all modern operating systems
Questions?