
Multiple-Processor Scheduling - SNS Courseware


Multiple-Processor Scheduling


•  CPU scheduling is more complex when multiple CPUs are available.
•  Homogeneous systems have identical cores: they can perform exactly the same tasks and have exactly the same capabilities. The cores are functionally identical.
•  Two approaches are used: asymmetric multiprocessing and symmetric multiprocessing (SMP).


Approaches to Multiple-Processor Scheduling


Asymmetric multiprocessing
•  Only one processor (the master server) accesses the system data structures, alleviating the need for data sharing.
•  All scheduling decisions, I/O processing, and other system activities are handled by this master processor.


Symmetric multiprocessing (SMP)
•  Each processor is self-scheduling: all processes may be placed in a common ready queue, or each processor may have its own private queue of ready processes.
   –  Currently the most common approach.


[Figure: SMP organization with a common (global) ready queue]


[Figure: SMP organization with per-CPU ready queues]


SMP Load Balancing

On SMP systems, all CPUs must be kept loaded for the system to be used efficiently. Load balancing attempts to keep the workload evenly distributed across processors.
•  Push migration – a periodic task checks the load on each processor and redistributes work when it finds an imbalance (a minimal sketch follows below).
•  Pull migration – an idle processor pulls a waiting task from a busy processor.

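The slides do not include code; the following is a minimal sketch of push migration over hypothetical per-CPU run queues (the structures and names are invented here for illustration, not a real kernel API). A periodic balancer finds the busiest and least-loaded queues and moves one task when the imbalance exceeds a threshold.

    /* Hypothetical per-CPU run queues -- a sketch of push migration. */
    #include <stddef.h>
    #include <stdio.h>

    #define NCPUS 4

    struct task { struct task *next; };
    struct runqueue { struct task *head; size_t len; };

    static struct runqueue rq[NCPUS];

    static struct task *pop(struct runqueue *q) {
        struct task *t = q->head;
        if (t) { q->head = t->next; q->len--; }
        return t;
    }

    static void push(struct runqueue *q, struct task *t) {
        t->next = q->head; q->head = t; q->len++;
    }

    /* Run periodically: move one task from the busiest queue to the
     * least-loaded queue if the imbalance exceeds one task. */
    static void balance_once(void) {
        int busiest = 0, idlest = 0;
        for (int i = 1; i < NCPUS; i++) {
            if (rq[i].len > rq[busiest].len) busiest = i;
            if (rq[i].len < rq[idlest].len)  idlest  = i;
        }
        if (rq[busiest].len > rq[idlest].len + 1) {
            struct task *t = pop(&rq[busiest]);
            if (t) push(&rq[idlest], t);
        }
    }

    int main(void) {
        static struct task pool[5];
        for (int i = 0; i < 5; i++) push(&rq[0], &pool[i]);   /* CPU 0 overloaded */
        balance_once();
        printf("cpu0=%zu cpu1=%zu\n", rq[0].len, rq[1].len);  /* prints cpu0=4 cpu1=1 */
        return 0;
    }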

Processor affinity

•  If a process migrates to another processor, the contents of the cache on the first processor must be invalidated and the cache on the second processor must be repopulated.
•  Because of this high cost of migration, most SMP systems try to avoid moving a process between processors.
•  Processor affinity – a process has an affinity for the processor on which it is currently running.
   –  Soft affinity: the OS attempts to keep a process running on the same processor, but makes no guarantee.
   –  Hard affinity: the process can specify a set of processors on which it may run, and the OS will not migrate it to other processors (see the sketch after this list).

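Hard affinity can be illustrated with the Linux-specific sched_setaffinity() call; this is only a sketch under the assumption of a Linux system (the CPU numbers below are arbitrary) and is not taken from the slides.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);    /* allow this process to run only on CPU 0 ... */
        CPU_SET(1, &set);    /* ... and CPU 1 */

        /* pid 0 means "the calling process"; the kernel will not migrate it
         * outside this CPU set (hard affinity). */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPUs 0 and 1\n");
        return 0;
    }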

NUMA and CPU Scheduling

Note that memory-placement algorithms can also consider affinity

[Figure: NUMA architecture – two CPUs in one computer, each with fast access to its local memory and slow access to the memory attached to the other CPU]


•  Main-memory architecture can also affect processor-affinity issues.
•  The figure illustrates an architecture featuring non-uniform memory access (NUMA), in which a CPU has faster access to some parts of main memory than to other parts.
•  The CPU scheduler and the memory-placement algorithms should work together, so that a process keeps running on a particular CPU and has its memory allocated where that CPU has fast access (a sketch follows below).

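As a rough illustration of coordinating scheduling with memory placement on a NUMA machine, the sketch below uses the Linux libnuma library (an assumption, not mentioned in the slides; compile with -lnuma). The process is restricted to the CPUs of one node and its buffer is allocated from that node's local memory, so accesses stay on the fast path.

    #include <numa.h>
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0) {                   /* no NUMA support on this system */
            fprintf(stderr, "NUMA not available\n");
            return 1;
        }
        int node = 0;                                  /* target node, chosen for illustration */
        numa_run_on_node(node);                        /* run this process on node 0's CPUs */
        char *buf = numa_alloc_onnode(1 << 20, node);  /* 1 MiB from node 0's local memory */
        if (buf == NULL)
            return 1;
        buf[0] = 42;                                   /* touching the page uses fast, local access */
        numa_free(buf, 1 << 20);
        return 0;
    }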

Multicore Processors
•  SMP systems have allowed several threads to run concurrently by providing multiple physical processors.
•  A recent trend is to place multiple processor cores on the same physical chip, resulting in a multicore processor.
•  Multicore chips are faster and consume less power than systems with multiple single-core chips.
•  When a processor accesses memory, it can spend a significant amount of time waiting for the data to become available (e.g. because of a cache miss). This situation is known as a memory stall.


•  A thread is a path of execution within a process.
•  Studies have found that a processor can spend up to 50 percent of its time waiting for data to become available from memory.


Multithreaded Multicore System


•  Placing multiple hardware threads per core is also a growing trend: the core takes advantage of a memory stall by making progress on another thread while the memory retrieval completes.
•  The UltraSPARC T3 CPU has sixteen cores per chip and eight hardware threads per core. From the perspective of the operating system, there appear to be 16 × 8 = 128 logical processors.


Multithreaded Multicore System

A multithreaded multicore processor requires two different levels of scheduling:
1.  At one level, the operating system makes the scheduling decisions as it chooses which software thread to run on each hardware thread (logical processor).
2.  At the second level, each core decides which of its hardware threads to run.


Assessment

A multiprocessor operating system should provide:
a.  a mechanism to split a task into concurrent subtasks
b.  optimization of system performance
c.  handling of structural or architectural changes
d.  all of the mentioned


Assessment

A symmetric multiprocessing architecture of a computer system shares:
a.  bus
b.  memory
c.  processors
d.  both a and b



Real-Time CPU Scheduling

•  Systems that must meet strict time-bound deadlines are called real-time systems.
   Ex: flight-control systems, real-time monitors.
•  Real-time scheduling presents obvious challenges.
•  Soft real-time systems – a small delay is acceptable; critical real-time processes are only given preference over noncritical processes.
   Ex: an online transaction system.
•  Hard real-time systems – a task must be serviced by its deadline; service after the deadline has expired is the same as no service at all.
   Ex: an aircraft control system.


Minimizing Latency

•  Events may arise either in software (as when a timer expires) or in hardware (as when a remote-controlled vehicle detects an obstruction).
•  When an event occurs, the system must respond to it and service it as quickly as possible.
•  Event latency is the amount of time that elapses from when an event occurs to when it is serviced.

[Figure: Event latency]

Minimizing Latency

•  Two types of latency affect the performance of real-time systems:
1.  Interrupt latency – the time from the arrival of an interrupt to the start of the routine that services it (the ISR).
2.  Dispatch latency – the time it takes the scheduler to take the current process off the CPU and switch to another process.


[Figure: Interrupt latency]

Minimizing Latency (Cont.)

•  The most effective technique for keeping dispatch latency low is to provide preemptive kernels, so that real-time tasks can access the CPU immediately.
•  The conflict phase of dispatch latency has two components:
1.  preemption of any process running in kernel mode;
2.  release by low-priority processes of resources needed by high-priority processes.

[Figure: Dispatch latency – the response interval from the event, through interrupt processing and the conflicts phase, to dispatch of the real-time process and its execution]

Priority-based Scheduling

•  For real-time scheduling, the scheduler must support preemptive, priority-based scheduling.
   –  This alone guarantees only soft real-time behavior.
•  For hard real-time, the scheduler must also provide the ability to meet deadlines.
•  Processes have new characteristics: periodic processes require the CPU at constant intervals.
   o  A periodic task has a fixed processing time t, a deadline d, and a period p, with 0 ≤ t ≤ d ≤ p.
   o  The rate of a periodic task is 1/p (see the example below).

[Figure: Periodic task – processing time t, deadline d, and period p]
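As a small illustration (the numbers are chosen here, not taken from the slides): a periodic task with processing time t = 20 ms, deadline d = 50 ms, and period p = 50 ms satisfies 0 ≤ t ≤ d ≤ p, has rate 1/p = 1/(50 ms) = 20 releases per second, and uses t/p = 20/50 = 40 percent of the CPU.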

Priority-based Scheduling (Cont.)

•  A process may have to announce its deadline requirements to the scheduler.
•  Using an admission-control algorithm, the scheduler then does one of two things:
1.  it admits the process, guaranteeing that the process will complete on time, or
2.  it rejects the request as impossible if it cannot guarantee that the task will be serviced by its deadline.


Rate Monotonic Scheduling

•  A preemptive scheduling algorithm with static priorities. It is optimal in the sense that if a set of processes cannot be scheduled by this algorithm, it cannot be scheduled by any other algorithm that assigns static priorities.
•  Each task is assigned a priority based on the inverse of its period:
   –  shorter period = higher priority;
   –  longer period = lower priority.
•  The idea is to assign a higher priority to tasks that require the CPU more often.


Rate Monotonic Scheduling


Example:
   Process   Exe. Time   Period
   P1        20          50
   P2        35          100

•  The deadline for each process requires that it complete its CPU burst by the start of its next period; the schedule repeats every LCM(50, 100) = 100 time units.
•  P1 is assigned a higher priority than P2, since P1's period is shorter.
•  The CPU utilization of a process Pi is the ratio of its burst to its period, ti / pi. For P1 this is 20/50 = 0.40 and for P2 it is 35/100 = 0.35, for a total CPU utilization of 75 percent.
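One possible rate-monotonic timeline for this example, consistent with the utilizations above:
   t = 0–20     P1 runs its first burst and completes before its deadline at 50
   t = 20–50    P2 runs (30 of its 35 units done)
   t = 50–70    P1's second period begins, so P1 preempts P2; P1 completes before its deadline at 100
   t = 70–75    P2 resumes and finishes, meeting its deadline at 100
   t = 75–100   the CPU is idle; both processes meet their deadlines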

Missed Deadlines with Rate Monotonic Scheduling


Example:
   Process   Exe. Time   Period
   P1        25          50
   P2        35          80

•  P1 is given the higher priority, as it has the shorter period.
•  The combined CPU utilization of the two processes is (25/50) + (35/80) = 0.94.
•  Nevertheless, P2 misses the deadline for completing its CPU burst at time 80.
•  The worst-case CPU utilization for which rate-monotonic scheduling of N processes is guaranteed to succeed is N(2^(1/N) − 1).
•  For more: https://www.youtube.com/watch?v=Cv_5aoKXc3g
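Checking the utilization bound for N = 2 processes: 2(2^(1/2) − 1) ≈ 2(1.414 − 1) ≈ 0.83. Since 0.94 > 0.83, rate-monotonic scheduling cannot guarantee these deadlines; indeed P1 occupies the CPU during 0–25 and 50–75, leaving P2 only the intervals 25–50 and 75–80, i.e. 30 of its 35 units by its deadline at time 80.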

Earliest Deadline First Scheduling (EDF)

•  Earliest-deadline-first (EDF) scheduling dynamically assigns priorities according to deadlines.
•  The earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
•  When a process becomes runnable, priorities may have to be adjusted to reflect its deadline.
•  Consider again the same processes that failed to meet their deadline requirements under rate-monotonic scheduling.


For more: https://www.youtube.com/watch?v=mk2ldyjKtBc


EDF Scheduling Example

Example:
   Process   Exe. Time   Period
   P1        25          50
   P2        35          80

•  Process P1 has the earliest deadline (50), so its initial priority is higher than that of process P2, whose deadline is 80.
•  P2 keeps running past time 50 (the start of P1's next period) because it now has a higher priority than P1: its next deadline (at time 80) is earlier than P1's (at time 100).

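A worked EDF timeline for these processes (same parameters as above):
   t = 0–25      P1 runs (deadline 50)
   t = 25–60     P2 runs; at t = 50 it keeps the CPU because its deadline (80) is earlier than P1's next deadline (100)
   t = 60–85     P1 runs its second burst and meets its deadline at 100
   t = 85–100    P2 runs part of its second burst (released at 80, deadline 160)
   t = 100–125   P1's third period begins with deadline 150 < 160, so P1 preempts P2 and completes
   t = 125–145   P2 resumes and finishes well before its deadline at 160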

EDF Scheduling

•  The appeal of EDF scheduling is that it is theoretically optimal: in theory it can schedule processes so that every deadline is met and CPU utilization reaches 100 percent.
•  In practice, however, this level of CPU utilization is impossible to achieve because of the cost of context switching between processes and of interrupt handling.


Proportional Share Scheduling

•  T shares of processor time are allocated among all processes in the system.
•  An application receives N shares, where N < T.
•  This ensures that the application receives N / T of the total processor time.

Example: T = 100 shares are to be divided among three processes A, B, and C.
•  A is assigned 50 shares, B is assigned 15 shares, and C is assigned 20 shares. This scheme ensures that A will have 50 percent of the total processor time, B will have 15 percent, and C will have 20 percent.
•  Proportional share scheduling works in conjunction with an admission-control policy to guarantee that an application receives its allocated shares of time.
•  A new request is admitted only if sufficient shares are available (see the sketch below).

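The admission-control check can be written down in a few lines; the following is a minimal sketch (the function and variable names are invented here), seeded with the 85 shares already held by A, B, and C in the example above.

    #include <stdbool.h>
    #include <stdio.h>

    #define TOTAL_SHARES 100           /* T */

    static int allocated = 85;         /* A (50) + B (15) + C (20) from the example */

    /* Admit a new client only if enough unallocated shares remain. */
    static bool admit(int requested) {
        if (allocated + requested > TOTAL_SHARES)
            return false;              /* would exceed T: reject the request */
        allocated += requested;
        return true;
    }

    int main(void) {
        printf("request 10 shares: %s\n", admit(10) ? "admitted" : "denied"); /* admitted (95 allocated) */
        printf("request 30 shares: %s\n", admit(30) ? "admitted" : "denied"); /* denied (would be 125) */
        return 0;
    }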

POSIX Real-Time Scheduling

•  POSIX – Portable Operating System Interface.
•  The POSIX API provides functions for managing real-time threads.
•  It defines two scheduling classes for real-time threads:
1.  SCHED_FIFO – threads are scheduled using an FCFS strategy with a FIFO queue. There is no time-slicing among threads of equal priority.
2.  SCHED_RR – similar to SCHED_FIFO, except that time-slicing occurs among threads of equal priority.
•  It defines two functions for getting and setting the scheduling policy (sketched below):
1.  pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)
2.  pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)

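A minimal sketch of the POSIX real-time scheduling API named above (compile with -pthread; on many systems actually switching to SCHED_FIFO requires elevated privileges, and error handling is abbreviated):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *runner(void *param) {
        /* The real-time work of the thread would go here. */
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_attr_t attr;
        int policy;

        pthread_attr_init(&attr);

        /* Get the current scheduling policy stored in the attribute object. */
        if (pthread_attr_getschedpolicy(&attr, &policy) != 0)
            fprintf(stderr, "unable to get policy\n");
        else
            printf("default policy: %s\n",
                   policy == SCHED_FIFO ? "SCHED_FIFO" :
                   policy == SCHED_RR   ? "SCHED_RR" : "SCHED_OTHER");

        /* Request the FIFO real-time class for threads created with attr. */
        if (pthread_attr_setschedpolicy(&attr, SCHED_FIFO) != 0)
            fprintf(stderr, "unable to set policy to SCHED_FIFO\n");

        pthread_create(&tid, &attr, runner, NULL);
        pthread_join(tid, NULL);
        return 0;
    }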

Source of Reference

•  Textbook •  https://www.youtube.com/watch?v=aKjDqOguxjA
