
Resource Sharing in Real-Time Uni/Multiprocessor Embedded Systems

Sara [email protected]

[Map of Sweden: Stockholm, Umeå, Göteborg, Malmö, Västerås, Eskilstuna]

About Me!

• BSc in Electrical Engineering from Tabriz University

• MSc in Intelligent Embedded Systems

• Mälardalen University, Västerås, Sweden

• Currently a PhD candidate (3rd year)

• Research topic: resource sharing in real-time multiprocessors

About MRTC!

• Professors: 29
• Senior researchers: 52
• PhD students: 76
• Research groups: 16

Reference: www.es.mdh.se

Outline

• Embedded systems

• Real-time systems

• Scheduling and timing analysis

• Resource sharing in uniprocessors

• Multiprocessors

• Open problems

Daily Computers

• Everyday devices contain special-purpose computers

• The computers perform the device's functionality

Embedded Systems

A special-purpose computer that performs a few dedicated functions, usually with very specific requirements.

Real-Time Systems

Embedded systems are:
– Specialized
– Efficient

But in many cases this is not enough: the system must also react to its environment at the right time instant.

Timing Requirements

[Diagram: systems with timing requirements are real-time systems; systems without are non-real-time systems]

Real-Time Systems

The system does the right thing & it is on time.

Example

Airbag example (a very classical example!)

[Timeline: a collision occurs; the airbag must fire neither too early nor too late]

Real-Time ≠ Fast
Real-Time = Predictable

Hard vs. Soft Real-Time

Each program has a deadline which should not be missed.

Hard Real-Time
– Missing a deadline causes a catastrophe
– E.g., automotive, airplanes, industrial control

Soft Real-Time
– Some deadlines can be missed
– E.g., TV, video streaming

Real-Time Tasks

• A program is written as a set of tasks

• On a single-core processor, two tasks cannot execute in parallel
– Some tasks are preempted so that all tasks can execute on time
– The scheduler is responsible for scheduling tasks on the processor

[Timeline: Task A performing repeated sensor readings]

Real-Time Tasks

Periodic Tasks: repeat at periodic intervals

e.g., control loops, sensor readings, etc.

[Timeline: Execute, Sleep, Execute, Sleep, Execute; consecutive activations are separated by the period 𝑇𝑖]

Real-Time Tasks

Aperiodic Tasks: may arrive at any point in time

e.g., alarm tasks, emergency buttons, etc.

[Timeline: an interrupt event occurs (e.g., a button is pushed) and the task executes once; it may be triggered again at some point in time]

Real-Time Tasks

Sporadic Tasks: similar to aperiodic tasks, but the minimum time until the task's next activation is known

e.g., a task handling keyboard input: the minimum time between two key presses is known

[Timeline: Execute, then at least the minimum inter-arrival time 𝑇𝑖 before the next Execute; the exact arrival is unknown]

Scheduling

The process of deciding the execution order of real-time tasks, based on the priorities of the tasks.

There are different mechanisms to do that.

[Figure: two schedules of the same task set; under one, Task A misses its deadline, while under the other, both deadlines are met]

Scheduling algorithms

• Fixed Priority

– RM (Rate Monotonic): smaller period → higher priority

– DM (Deadline Monotonic): smaller deadline → higher priority

• Dynamic Priority

– EDF (Earliest Deadline First): the task with the earliest absolute deadline runs first
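To make the fixed-priority rule concrete, here is a minimal sketch in C of rate-monotonic priority assignment; the task set and its periods are hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical task set: periods in milliseconds. */
typedef struct { const char *name; int period; } task_t;

static int by_period(const void *a, const void *b) {
    return ((const task_t *)a)->period - ((const task_t *)b)->period;
}

int main(void) {
    task_t tasks[] = { {"ctrl", 10}, {"log", 100}, {"sense", 5} };
    int n = sizeof tasks / sizeof tasks[0];

    /* Rate Monotonic: smaller period -> higher priority. */
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (int i = 0; i < n; i++)
        printf("priority %d: %s (T = %d ms)\n", i + 1, tasks[i].name, tasks[i].period);
    return 0;
}
```

DM is the same idea with relative deadlines in place of periods; EDF instead re-evaluates priorities at run-time from absolute deadlines.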

Task Model

[Timeline of task 𝜏𝑖: arrival 𝑎𝑖, deadline 𝐷𝑖, finishing time 𝑓𝑖, period 𝑇𝑖]

• Arrival time (release time): 𝑎𝑖 = 1

• Execution time: 𝐶𝑖 = 3

• Finishing time: 𝑓𝑖 =7

• Deadline: 𝐷𝑖 = 7

• Period: 𝑇𝑖 = 8

• Response time: 𝑅𝑖 = 𝑓𝑖 − 𝑎𝑖 = 6

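These parameters map directly onto a small data type. A minimal sketch in C, using the example values above:

```c
#include <stdio.h>

/* Task parameters from the worked example (abstract time units). */
typedef struct {
    int a;  /* arrival (release) time, a_i = 1 */
    int C;  /* execution time, C_i = 3 */
    int f;  /* finishing time, f_i = 7 */
    int D;  /* deadline, D_i = 7 */
    int T;  /* period, T_i = 8 */
} task_t;

int main(void) {
    task_t t = { .a = 1, .C = 3, .f = 7, .D = 7, .T = 8 };
    int R = t.f - t.a;  /* response time R_i = 6 */
    printf("R = %d: %s\n", R, R <= t.D ? "deadline met" : "deadline missed");
    return 0;
}
```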

Response Time Analysis

[Timeline: task A is repeatedly preempted by a higher-priority task H; the interval from A's ready time to its finishing time is A's response time]

$$WR_A = C_A + \sum_{\forall j \in hp(A)} \left\lceil \frac{WR_A}{T_j} \right\rceil \times C_j$$

$WR_A$ is the worst-case response time of task A; the summation is the interference from higher-priority tasks.
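The recurrence is solved by fixed-point iteration: start from $WR_A = C_A$ and re-evaluate the right-hand side until the value stops changing or exceeds the deadline. A minimal sketch in C with a hypothetical task set; the `B` parameter anticipates the blocking term introduced later and is 0 for independent tasks:

```c
#include <stdio.h>

/* Worst-case response time of task `i` under fixed-priority scheduling.
 * C[] and T[] hold execution times and periods; tasks 0..i-1 have higher
 * priority than task i. B is a blocking term (0 if tasks are independent).
 * Returns -1 if the iteration exceeds the deadline D. */
static int rta(int i, const int C[], const int T[], int B, int D) {
    int wr = C[i] + B, prev = -1;
    while (wr != prev && wr <= D) {
        prev = wr;
        wr = C[i] + B;
        for (int j = 0; j < i; j++)                  /* interference */
            wr += ((prev + T[j] - 1) / T[j]) * C[j]; /* ceil(prev/Tj)*Cj */
    }
    return wr <= D ? wr : -1;
}

int main(void) {
    /* Hypothetical task set in rate-monotonic priority order. */
    int C[] = {1, 2, 3}, T[] = {4, 6, 12};
    printf("WR = %d\n", rta(2, C, T, /*B=*/0, /*D=*/12));  /* prints 10 */
    return 0;
}
```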

Resource Sharing

• Some tasks are independent

• Some tasks are aware of each other
– E.g., using a shared memory
– E.g., two tasks writing to the same buffer

Resource Sharing

• Tasks may use hardware/software components such as a database, hard drive, sensor, etc.

PROBLEM

critical section = the part of a task's execution that accesses a resource

• In real-time systems, semaphore-based locking (synchronization) techniques handle mutually exclusive access to resources among tasks
– Every task that wants to use a resource first has to lock the resource, use it, and then unlock (release) it
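A minimal sketch of this lock/use/unlock pattern, using a POSIX mutex as the lock; the shared buffer and the task body are hypothetical:

```c
#include <pthread.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_buffer[64];
static int head;

/* Hypothetical task body: the code between lock and unlock
 * is the task's critical section. */
void *writer_task(void *arg) {
    int value = *(int *)arg;

    pthread_mutex_lock(&buf_lock);    /* lock the resource        */
    shared_buffer[head] = value;      /* use it (critical section) */
    head = (head + 1) % 64;
    pthread_mutex_unlock(&buf_lock);  /* unlock (release) it       */
    return NULL;
}
```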

Resource Sharing

[Figure: a task's execution with its critical section highlighted]

Resource Sharing

• Tasks may experience delays due to resource sharing
– E.g., a task that needs a resource has to wait for it to be released by other tasks

[Figure: a task BLOCKS while another task is inside its critical section]

Resource Sharing

• Blocking can endanger system correctness
– Priority inversion: a high-priority task (in this example, Task 3) is forced to wait for a lower-priority task (in this example, Task 1) for an unbounded amount of time

http://www.idt.mdh.se/kurser/CDT315/index.php?choice=contents

[Figure: priority-inversion timeline with Task 1 (low priority), Task 2 (middle priority), and Task 3 (high priority). Task 1 locks the resource and enters its critical section. Task 3 arrives, preempts Task 1, requests the same resource, and is blocked. Task 2 then arrives and preempts Task 1; its normal execution, which can be considerably long compared to critical sections, delays Task 1 further. Only after Task 2 finishes does Task 1 continue and release the resource; by then the high-priority task has already missed its deadline.]

Resource Sharing

• Blocking can endanger system correctness
– Priority inversion: a high-priority task is forced to wait for a lower-priority task for an unbounded amount of time

• Mars Pathfinder
– Landed on July 4, 1997

– Pathfinder experienced repeated resets after it started gathering meteorological data.

– The resets were caused by timing overruns when using a shared communication bus: a classic case of the priority inversion problem.

PIP

• Priority Inheritance Protocol (PIP)
– The high-priority task can no longer be delayed by the middle-priority task's normal execution

http://www.idt.mdh.se/kurser/CDT315/index.php?choice=contents

[Figure: the same scenario under PIP. While Task 1 (low priority) holds the resource and Task 3 (high priority) blocks on it, Task 1 inherits the high priority (prio. = Low → High → Low). The middle-priority task that arrives meanwhile therefore cannot preempt Task 1. Once Task 1 releases the resource, it gets back its own priority, and the high-priority task meets its deadline.]
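POSIX exposes priority inheritance directly as a mutex attribute. A minimal sketch, with error handling omitted; `PTHREAD_PRIO_INHERIT` is part of the POSIX realtime-threads option, so support depends on the platform:

```c
#include <pthread.h>

pthread_mutex_t res_lock;

/* Create a mutex whose holder inherits the priority of the
 * highest-priority task blocked on it (PIP behavior). */
void init_pip_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&res_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}
```

Using `PTHREAD_PRIO_PROTECT` together with `pthread_mutexattr_setprioceiling` instead gives immediate priority-ceiling behavior, in the spirit of the ceiling protocols on the next slide.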

Synchronization Protocols

• PIP: bounds priority inversion

• PCP: additionally prevents deadlock and chain blocking

• IPIP, IPCP, SRP: blocking occurs only at the beginning of a task's execution


• By enabling resource sharing, a blocking term is added to the worst-case response time of a task

Response Time Analysis

[Timeline: as before, task A is preempted by the higher-priority task H, and is now additionally blocked by a lower-priority task L between its ready time and its finishing time]

$$WR_A = C_A + B_A + \sum_{\forall j \in hp(A)} \left\lceil \frac{WR_A}{T_j} \right\rceil \times C_j$$

$B_A$: blocking incurred to task A
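In the RTA sketch shown earlier, this blocking term is simply a non-zero `B` argument. A usage sketch with the same hypothetical task set:

```c
/* Same task set as before, now with 2 time units of blocking. */
int wr = rta(2, C, T, /*B=*/2, /*D=*/12);  /* returns 12, or -1 if D is exceeded */
```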

Power Wall Problem

[Patterson & Hennessy]


Multiprocessors

• Integration of multiple processors on a chip

• Multiprocessor platforms have become popular in industry
– Power consumption
– Performance

• Challenges when migrating to multiprocessor technology
– Immature scheduling and synchronization techniques
– Oversimplification

Multiprocessor Scheduling

• Partitioned scheduling
– Tasks are statically assigned to processors at design time
– Each processor has its own scheduler and ready queue
– Task migration among processors is not allowed at run-time

[Figure: per-processor schedulers, each with its own local ready queue]
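The design-time assignment is typically a bin-packing heuristic. A minimal first-fit sketch in C over task utilizations; the capacity bound of 1.0 per processor is a simplification, and a real test would use a schedulability bound or response-time analysis:

```c
#include <stdio.h>

#define M 3  /* number of processors */

/* First-fit partitioning by utilization (u = C/T).
 * Returns the processor index, or -1 if the task fits nowhere. */
static int first_fit(double u, double load[M]) {
    for (int p = 0; p < M; p++) {
        if (load[p] + u <= 1.0) {  /* simplified capacity test */
            load[p] += u;
            return p;
        }
    }
    return -1;  /* candidate for splitting under semi-partitioning */
}

int main(void) {
    double load[M] = {0};
    double util[] = {0.6, 0.5, 0.4, 0.4, 0.3};  /* hypothetical tasks */
    for (int i = 0; i < 5; i++)
        printf("task %d -> processor %d\n", i, first_fit(util[i], load));
    return 0;
}
```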

Multiprocessor Scheduling

• Global scheduling
– Only one scheduler and one ready queue
– The scheduler assigns tasks to processors at run-time
– Task migration among processors is allowed

[Figure: a single global scheduler and global ready queue serving all processors]

Multiprocessor Scheduling

• Hybrid (semi-partitioned) scheduling
– A combination of partitioned and global scheduling
• Most tasks are statically assigned to processors at design time
• A few tasks can migrate among processors at run-time
– Benefits from the advantages of both approaches

[Figure: per-processor schedulers and local ready queues over task partitions; most tasks are partitioned, a few migrate]

Resource Sharing

• A more complex problem on multiprocessors
– Remote blocking is added besides local blocking

[Figure: tasks L, M, and H share a resource across processors P1 and P2; while a task on one processor holds the resource, the high-priority task on the other processor is blocked remotely and misses its deadline]

Response Time Analysis

$$WR_i = C_i + B_i + \sum_{\forall j \in hp(i)} \left\lceil \frac{WR_i}{T_j} \right\rceil \times C_j$$

• Two types of blocking on multiprocessors:
– Local blocking ($B_i^L$)
– Remote blocking ($B_i^R$)

Synchronization Protocol

• Various synchronization protocols (and the scheduling schemes they target):
– MSRP: P-EDF
– M-BWI: G-EDF
– FMLP (short resources): P/G-EDF
– MrsP: P-FP
– MPCP: P-RM
– FMLP (long resources): P/G-EDF
– OMLP: P/G-EDF, P-FP
– MSOS: P-FP

Synchronization Protocol

• Queue type: the order in which waiting tasks get access to the resource

– FIFO, LIFO, priority-based, hybrid

• Task behavior while waiting

– Spin-based, suspension-based

• Task priority changes

– Inherited, boosted
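As one point in this design space, a FIFO queue with spin-based waiting (the combination used by, e.g., MSRP) can be sketched as a C11 ticket lock:

```c
#include <stdatomic.h>

/* FIFO (ticket) spin lock: waiters are served in arrival order,
 * the queue discipline of spin-based protocols such as MSRP. */
typedef struct {
    atomic_uint next;     /* next ticket to hand out      */
    atomic_uint serving;  /* ticket currently being served */
} ticket_lock_t;          /* zero-initialize: first ticket is 0 */

void lock(ticket_lock_t *l) {
    unsigned me = atomic_fetch_add(&l->next, 1);  /* take a ticket */
    while (atomic_load(&l->serving) != me)
        ;                                         /* spin (busy-wait) in FIFO order */
}

void unlock(ticket_lock_t *l) {
    atomic_fetch_add(&l->serving, 1);             /* serve the next waiter */
}
```

Under MSRP the spinning task is additionally made non-preemptable; that detail is omitted from this sketch.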

Synchronization Protocol

• These protocols were proposed for global and partitioned scheduling

• No protocol existed for hybrid (semi-partitioned) scheduling

Our Contributions


Semi-Partitioned Scheduling

• Partitioned tasks: tasks that are statically assigned to processors and execute only on those processors, i.e., they fit (utilization-wise) on a processor during partitioning (𝜏1, …, 𝜏8)

• Migrating tasks: task(s) that do not fit on any single processor (𝜏9)

[Figure: tasks 𝜏1–𝜏8 partitioned onto processors 𝑃1–𝑃3; 𝜏9 does not fit anywhere and is left unassigned]

Semi-Partitioned Scheduling

• Migrating tasks are split among processors that have capacity remaining after partitioning (𝜏9 is split among processors 1 to 3)

[Figure: 𝜏9 divided into subtasks placed in the remaining capacity of 𝑃1, 𝑃2, and 𝑃3]
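A minimal sketch of this splitting step, continuing the first-fit example above; the greedy strategy and the exact-fill capacity bound are simplifications of the published algorithms:

```c
/* Split a migrating task's utilization u across processors,
 * greedily filling each processor's leftover capacity.
 * load[] is the per-processor utilization after partitioning. */
void split_task(double u, double load[], int m) {
    for (int p = 0; p < m && u > 0.0; p++) {
        double spare = 1.0 - load[p];  /* simplified capacity bound */
        double share = u < spare ? u : spare;
        load[p] += share;              /* subtask of size `share` on p */
        u -= share;
    }
    /* if u > 0 here, the task set is not schedulable this way */
}
```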

Resource Sharing

• Execution times of tasks vary

• Critical sections may therefore occur at any point during task execution

• Under semi-partitioned scheduling, a critical section may thus fall in any part of a split task, and hence on any processor the task is split over

[Figure: three cases for a split task on 𝑃1–𝑃3. Depending on actual execution times, critical sections cs1 and cs2 may fall on different processors (Case 1, Case 2, Case 3).]

Resource Sharing

• Therefore, in semi-partitioned scheduling, existing synchronization protocols cannot be used directly


Centralized Solution

• Critical sections migrate to a marked processor

• Advantages:
– Centralized resource access
– Remote blocking is confined to the marked processor

• Disadvantages:
– Extra migration overhead

[Figure: the split task's subtasks on 𝑃1 and 𝑃2 both access resource Rs on the marked processor 𝑃2, incurring migration overhead; non-split tasks remain on their own processors]

Decentralized Solution

• Critical sections are served where they occur

• Advantages:
– Decreased migration overhead

• Disadvantages:
– Blocking is introduced to local tasks
– Remote blocking increases

[Figure: each subtask of the split task accesses resource Rs locally on its own processor (𝑃1 or 𝑃2), alongside the non-split tasks]

Analysis

• Local blocking due to local resources

$$B_{i,1} = \min\Big\{\, n_i^G + 1,\ \sum_{\rho_j < \rho_i} \Big( \Big\lceil \frac{T_i}{T_j} \Big\rceil + 1 \Big)\, n_j^L(\tau_i) \Big\} \times \max_{\substack{\rho_j < \rho_i,\ \tau_i, \tau_j \in P_k \\ R_l \in R_{P_k}^L,\ \rho_i \le \mathrm{ceil}(R_l)}} \{\, Cs_{j,l} \,\}$$

where $\mathrm{ceil}(R_l) = \max\{\rho_h \mid \tau_h \in \tau_{l,k}\}$ is the priority ceiling of the local resource $R_l$.

Analysis

• Local blocking due to global resources

$$B_{i,2} = \sum_{\substack{\forall \rho_j < \rho_i \\ \tau_i, \tau_j \in P_k}} \min\Big\{\, n_i^G + 1,\ \Big( \Big\lceil \frac{T_i}{T_j} \Big\rceil + 1 \Big) n_j^G \Big\} \times \max_{R_q \in R_{P_k}^G} \{\, Cs_{j,q} \,\}$$

Analysis

• Remote blocking due to lower priority tasks

$$B_{i,3} = \sum_{\substack{\forall R_q \in R_{P_k}^G \\ \tau_i \in \tau_{q,k}}} n_{i,q}^G \times \max_{\substack{\forall \rho_j < \rho_i \\ \tau_j \in \tau_{q,r},\ k \ne r}} \Big\{\, Cs_{j,q} + \rho_{h,j}(R_q') \max_{\substack{\tau_t \in P_r,\ \rho_t > \rho_j \\ R_s \in R_r^G,\ s \ne q}} \{ Cs_{t,s} \} \Big\}$$

Analysis

• Remote blocking due to higher priority tasks

$$B_{i,4} = \sum_{\substack{\forall R_q \in R_{P_k}^G,\ \tau_i \in \tau_{q,k} \\ \forall \rho_j > \rho_i,\ \tau_j \in \tau_{q,r},\ k \ne r}} n_{j,q}^G \Big( \Big\lceil \frac{T_i}{T_j} \Big\rceil + 1 \Big) \Big( Cs_{j,q} + \rho_{h,j}(R_q') \max_{\substack{\tau_t \in P_r,\ \rho_t > \rho_j \\ R_s \in R_r^G,\ s \ne q}} \{ Cs_{t,s} \} \Big)$$

Evaluation results

[Figure: schedulability vs. critical section length (5–205 µs), for (a) overhead = 0 µs and (b) overhead = 140 µs. MLPS is the centralized solution; NMLPS is the decentralized solution.]

Details in Paper

“Resource Sharing under Multiprocessor Semi-Partitioned Scheduling.” Sara Afshar, Farhang Nemati, Thomas Nolte. In Proceedings of the 18th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), August 2012.

Open Problems

• Improving synchronization techniques

– Improving the analysis

– Improving the protocols

• Blocking aware partitioning

• Compositional scheduling

THANK YOU!
