
2008 IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2008), Hamburg, Germany, September 15-18, 2008



1-4244-1506-3/08/$25.00 ©2008 IEEE

Scheduling of Semi-Independent Real-Time Components: Overrun Methods and Resource Holding Times∗

Moris Behnam, Insik Shin, Thomas Nolte, Mikael Nolin
Mälardalen Real-Time Research Centre (MRTC)
Mälardalen University, P.O. Box 883, SE-721 23 Västerås, Sweden
email: [email protected]

Abstract

The Hierarchical Scheduling Framework (HSF) has been introduced as a design-time framework enabling compositional schedulability analysis of embedded software systems with real-time properties. In this paper a system consists of a number of semi-independent components called subsystems. Subsystems are developed independently and later integrated to form a system. To support this design process, our proposed methods allow non-intrusive configuration and tuning of subsystem timing behaviour via subsystem interfaces for selecting scheduling parameters.

This paper considers two methods to handle overruns due to resource sharing between subsystems in the HSF. We present the scheduling algorithms for overruns and their associated schedulability analysis, together with analysis that shows under what circumstances one or the other overrun method is preferred. Furthermore, we show how to calculate resource holding times within our framework.

1 Introduction

The Hierarchical Scheduling Framework (HSF) has been introduced to support hierarchical resource sharing among applications under different scheduling services. The hierarchical scheduling framework can generally be represented as a tree of nodes, where each node represents an application with its own scheduler for scheduling internal workloads (e.g., threads), and resources are allocated from a parent node to its children nodes.

The HSF provides means for decomposing a complex system into well-defined parts. In essence, the HSF provides a mechanism for timing-predictable composition of coarse-grained components or subsystems. In the HSF, a subsystem provides an introspective interface that specifies the timing properties of the subsystem precisely [24].

∗The work in this paper is supported by the Swedish Foundation for Strategic Research (SSF), via the research programme PROGRESS.

This means that subsystems can be independently developed and tested, and later assembled without introducing unwanted temporal behaviour. Also, the HSF facilitates reusability of subsystems in timing-critical and resource-constrained environments, since the well-defined interfaces characterize their computational requirements.

Earlier efforts have been made in supporting compositional subsystem integration in HSFs, preserving the independently analyzed schedulability of individual subsystems. One of the common assumptions shared by earlier studies is that subsystems are independent. This paper relaxes this assumption by addressing the challenge of enabling efficient compositional integration of independently developed semi-independent subsystems interacting through sharing of logical resources. Here, semi-independence means that subsystems are allowed to synchronize by the sharing of logical resources.

To enable sharing of logical resources in HSFs, Davis and Burns proposed the overrun mechanism that allows a subsystem to overrun (its budget) to complete the execution of the critical section [9]. The study provided schedulability analysis for this mechanism; however, it does not allow independent analysis of individual subsystems. Hence, this schedulability analysis does not support composability of subsystems. For Davis and Burns' overrun mechanism, we have presented schedulability analysis supporting composability in [7] and, in addition, we also presented another overrun mechanism (the enhanced overrun mechanism) that potentially increases schedulability within a subsystem by providing CPU allocations more efficiently.

The main contributions of this paper are twofold. The first contribution is the comparison analysis showing under what circumstances one [9] or the other overrun method [7] is preferred. The second contribution is to show how to calculate bounds on the resource holding time for the periodic virtual processor model used throughout the paper. The resource holding time represents the time during which a subsystem may lock a shared resource, and it plays a key role in defining the interface of a subsystem in this paper.



The outline of the paper is as follows: Section 2 presents related work, while Section 3 presents our system model. In Section 4 we present the schedulability analysis for the model. Section 5 presents the two overrun mechanisms and Section 6 presents their analytical comparison. In Section 7 we show how to calculate the resource holding times, and finally, Section 8 concludes.

2 Related work

This section presents related work in the areas of HSFs as well as resource sharing protocols.

2.1 Hierarchical scheduling

The HSF for real-time systems, originating in open systems [10] in the late 1990s, has been receiving increasing research attention. Since Deng and Liu [10] introduced a two-level HSF, its schedulability has been analyzed under fixed-priority global scheduling [15] and under EDF-based global scheduling [16, 18]. Mok et al. [20] proposed the bounded-delay resource model so as to achieve a clean separation in a multi-level HSF, and schedulability analysis techniques [12, 25] have been introduced for this resource model. In addition, Shin and Lee [24, 26] introduced the periodic resource model (to characterize periodic resource allocation behaviour), and many studies have been proposed on schedulability analysis with this resource model under fixed-priority scheduling [22, 17, 8] and under EDF scheduling [24]. More recently, Easwaran et al. [11] introduced the Explicit Deadline Periodic (EDP) resource model. However, a common assumption shared by all the studies in this paragraph is that tasks are required to be independent.

2.2 Resource sharing

In many real systems, tasks are semi-independent, interacting with each other through mutually exclusive resource sharing. Many protocols have been introduced to address the priority inversion problem for semi-independent tasks, including the Priority Inheritance Protocol (PIP) [23], the Priority Ceiling Protocol (PCP) [21], and the Stack Resource Policy (SRP) [3]. Recently, Fisher et al. addressed the problem of minimizing the resource holding time [14] under SRP. There have been studies on extending SRP for HSFs, for sharing of logical resources within a subsystem [2, 15] and across subsystems [9, 6, 13]. Davis and Burns [9] proposed the Hierarchical Stack Resource Policy (HSRP), supporting sharing of logical resources on the basis of an overrun mechanism. Behnam et al. [6] proposed the Subsystem Integration and Resource Allocation Policy (SIRAP) protocol that supports subsystem integration in the presence of shared logical resources, on the basis of skipping. Fisher et al. [13] proposed the BROE server that extends the Constant Bandwidth Server (CBS) [1] in order to handle sharing of logical resources in an HSF. Lipari et al.

Figure 1. Two-level HSF with resource sharing.

proposed the BandWidth Inheritance protocol (BWI) [19], which extends the resource reservation framework to systems where tasks can share resources. The BWI approach is based on using the CBS algorithm together with a technique derived from the Priority Inheritance Protocol (PIP). In particular, BWI is suitable for systems where the execution time of a task inside a critical section cannot be evaluated.

3 System model and background

3.1 Resource sharing in the HSF

The Hierarchical Scheduling Framework (HSF) has been introduced to support CPU time sharing among applications (subsystems) under different scheduling policies. In this paper, we consider a two-level hierarchical scheduling framework which works as follows: a global (system-level) scheduler allocates CPU time to subsystems, and a local (subsystem-level) scheduler subsequently allocates CPU time to its internal tasks.

Having such an HSF also allows for sharing of logical resources among tasks in a mutually exclusive manner (see Figure 1). Specifically, tasks can share local logical resources within a subsystem as well as global logical resources across (in-between) subsystems. Note, however, that this paper focuses on mechanisms for sharing of global logical resources in an HSF, while local logical resources can easily be supported by traditional synchronization protocols such as SRP (see, e.g., [2, 9, 15]).

3.2 Virtual processor models

The notion of a real-time virtual processor (resource) model was first introduced by Mok et al. [20] to characterize the CPU allocations that a parent node provides to a child node in an HSF. The CPU supply of a virtual processor model refers to the amount of CPU allocation that the virtual processor model can provide. The supply bound function of a virtual processor model calculates its minimum possible CPU supply for any given time interval of length t.

The periodic virtual processor model Γ(P, Q) was proposed by Shin and Lee [24] to characterize periodic resource allocations, where P is a period (P > 0) and Q is

576

Page 3: [IEEE Factory Automation (ETFA 2008) - Hamburg, Germany (2008.09.15-2008.09.18)] 2008 IEEE International Conference on Emerging Technologies and Factory Automation - Scheduling of

Figure 2. The supply bound function of a periodic virtual processor model Γ(3, 2).

a periodic allocation time (0 < Q ≤ P). The capacity UΓ of a periodic virtual processor model Γ(P, Q) is defined as Q/P.

The supply bound function sbfΓ(t) of the periodic virtual processor model Γ(P, Q) was given in [24] to compute the minimum resource supply during an interval of length t. Further, in this paper, we rephrase it with an additional parameter BD, where BD represents the longest possible blackout duration during which the periodic virtual processor model may provide no resource allocation at all.

sbfΓ(t, BD) = { t − (k − 1)(P − Q) − BD   if t ∈ W(k),
              { (k − 1)Q                  otherwise,     (1)

where k = max(⌈(t + (P − Q) − BD)/P⌉, 1) and W(k) denotes the interval [(k − 1)P + BD, (k − 1)P + BD + Q]. Here, we first note that the original sbfΓ(t) in [24] is equivalent to sbfΓ(t, BD) when BD = 2(P − Q). We also note that an interval of length t may not begin synchronously with the beginning of period P; as shown in Figure 2, the interval of length t can start in the middle of the period of a periodic virtual processor model Γ(P, Q). Figure 2 illustrates the supply bound function sbfΓ(t) of the periodic virtual processor model.
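As a concrete illustration, Eq. (1) can be sketched in Python as follows (a minimal sketch; the function name and argument conventions are ours, not from the paper):

```python
import math

def sbf(t, P, Q, BD):
    # Minimum CPU supply of the periodic virtual processor model
    # Gamma(P, Q) over any interval of length t, given the longest
    # possible blackout duration BD (Eq. (1)).
    k = max(math.ceil((t + (P - Q) - BD) / P), 1)
    # W(k) = [(k-1)P + BD, (k-1)P + BD + Q]
    w_start = (k - 1) * P + BD
    if w_start <= t <= w_start + Q:
        return t - (k - 1) * (P - Q) - BD
    return (k - 1) * Q
```

For the model Γ(3, 2) of Figure 2 with the original blackout duration BD = 2(P − Q) = 2, this gives sbf(2) = 0, sbf(4) = 2 and sbf(6) = 3, matching the staircase shape of the figure.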

3.3 Stack resource policy (SRP)

To be able to use SRP [3] in the HSF, its associated terms are extended as follows:

• Preemption level. Each task τi has a preemption level equal to πi = 1/Di, where Di is the relative deadline of the task. Similarly, each subsystem Ss has an associated preemption level equal to Πs = 1/Ps, where Ps is the subsystem's period.

• Resource ceiling. Each globally shared resource Rj is associated with two types of resource ceilings: an internal resource ceiling for local scheduling, rcj = max{πi | τi accesses Rj}, and an external resource ceiling for global scheduling.

• System/subsystem ceilings. System/subsystem ceilings are dynamic parameters that change during runtime. The system/subsystem ceiling is equal to the highest external/internal resource ceiling of the resources currently locked in the system/subsystem.

Following the rules of SRP, a job Ji generated by a task τi can preempt the currently executing job Jk within a subsystem only if Ji has a priority higher than that of job Jk and, at the same time, the preemption level of τi is greater than the current subsystem ceiling. Similar reasoning applies to subsystems from a global scheduling point of view.
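The preemption rule above can be stated as a simple predicate (a sketch with hypothetical names of ours; higher numeric values stand for higher priority):

```python
def can_preempt(prio_new, prio_curr, preemption_level_new, subsystem_ceiling):
    # Under SRP, a job may preempt the currently executing job within a
    # subsystem only if it has a higher priority AND its task's preemption
    # level exceeds the current subsystem ceiling.
    return prio_new > prio_curr and preemption_level_new > subsystem_ceiling
```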

3.4 System model

In this paper a periodic task model τi(Ti, Ci, Di, {ci,j}) is considered, where Ti, Ci and Di represent the task's period, worst-case execution time (WCET) and relative deadline, respectively, where Di ≤ Ti, and {ci,j} is the set of WCETs within critical sections associated with task τi. Each element ci,j in {ci,j} represents the WCET of task τi inside a critical section of the globally shared resource Rj.

Looking at a shared resource Rj, the resource holding time hj,i of a task τi is defined as the task's maximum execution time inside a critical section of Rj plus the interference (inside the critical section) of higher-priority tasks having preemption levels greater than the internal ceiling of the locked resource.

A subsystem Ss ∈ S, where S is the whole system of subsystems, is characterized by a task set Ts and a set of internal resource ceilings RCs inherited from the internal tasks using the globally shared resources. Each subsystem Ss is assumed to have an EDF local scheduler, and the subsystems are scheduled according to EDF at the global level. The collective resource requirements of each subsystem Ss are characterized by its interface (the subsystem interface), defined as (Ps, Qs, Hs), where Ps is the subsystem's period, Qs is its execution requirement budget, and Hs is the subsystem's maximum resource holding time, i.e., Hs = max{hj,i | τi ∈ Ts accesses Rj}.

4 Schedulability analysis

This section presents the schedulability analysis of the HSF, starting with the local schedulability analysis needed to calculate subsystem interfaces, followed by the global schedulability analysis. The analysis presented assumes that SRP is used for synchronization at the local (within-subsystem) level.

4.1 Local schedulability analysis

Let dbfEDF(i, t) denote the demand bound function of a task τi under EDF scheduling [4], i.e.,

dbfEDF(i, t) = ⌊(t + Ti − Di)/Ti⌋ · Ci.     (2)



The local schedulability condition under EDF scheduling is then (by combining the results of [5] and [24])

∀t > 0:  ∑_{i=1}^{n} dbfEDF(i, t) + b(t) ≤ sbf(t),     (3)

where b(t) is the blocking function [5] that represents the longest blocking time during which a job Ji with Di ≤ t may be blocked by a job Jk with Dk > t when both jobs access the same resource.
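Condition (3) can be checked numerically by evaluating both sides at the absolute deadlines, where the demand side changes (a sketch; the function names are ours, the blocking function b and the supply bound function sbf are passed in as callables, and the check horizon is a caller-supplied demonstration bound):

```python
import math

def dbf_edf(t, T, C, D):
    # Demand bound function of a periodic task under EDF (Eq. (2)).
    return math.floor((t + T - D) / T) * C

def local_schedulable(tasks, b, sbf, horizon):
    # tasks: list of (T, C, D) tuples; b(t): blocking function;
    # sbf(t): supply bound function; horizon: largest t to check.
    # dbf_edf only changes at absolute deadlines t = a*T + D,
    # so checking those points suffices up to the horizon.
    checkpoints = sorted({a * T + D for (T, C, D) in tasks
                          for a in range(int(horizon // T) + 1)
                          if a * T + D <= horizon})
    return all(sum(dbf_edf(t, T, C, D) for (T, C, D) in tasks) + b(t)
               <= sbf(t) for t in checkpoints)
```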

4.2 Subsystem interface calculation

Given a subsystem Ss, RCs, and Ps, let calculateBudget(Ss, Ps, RCs) denote a function that calculates the smallest subsystem budget Qs that satisfies Eq. (3). Hence, Qs = calculateBudget(Ss, Ps, RCs). The function is similar to the one presented in [24]; however, due to space limitations, its details are left out of this paper.
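Since sbf(t) grows monotonically with Q, the smallest budget satisfying Eq. (3) can be found by bisection on Q. The paper leaves the actual calculateBudget out, so the following is entirely our own sketch of one possible realization, assuming no blocking detail beyond a callable b and blackout duration BD = 2(Ps − Qs):

```python
import math

def calculate_budget(tasks, b, Ps, eps=1e-6):
    # tasks: list of (T, C, D); b(t): blocking function; Ps: period.
    def sbf(t, Q):
        BD = 2 * (Ps - Q)
        k = max(math.ceil((t + (Ps - Q) - BD) / Ps), 1)
        w = (k - 1) * Ps + BD
        return (t - (k - 1) * (Ps - Q) - BD) if w <= t <= w + Q else (k - 1) * Q

    def ok(Q):
        # Check Eq. (3) at absolute deadlines up to a demo-only horizon.
        horizon = 2 * max(T for (T, C, D) in tasks)
        pts = {a * T + D for (T, C, D) in tasks
               for a in range(int(horizon // T) + 1)}
        return all(sum(math.floor((t + T - D) / T) * C for (T, C, D) in tasks)
                   + b(t) <= sbf(t, Q) for t in pts)

    lo, hi = 0.0, float(Ps)
    if not ok(hi):
        return None  # unschedulable even with the full period as budget
    while hi - lo > eps:  # larger Q only increases supply: monotone test
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if ok(mid) else (mid, hi)
    return hi
```

For a single task (T = 10, C = 2, D = 10) with no blocking and Ps = 5, the search converges to a budget of about 2, i.e., the subsystem utilization 2/5 needed to cover the task utilization 2/10 under this supply model.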

4.3 Global schedulability analysis

Following Theorem 1 of [5], global schedulability analysis under EDF scheduling is given using the system load bound function LBF(t) as follows:

∀t > 0:  LBF(t) = B(t) + ∑_{Ss∈S} DBFs(t) ≤ t,     (4)

where

DBFs(t) = ⌊t/Ps⌋ · Qs,     (5)

and the system-level blocking function B(t) represents the maximum blocking time during which a subsystem Ss may be blocked by another subsystem Sk, where Ps ≤ t and Pk > t. B(t) is defined as

B(t) = max{Hk | Pk > t}.     (6)

5 Overrun mechanisms

This section explains two overrun mechanisms that can be used to handle budget expiry during a critical section in the HSF. Consider a global scheduler that schedules subsystems according to their periodic interfaces (Ps, Qs, Hs). The subsystem budget Qs is said to expire at the point when one or more internal (to the subsystem) tasks have executed a total of Qs time units within the subsystem period Ps. Once the budget has expired, no new task within the same subsystem can initiate its execution until the subsystem's budget is replenished. This replenishment takes place at the beginning of each subsystem period, where the budget is replenished to a value of Qs.

Budget expiration may cause a problem if it happens while a job Ji of a subsystem Ss is executing within a critical section of a globally shared resource Rj. If another

Figure 3. Basic and enhanced overrun mechanisms: (a) basic overrun, with BD = 2P − 2Q + θ; (b) enhanced overrun, with BD = 2P − 2Q.

Figure 4. Comparing sbf(t) with sbf◦(t).

job Jk, belonging to another subsystem, is waiting for the same resource Rj, this job must wait until Ss is replenished again so that Ji can continue to execute and finally release the lock on resource Rj. This waiting time exposed to Jk can be potentially very long, causing Jk to miss its deadline.

In this paper, we consider an overrun mechanism as follows: when the budget of subsystem Ss expires and Ss has a job Ji that is still locking a globally shared resource, job Ji continues its execution until it releases the locked resource. The extra time that Ji needs to execute after the budget of Ss expires is denoted as the overrun time θ. The maximum θ occurs when Ji locks the resource that gives the longest resource holding time just before the budget of Ss expires. Here, we consider the payback overrun mechanism [9]: whenever an overrun happens, the subsystem Ss pays back θ in its next execution instant, i.e., the subsystem budget Qs will be decreased by θ for the subsystem's execution instant following the overrun (note that only the instant following the overrun is affected). Hereinafter, we call this payback overrun mechanism basic overrun.
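The payback rule can be stated compactly: each period starts with the nominal budget Qs minus the overrun of the immediately preceding period. A sketch (function name ours):

```python
def replenished_budgets(Qs, overruns):
    # overruns[k] is the overrun time theta incurred in period k; the
    # payback rule charges it against period k+1 only.
    budgets = [Qs]  # the first period starts with the full budget
    for theta in overruns[:-1]:
        budgets.append(Qs - theta)
    return budgets
```

For example, with Qs = 10 and overruns of 0, 4, 0, 2 time units in consecutive periods, the replenished budgets are 10, 10, 6, 10: only the period directly after the 4-unit overrun is reduced.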

5.1 Basic overrun

Davis et al. [9] presented schedulability analysis for basic overrun; however, it is not suitable for open environments [10] as it requires detailed information about all tasks in the system in order to determine global schedulability. This section discusses how to extend the existing schedulability analysis for basic overrun, making it suitable for open environments.



5.1.1 Independent analysis with basic overrun

The supply bound function in [24] was developed under the assumption that the greatest blackout duration is 2(P − Q). Basic overrun cannot employ this existing supply bound function for schedulability analysis because its greatest blackout duration (BD) is 2(P − Q) + H (as shown in Figure 3a). Taking this into account, a modified supply bound function sbf◦Γ(t) that can be used with basic overrun (using Eq. (1)) is

sbf◦Γ(t) = sbfΓ(t, BD◦), where BD◦ = 2(P − Q) + H.     (7)

The existing schedulability condition of Eq. (3) can then be extended by substituting sbfΓ(t) with sbf◦Γ(t).

5.1.2 Global analysis with basic overrun

We first discuss how to extend the demand bound function of a subsystem with the basic overrun mechanism. Looking at basic overrun with payback in a subsystem Ss, the maximum contribution to DBFs(t) is Hs. When Ss overruns with its maximum, which is Hs, the subsystem's resource demand within the subsystem period Ps will be increased to Qs + Hs. Following this, the budget of the next period will be decreased to Qs − Hs due to the payback mechanism. Then, suppose that the subsystem overruns again. Now, during the next subsystem period, the subsystem's resource demand will be Qs − Hs + Hs = Qs. Here, one can easily see that the subsystem's resource demand will be at most kQs + Hs during k subsystem periods. Hence, the demand bound function DBF◦s(t) of a subsystem Ss with the basic overrun mechanism is

DBF◦s(t) = DBFs(t) + Os(t),     (8)

where

Os(t) = { Hs if t ≥ Ps, 0 otherwise.     (9)

The schedulability condition of Eq. (4) can then be extended by substituting DBFs(t) with DBF◦s(t).
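Eqs. (5), (8) and (9) translate directly into code (a minimal sketch; the function name is ours):

```python
import math

def dbf_basic(t, Ps, Qs, Hs):
    # Demand bound of subsystem Ss under basic overrun with payback:
    # floor(t/Ps)*Qs (Eq. (5)) plus a one-off Hs once t reaches Ps (Eq. (9)).
    return math.floor(t / Ps) * Qs + (Hs if t >= Ps else 0)
```

For instance, with Ps = 50, Qs = 10 and Hs = 4, the demand is 0 just before t = 50, then jumps to 14 at t = 50 and 24 at t = 100: each full period adds Qs, and the single overrun term Hs appears once.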

5.2 Enhanced overrun

As seen in Section 5.1, the basic overrun mechanism works with a modified supply bound function sbf◦(t) that is less efficient in terms of CPU resource usage compared with the original sbf(t), as illustrated in Figure 4. We now propose an enhanced overrun mechanism that makes it possible to use sbf(t) together with overrun, improving the efficiency of CPU resource utilization.

The enhanced overrun mechanism is based on imposing an offset (delaying the budget replenishment of the subsystem) equal to the amount of overrun θs on the execution instant that follows a subsystem overrun. As shown in Figure 3b, the execution of the subsystem will be delayed by θs after a new period that follows an overrun, even if that subsystem has the highest priority at that time. By this, the maximum BD is decreased to 2(P − Q), compared with basic overrun as shown in Figure 3a, and therefore it is possible to use the same supply bound function presented in Section 3.2. An important feature of the enhanced overrun mechanism is that it moves the effect of overrun from the local to the global schedulability analysis, so subsystem development does not depend on whether there is an overrun mechanism or not. This feature is very important in an open environment. We can then use the existing local EDF schedulability condition of Eq. (3) without any modification.

5.2.1 Global analysis with enhanced overrun

The effect of overrun is now moved to the global schedulability analysis in the enhanced overrun mechanism. Here, we present a demand bound function DBF∗s(t) of a subsystem Ss that upper-bounds the demand requested by Ss under the enhanced overrun mechanism. DBF∗s(t) includes the offset θs = Hs as follows:

DBF∗s(t) = ⌊(t + Hs)/Ps⌋ · Q∗s + O∗s(t),     (10)

where

O∗s(t) = { Hs if t ≥ Ps − Hs, 0 otherwise.     (11)

The schedulability condition of Eq. (4) can then be extended by substituting DBFs(t) with DBF∗s(t).
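Eqs. (10) and (11) can be sketched the same way (function name ours):

```python
import math

def dbf_enhanced(t, Ps, Qs_star, Hs):
    # Demand bound of subsystem Ss under enhanced overrun: the offset
    # theta_s = Hs shifts the demand Hs time units earlier (Eqs. (10)-(11)).
    return (math.floor((t + Hs) / Ps) * Qs_star
            + (Hs if t >= Ps - Hs else 0))
```

With Ps = 50, Q∗s = 10 and Hs = 4, the demand already jumps to 14 at t = 46 (i.e., Ps − Hs), whereas the basic-overrun demand of Eq. (8) first becomes nonzero at t = 50; this earlier demand is exactly the price of the replenishment offset.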

6 Comparison between basic and enhanced overrun mechanisms

In this section, we compare the efficiency of the two overrun mechanisms. First, we show the effect of using each of them locally, i.e., at the subsystem level. Then, we show their effect globally, i.e., at the system level.

6.1 Subsystem-level comparison

The following lemma shows that the minimum required subsystem budget when using enhanced overrun is lower than or equal to the minimum required budget when using basic overrun.

Lemma 1 Assume that the minimum required budget to schedule all tasks in a subsystem Ss using basic overrun is Q◦s, and that the corresponding budget using enhanced overrun is Q∗s. Then Q∗s ≤ Q◦s.

Proof A subsystem Ss is exactly schedulable iff, in addition to Eq. (3), ∑_{i=1}^{n} dbfEDF(i, t) + b(t) = sbf(t) for some t with min_i Di ≤ t ≤ LCM_{Ss} + max_i Di (see Theorem 2.2 in [11]). This means that if the budget Qs is the minimum required to guarantee the schedulability of the tasks in Ss, then there is a set of times te at which



∑_{i=1}^{n} dbfEDF(i, t) + b(t) = sbf(t). Without loss of generality, we assume that te includes one element. If we use the same subsystem budget Qs for both basic and enhanced overrun, then

sbf◦(t) = sbf(t − Hs),     (12)

where sbf(t) is used with enhanced overrun and the shift in time by −Hs comes from the difference in BD between enhanced and basic overrun. From Eq. (1) and Eq. (12), we have two cases:

case 1: sbf◦(t) = sbf(t) for t ∈ [kPs − Qs + Hs, (k + 1)Ps − 2Qs], where k is an integer, k > 1.

case 2: sbf◦(t) < sbf(t) for t outside the range in case 1.

If te ∈ [kPs − Qs + Hs, (k + 1)Ps − 2Qs], then sbf◦(te) = sbf(te), and hence ∑_{i=1}^{n} dbfEDF(i, te) + b(te) = sbf◦(te), which means that Qs may be enough to schedule all tasks in the subsystem Ss using basic overrun, so Q∗s = Q◦s at time t = te. However, it must be checked whether Eq. (3) holds for all other times t, to be sure that the subsystem Ss is still schedulable.

If te is not in the range given for case 1, then sbf◦(te) < sbf(te), and then sbf◦(te) < ∑_{i=1}^{n} dbfEDF(i, te) + b(te), which means that the budget Qs will not satisfy the condition in Eq. (3) using basic overrun, and a higher budget must be provided. In this case Q∗s < Q◦s. □

6.2 System-level comparison

As shown in the previous section, the minimum required budget when using enhanced overrun is lower than or equal to the minimum budget when using basic overrun. However, at the system level, it is not easy to see which of the two approaches requires the least overall system CPU resources.

Let us define the system load as a quantitative measure representing the minimum amount of CPU allocation necessary to guarantee the schedulability of the system S. We will then investigate the impact of each overrun mechanism on the system load. The system load loadsys is computed as follows:

loadsys = max_t (LBF(t)/t).     (13)

Note that α = loadsys is the smallest fraction of the CPU that is required to schedule all the subsystems in the system S (satisfying Eq. (4)), assuming that the resource supply function (at system level) is αt.
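Eq. (13) can be evaluated numerically at the points where the demand bound functions change (a sketch; the names are ours, B(t) is implemented simply as max{Hk | Pk > t}, which upper-bounds Eq. (6) by omitting the Ps ≤ t qualification, and the horizon is a demonstration choice):

```python
import math

def load_sys(subsystems, overrun="basic", horizon=300):
    # subsystems: list of (Ps, Qs, Hs). For "basic", demand follows
    # Eqs. (8)-(9) and changes at t = a*Ps; for "enhanced" it follows
    # Eqs. (10)-(11) and changes at t = a*Ps - Hs.
    def dbf(t, Ps, Qs, Hs):
        if overrun == "basic":
            return math.floor(t / Ps) * Qs + (Hs if t >= Ps else 0)
        return math.floor((t + Hs) / Ps) * Qs + (Hs if t >= Ps - Hs else 0)

    def blocking(t):
        return max((Hs for (Ps, _, Hs) in subsystems if Ps > t), default=0)

    points = sorted({a * Ps - (Hs if overrun == "enhanced" else 0)
                     for (Ps, _, Hs) in subsystems
                     for a in range(1, int(horizon // Ps) + 1)})
    return max((blocking(t) + sum(dbf(t, *s) for s in subsystems)) / t
               for t in points)
```

For the parameters of Example 1 below (S1 = (50, 10, 4) and S2 = (150, 15, 8) under basic overrun; Q∗2 = 14.9 under enhanced overrun), this evaluates to 0.44 and ≈ 0.478, respectively.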

Looking at Eq. (13), we can decrease loadsys by lowering LBF(t). Enhanced overrun makes LBF(t) lower compared with basic overrun, since LBF(t) depends on the subsystem budgets. However, because of the offset imposed in the global scheduling when using enhanced overrun, the resource demand must be provided earlier (see Eq. (10)). Comparing Eq. (8), which computes DBF◦s(t) using basic overrun, with Eq. (10), which computes DBF∗s(t) using enhanced overrun, we can see that DBF◦s(t) changes when t = a × Ps for s = 1, ..., m, where m is the number of subsystems and a is an integer, a > 0, while DBF∗s(t) changes when t = a × Ps − Hs for s = 1, ..., m. Note that loadsys is evaluated at the times when the demand bound functions change, so it is not possible in general to decide whether using enhanced overrun requires a lower loadsys than basic overrun. However, in some special cases, depending on the parameters of each subsystem, we can decide which of the two overrun mechanisms produces the lower loadsys. For a subsystem Ss, we have two cases:

1. If Q◦s/Ps < Q∗s/(Ps − Hs), then the loadsys using enhanced overrun will be greater than or equal to the loadsys with basic overrun. The reason is that max(DBF◦s(tba)/tba) < max(DBF∗s(ten)/ten), where tba = a × Ps is the time when DBF◦s(t) changes, and ten = a × Ps − Hs is the time when DBF∗s(t) changes.

2. Otherwise, it depends on the parameters of the other subsystems which of the two overrun mechanisms is better in terms of loadsys.

We explain the previous two cases with the following three examples:

Example 1: For the first case (Q◦s/Ps < Q∗s/(Ps − Hs)), suppose that a system S consists of two subsystems: S1 with parameters P1 = 50, Q◦1 = 10, Q∗1 = 10, H1 = 4, and S2 with parameters P2 = 150, Q◦2 = 15, Q∗2 = 14.9, H2 = 8. Then, using enhanced overrun, loadsys = 0.478, and using basic overrun, loadsys = 0.44. Basic overrun is better than enhanced overrun by about 3.8%.

Example 2: For the second case (Q◦s/Ps ≥ Q∗s/(Ps − Hs)), suppose that a system S consists of two subsystems: S1 with parameters P1 = 50, Q◦1 = 10, Q∗1 = 9, H1 = 4, and S2 with parameters P2 = 150, Q◦2 = 15, Q∗2 = 12, H2 = 8. Then, using enhanced overrun, loadsys = 0.456, and using basic overrun, loadsys = 0.44. Basic overrun is better than enhanced overrun by about 1.2%.

Example 3: For the second case (Q◦s/Ps ≥ Q∗s/(Ps − Hs)), suppose that a system S consists of two subsystems: S1 with parameters P1 = 20, Q◦1 = 5, Q∗1 = 4, H1 = 2, and S2 with parameters P2 = 150, Q◦2 = 10, Q∗2 = 9, H2 = 2. Then, using enhanced overrun, loadsys = 0.285, and using basic overrun, loadsys = 0.3. Enhanced overrun is better than basic overrun by about 11.5%.
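The case-1 condition can be checked per subsystem; a sketch (the function name is ours), applied to the example parameters above:

```python
def case1_holds(Qo, Qe, P, H):
    """True when Q°s/Ps < Q*s/(Ps - Hs); enhanced overrun then yields
    a loadsys greater than or equal to that of basic overrun (case 1),
    otherwise the outcome depends on the other subsystems (case 2)."""
    return Qo / P < Qe / (P - H)

# S1 of Example 1 (Q°1 = 10, Q*1 = 10, P1 = 50, H1 = 4): case 1 holds.
# S1 of Example 2 (Q°1 = 10, Q*1 = 9,  P1 = 50, H1 = 4): case 2.
```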


Page 7: [IEEE Factory Automation (ETFA 2008) - Hamburg, Germany (2008.09.15-2008.09.18)] 2008 IEEE International Conference on Emerging Technologies and Factory Automation - Scheduling of

7 Computing resource holding time

In this section we explain how to compute the resource holding time hj,i. Using the periodic virtual processor model, each subsystem Ss receives CPU resources with allocation time Qs every period Ps. During Qs, the CPU allocation is 100% of the CPU capacity (see Figure 2, where the slope of the supply curve during Q is one). The mechanism presented in Section 5 guarantees that locking and releasing a critical section of a globally shared resource Rj will happen within the allocated CPU resource Qs + θ. Then hj,i will include the execution time of the task τi that locks Rj inside the critical section, as well as the interference from all tasks within the same subsystem that can preempt the execution inside the critical section. The worst-case scenario happens when all tasks that can preempt the execution of the critical section are released just after task τi has entered the critical section of resource Rj. Then, hj,i is computed [14] using Wj(t) as follows:

Wj(t) = cxj + Σ_{τk∈U} min( ⌈t/Tk⌉, ⌊(Di − Dk)/Tk⌋ + 1 ) · Ck,    (14)

where cxj = max{ci,j} is the maximum execution time of task τi inside the critical section of the resource Rj, and U is the set of tasks such that U = {τk | πk > rcj}.

The resource holding time hj,i is the smallest positive time t∗ such that

Wj(t∗) = t∗.    (15)

Note that we have not counted preemption inside the critical section from other subsystems when calculating the resource holding time. However, we do take the interference from higher-preemption-level subsystems into account in the global schedulability analysis. Looking at Eq. (4), hj,i may act as blocking on other subsystems. Moreover, it is also the extra capacity required to prevent budget expiry inside a critical section. When hj,i is considered as a blocking time for other subsystems, the effect of interference from higher-preemption-level subsystems inside the critical section is included in the global schedulability analysis of the blocked subsystem (in the summation part of Eq. (4)). When hj,i is used to evaluate the overrun, interference from other subsystems inside the critical section is not important, as the only requirement is that the locked resource is released before the end of the period.
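Eq. (15) can be solved with the usual fixed-point iteration, starting from t = cxj and re-evaluating Eq. (14) until the value stabilizes. A minimal sketch, assuming each task in U is given as a (Ck, Tk, Dk) tuple and Di is the relative deadline of the lock-holding task; the iteration cap is our own safeguard, not part of the analysis:

```python
import math

def resource_holding_time(cxj, Di, U, max_iter=1000):
    """Smallest t* with Wj(t*) = t* (Eqs. (14)-(15)).

    cxj: maximum critical-section execution time for resource Rj.
    U:   list of (Ck, Tk, Dk) tuples for the tasks that can preempt
         inside the critical section.
    Returns None if no fixed point is found within max_iter steps.
    """
    t = cxj
    for _ in range(max_iter):
        # One evaluation of Wj(t) per Eq. (14).
        w = cxj + sum(
            min(math.ceil(t / Tk), math.floor((Di - Dk) / Tk) + 1) * Ck
            for (Ck, Tk, Dk) in U)
        if w == t:          # fixed point reached: Wj(t*) = t*
            return t
        t = w
    return None
```

Since Wj is monotonically non-decreasing and the iteration starts below any fixed point, it converges to the smallest t∗ whenever one exists.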

Eq. (14) can be simplified to evaluate hj,i as shown below:

hj,i = cxj + Σ_{τk∈U} Ck.    (16)

The difference between Eq. (16) and Eq. (14) is that in Eq. (16) we assume that each task that can preempt inside the critical section executes only once (we remove the min function from the summation of Eq. (14)). The reason why it is safe to assume only one execution of each preempting task inside the critical section is given in the following lemma, which shows that if a task executes more than once inside the critical section, the subsystem becomes unschedulable.

Lemma 2 For a subsystem that uses an overrun mechanism to arbitrate access to global shared resources under the periodic virtual processor model, each task that is allowed to preempt the execution of another task currently inside the critical section of a globally shared resource can, in the worst case, only execute (cause interference) once.

Proof We prove this lemma by considering two cases:

(1) Ps < Tm (where Tm = min(Ti) for all i = 1, ..., n). If the task with period Tm executes 2 or more times inside the critical section, the resource will be locked during this period, i.e., hj,i > Tm and hence hj,i > Ps, which in turn means that the CPU utilization required by the subsystem Ss will be Us = (Qs + hj,i)/Ps > 1.

(2) If Ps ≥ Tm, then sbfΓ(t) should provide at least Cm at time t = Tm to pass the schedulability test in Eq. (3). Note that sbfΓ(t) = 0 during t ∈ [0, 2Ps − 2Qs], so 2Ps − 2Qs < Tm, which means Qs > Ps − Tm/2. If the task with period Tm executes 2 times inside the critical section, then hj,i > Tm. Hence, Qs + hj,i > Ps + Tm/2, which means Us = (Qs + hj,i)/Ps > 1. □

From Lemma 2, we can conclude that if t∗ > Tm then the required CPU utilization Us will be greater than one. This means, in turn, that each task that can preempt the execution of a critical section may do so at most once in order to keep the utilization of a subsystem below one. This proves the correctness of Eq. (16), which is based on the assumption that, in the worst case, each task interferes only once while a task is in the critical section of the resource Rj. If the value of hj,i becomes greater than min(Tm, Ps), then we can conclude that the subsystem is not schedulable, and we do not have to continue the calculation towards finding an exact value of hj,i.
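Combining Eq. (16) with this cut-off gives a very cheap computation; a sketch under the same assumptions (the function and parameter names are ours):

```python
def resource_holding_time_simple(cxj, preempting_costs, Tm, Ps):
    """Eq. (16): hj,i = cxj + sum of Ck over the tasks in U, each
    counted at most once (Lemma 2).

    Tm is the shortest task period in the subsystem and Ps the
    subsystem period. Returns None when hj,i > min(Tm, Ps), in which
    case the subsystem cannot be schedulable and no exact hj,i is
    needed.
    """
    h = cxj + sum(preempting_costs)
    return h if h <= min(Tm, Ps) else None
```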

8 Summary

In this paper we have considered a new overrun mechanism, for hierarchical scheduling frameworks, that can be used in the domain of open environments [7]. The main contributions of this paper are twofold: (1) we have presented analysis of when one overrun mechanism is better than the other, and (2) we have presented how to calculate resource holding times when using the periodic virtual processor model.

The results indicate that in the general case it is not trivial to evaluate which overrun method is better, as their impact on the CPU utilization is highly dependent on global system parameters such as subsystem periods and budgets. However, for open systems, enhanced overrun is generally better than basic overrun, as



it moves the effect of overrun from the local to the global schedulability analysis.

Future work includes the development of local and global schedulability analysis for Fixed Priority Scheduling (FPS), as the current results consider Earliest Deadline First (EDF). Another interesting issue is to compare the enhanced overrun mechanism with other synchronization mechanisms such as BWI [19], the BROE server [13] and SIRAP [6].

References

[1] L. Abeni and G. Buttazzo. Integrating multimedia applications in hard real-time systems. In Proceedings of the 19th IEEE International Real-Time Systems Symposium (RTSS'98), pages 4–13, December 1998.

[2] L. Almeida and P. Pedreiras. Scheduling within temporal partitions: response-time analysis and server design. In Proceedings of the 4th ACM International Conference on Embedded Software (EMSOFT'04), pages 95–103, September 2004.

[3] T. P. Baker. Stack-based scheduling of realtime processes. Real-Time Systems, 3(1):67–99, March 1991.

[4] S. Baruah, A. Mok, and L. Rosier. Preemptively scheduling hard-real-time sporadic tasks on one processor. In Proceedings of the 11th IEEE International Real-Time Systems Symposium (RTSS'90), pages 182–190, December 1990.

[5] S. K. Baruah. Resource sharing in EDF-scheduled systems: A closer look. In Proceedings of the 27th IEEE International Real-Time Systems Symposium (RTSS'06), pages 379–387, December 2006.

[6] M. Behnam, I. Shin, T. Nolte, and M. Nolin. SIRAP: A synchronization protocol for hierarchical resource sharing in real-time open systems. In Proceedings of the 7th ACM and IEEE International Conference on Embedded Software (EMSOFT'07), pages 279–288, October 2007.

[7] M. Behnam, I. Shin, T. Nolte, and M. Nolin. An overrun method to support composition of semi-independent real-time components. In Proceedings of the IEEE International Workshop on Component-Based Design of Resource-Constrained Systems (CORCS'08), July 2008.

[8] R. I. Davis and A. Burns. Hierarchical fixed priority pre-emptive scheduling. In Proceedings of the 26th IEEE International Real-Time Systems Symposium (RTSS'05), pages 389–398, December 2005.

[9] R. I. Davis and A. Burns. Resource sharing in hierarchical fixed priority pre-emptive systems. In Proceedings of the 27th IEEE International Real-Time Systems Symposium (RTSS'06), pages 257–270, December 2006.

[10] Z. Deng and J. W.-S. Liu. Scheduling real-time applications in an open environment. In Proceedings of the 18th IEEE International Real-Time Systems Symposium (RTSS'97), pages 308–319, December 1997.

[11] A. Easwaran, M. Anand, and I. Lee. Compositional analysis framework using EDP resource models. In Proceedings of the 28th IEEE International Real-Time Systems Symposium (RTSS'07), pages 129–138, December 2007.

[12] X. Feng and A. Mok. A model of hierarchical real-time virtual resources. In Proceedings of the 23rd IEEE International Real-Time Systems Symposium (RTSS'02), pages 26–35, December 2002.

[13] N. Fisher, M. Bertogna, and S. Baruah. The design of an EDF-scheduled resource-sharing open environment. In Proceedings of the 28th IEEE International Real-Time Systems Symposium (RTSS'07), pages 83–92, December 2007.

[14] N. Fisher, M. Bertogna, and S. Baruah. Resource-locking durations in EDF-scheduled systems. In Proceedings of the 13th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS'07), pages 91–100, April 2007.

[15] T.-W. Kuo and C.-H. Li. A fixed-priority-driven open environment for real-time applications. In Proceedings of the 20th IEEE Real-Time Systems Symposium (RTSS'99), pages 256–267, December 1999.

[16] G. Lipari and S. Baruah. Efficient scheduling of real-time multi-task applications in dynamic systems. In Proceedings of the 6th IEEE Real-Time Technology and Applications Symposium (RTAS'00), pages 166–175, May 2000.

[17] G. Lipari and E. Bini. Resource partitioning among real-time applications. In Proceedings of the 15th Euromicro Conference on Real-Time Systems (ECRTS'03), pages 151–158, July 2003.

[18] G. Lipari, J. Carpenter, and S. Baruah. A framework for achieving inter-application isolation in multiprogrammed hard-real-time environments. In Proceedings of the 21st IEEE International Real-Time Systems Symposium (RTSS'00), pages 217–226, December 2000.

[19] G. Lipari, G. Lamastra, and L. Abeni. Task synchronization in reservation-based real-time systems. IEEE Transactions on Computers, 53(12):1591–1601, December 2004.

[20] A. Mok, X. Feng, and D. Chen. Resource partition for real-time systems. In Proceedings of the 7th IEEE Real-Time Technology and Applications Symposium (RTAS'01), pages 75–84, May 2001.

[21] R. Rajkumar, L. Sha, and J. P. Lehoczky. Real-time synchronization protocols for multiprocessors. In Proceedings of the 9th IEEE International Real-Time Systems Symposium (RTSS'88), pages 259–269, December 1988.

[22] S. Saewong, R. Rajkumar, J. P. Lehoczky, and M. H. Klein. Analysis of hierarchical fixed-priority scheduling. In Proceedings of the 14th Euromicro Conference on Real-Time Systems (ECRTS'02), pages 173–181, June 2002.

[23] L. Sha, J. P. Lehoczky, and R. Rajkumar. Task scheduling in distributed real-time systems. In Proceedings of the International Conference on Industrial Electronics, Control, and Instrumentation, pages 909–916, November 1987.

[24] I. Shin and I. Lee. Periodic resource model for compositional real-time guarantees. In Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS'03), pages 2–13, December 2003.

[25] I. Shin and I. Lee. Compositional real-time scheduling framework. In Proceedings of the 25th IEEE International Real-Time Systems Symposium (RTSS'04), pages 57–67, December 2004.

[26] I. Shin and I. Lee. Compositional real-time scheduling framework with periodic model. ACM Transactions on Embedded Computing Systems, 7(3):30:1–30:39, April 2008.
