
Computer Communications 21 (1998) 334-349


An architecture for adaptive QoS and its application to multimedia systems design

M. Ott*, G. Michelitsch, D. Reininger, G. Welling

C&C Research Laboratories, NEC USA, Inc., 4 Independence Way, Princeton, NJ 08540, USA

Abstract

We describe a prototype implementation of a distributed multimedia system that generalizes the concept of QoS to all layers of its software architecture. Each layer deals with QoS at its appropriate level of abstraction, using a generic API for communicating QoS parameters and values to the layers above and below. The aggregation of these parameters and values is called a service contract. This architecture provides a hierarchical framework for designing adaptive multimedia systems. Furthermore, the API allows for reporting of contract violations as well as dynamic renegotiation of the contract terms.

A proof-of-concept multimedia system was built to evaluate the proposed architecture. Key components of this system are: a graphical user interface that dynamically communicates the quality expected by the user to lower-level components; a dynamic network service that efficiently matches network resources to user requirements; and a processor scheduler which schedules tasks according to their execution requirements. Our experience with this system showed that the proposed architecture is an efficient framework for building adaptive multimedia systems. © 1998 Elsevier Science B.V.

Keywords: multimedia distributed systems; quality of service; variable bit-rate video; graphical user interface

1. Introduction

The scalability of software systems within the bounds of available resources entails the notion of quality of service (QoS). Current systems address this notion either by hiding variations in available resources low down in the software hierarchy, or by implementing a restrictive resource allocation model based on a hard call admission policy that either fully grants or rejects the resources requested. However, we believe the usage patterns of current systems require QoS support at all levels of the software hierarchy. Furthermore, a soft resource allocation model is essential for software systems aimed at the computing and communication environments of the future.

Consider the scenario of a financial analyst who must follow an inherently multi-modal work style. He needs to monitor and interact with different sources of information on his portable hand-held device. Such information may include real-time financial data, on-line news feeds, and a variety of broadcast audio and video services.

The portability of our financial analyst’s device implies limited screen real estate, which must be organized to reflect

* Corresponding author.

0140-3664/98/$19.00 © 1998 Elsevier Science B.V. All rights reserved

PII S0140-3664(97)00167-9

his preferences. During a typical session, an important story from a news feed showing in a small window might catch the attention of our analyst. He focuses on this report for a short while, ignoring everything else on the screen. He then searches for related background information, and finally examines the effect of the story on the financial market. During the session, the analyst has moved his attention from one source of information to another, demanding different levels of detail in the data being presented. Clearly, the variation in the required QoS can be exploited by the user interface for better utilization of screen real estate.

Imagine our financial analyst commuting to his office by train. While riding the train, he may have different network connectivity from what he has at his office. Furthermore, there may be frequent hand-off operations during the course of his commute, with variations in network services. It is difficult to reserve network resources in such a scenario with the dual objective of maintaining QoS for a particular user and achieving high network utilization. The network can balance these conflicting requirements by providing QoS support with soft guarantees within which adaptive applications can continue to operate.

Some of the information sources our financial analyst is monitoring may have multimedia content. The flexibility of software in terms of customization and configuration encourages software processing of such multimedia content. Since multimedia tasks are periodic in nature, it is feasible to process multiple media streams simultaneously. On the one hand, media processing is associated with real-time deadlines, while on the other, it can be computationally expensive. In order to satisfy both these conflicting requirements under resource shortage, it is imperative for processor allocation to provide the notion of QoS. This allows the perceived performance of a multimedia task to be altered, either in response to user preference, or because its processing requirement cannot be met.

In order to build systems which can span the varied requirements of our financial analyst, we propose a framework within which QoS-aware media applications can easily be constructed. This framework is generic enough to express the QoS requirements of such diverse resources as the display, the network and the processor. Furthermore, the framework allows the mapping of QoS specifications at a particular level of the software hierarchy to QoS specifications at a lower level. This naturally enables the construction of a hierarchy of services, with increasingly abstract notions of QoS at higher levels of the hierarchy.

The rest of the paper is organized as follows. In Section 2 we describe our concept of QoS contracts, through which a software module specifies its resource requirements to a service provider. We describe the application of our concept to the user interface, network, and processor allocation in Sections 3-5, respectively. In Section 6 we present an architectural overview of our system. We give details about our implementation in Section 7, and discuss our experience with it in Section 8. Finally, we present our conclusions in Section 9.

2. QoS architecture

Most architectures for quality of service (QoS) provide a QoS-aware API to applications. A comprehensive review of such QoS architectures can be found in Ref. [1]. These APIs either add QoS parameters to standard system calls, or raise system abstractions to a higher level, filling the gap with what is often referred to as middle-ware. Although traditional QoS support has been restricted to the network domain, the same principles of modularity and abstraction found in network and system design are routinely used in applications as well. It therefore seems natural to define an architecture which allows the introduction of QoS at any level: from the CPU and network resources, to the user's perception.

In addition, the notion of applications competing with each other often runs contrary to the way they are actually used. For instance, all applications on a terminal serve the same user, who may frequently change the relative importance of applications. In a resource-limited environment such usage patterns can aid in allocating a resource where it is needed most. Resources can be shifted from a less important application to one which currently has the user's focus. To support such behavior a mechanism is needed to shift resources between applications. It is obviously advantageous if the applications actively cooperate in this process.

Fig. 1. Consumer-provider model.

In large systems, interactions and dependencies between the various components can often be described in terms of the services they provide to each other. It usually becomes unnecessary to know the inner workings of a component if its functionality can be fully described. In fact, to a consumer of a service, the cost and quality of the service become more important than the internals of the component that provides it. While this allows a provider to choose different strategies and methods to implement a specific service, it also gives the consumer a choice between multiple providers of the same service, with possibly differing qualities and cost.

In order to realize the objectives described, we define a simple model as shown in Fig. 1. A consumer and provider interact using a generic API which can be recursively applied to all levels of the software hierarchy. The consumer desires a service which is specified by a service contract. A provider for the service is located by some means, such as a third-party broker, and a binding is established. After the binding has been made, either party to the contract can initiate a renegotiation of the contract terms at any time. Changes in QoS can therefore be initiated by the user according to his preferences (top-down), or by the most primitive resource providers (bottom-up) in order to change resource distribution. In case of a failed renegotiation, the consumer can also bind to a new service provider.

2.1. Service contract

Part of the generic API is the abstract data type of a Service Contract. It holds a set of QosParameters, the specifics of which completely describe the service. A parameter is a tuple of name, type and value (Table 1). These terms can either be used as requirements that a consumer imposes on a provider, or as a measure of compliance of a service with its requirements.

Table 1
QoS parameter examples

Name               Type                 Value
CurrentFrameRate   IntegerValue         30
FrameRateRange     IntegerRange         [1...30]
Cost               IntegerTargetRange   [100 (150) 200]
Priority           OrderedSet           Low, medium, high
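The service contract abstract data type and the parameter tuples of Table 1 might be sketched as follows. This is our own illustration, not the authors' actual API; all class and method names are invented for the example.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class QosParameter:
    """A QoS parameter is a tuple of name, type and value (cf. Table 1)."""
    name: str   # e.g. "CurrentFrameRate"
    type: str   # e.g. "IntegerValue", "IntegerRange", "OrderedSet"
    value: Any  # e.g. 30, (1, 30), ["low", "medium", "high"]


class ServiceContract:
    """A set of QoS parameters that completely describes a service."""

    def __init__(self, *params: QosParameter):
        self._params = {p.name: p for p in params}

    def get(self, name: str) -> QosParameter:
        return self._params[name]


# The four example parameters from Table 1:
contract = ServiceContract(
    QosParameter("CurrentFrameRate", "IntegerValue", 30),
    QosParameter("FrameRateRange", "IntegerRange", (1, 30)),
    QosParameter("Cost", "IntegerTargetRange", (100, 150, 200)),  # min (target) max
    QosParameter("Priority", "OrderedSet", ["low", "medium", "high"]),
)
```

The same terms can serve either as requirements a consumer imposes on a provider, or as a measure of a running service's compliance.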

A service provider can itself enlist the service of other software components, creating a hierarchy of services. There can be a distinct service contract at each level, using the same generic interface for contract negotiation. As shown in Fig. 2, it will be very common for a consumer to request the service of multiple providers. In fact, most services will simply produce a smart mapping from their provider contract to multiple consumer subcontracts. An analogous example would be a tour operator who packages subcontracts with various airlines, rental car agencies and hotel operators into a single holiday package.

In multimedia systems a video service may subcontract with a video server, a transport service and a display station. These in turn may recursively subcontract with lower level services, which ultimately terminate in contracts for physical resources, such as CPU cycles, buffer space and network capacity.

Also shown in Fig. 2 is our notion of control and management. It may be possible to decompose a given contract into several different sets of subcontracts. It is the function of the management component to evaluate these options and choose the most appropriate one. The control component then attempts to satisfy the contract by fine-tuning the chosen set of subcontracts. If Control fails to do so, Management is notified, which can then attempt to maintain contract compliance by trying a different set of subcontracts. If this fails, the consumer of this service is notified and presented with a feasible contract.
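The Management/Control split described above can be sketched as follows. This is a deliberately minimal illustration under our own assumptions (resource needs as simple capacity sums); it is not the paper's implementation, and all names are invented.

```python
# Control: check one set of subcontracts against the resources its
# providers make available (here, a plain capacity lookup).
def control_satisfies(subcontracts, available):
    return all(available.get(name, 0) >= need for name, need in subcontracts)


# Management: try alternative decompositions in preference order and
# return the first feasible one; None signals that the consumer must be
# notified and presented with a different, feasible contract.
def manage(candidate_sets, available):
    for subcontracts in candidate_sets:
        if control_satisfies(subcontracts, available):
            return subcontracts
    return None


# Two hypothetical ways to realize the same video contract: decode
# remotely (high network rate, little CPU) or locally (low rate, more CPU).
candidates = [
    [("network_kbps", 2000), ("cpu_pct", 10)],
    [("network_kbps", 500), ("cpu_pct", 40)],
]
chosen = manage(candidates, {"network_kbps": 800, "cpu_pct": 50})
```

Here the first decomposition is infeasible (only 800 kbps available), so Management falls back to the second, mirroring the escalation path described in the text.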

2.2. Generic interface for contract negotiation

The generic API between any consumer and provider (Fig. 1) consists of three primitives necessary for service contract negotiations:

• Request: the consumer presents a contract to the provider.
• Notify: the provider presents the consumer with a new contract if the current one cannot be maintained.
• Status: optionally, the consumer can query the provider about contract compliance.

Fig. 2. Hierarchy.
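The three primitives can be sketched as a pair of toy classes. The frame-rate contract, the method names and the degradation trigger are our own assumptions for illustration; the paper does not prescribe this interface shape.

```python
class Provider:
    def __init__(self, capacity_fps):
        self.capacity_fps = capacity_fps
        self.contract = None
        self.consumer = None

    def request(self, consumer, wanted_fps):
        """Request: the consumer presents the contract it desires; the
        provider answers with the contract it can actually maintain."""
        self.consumer = consumer
        self.contract = min(wanted_fps, self.capacity_fps)
        return self.contract

    def status(self):
        """Status: the consumer queries contract compliance."""
        return self.contract is not None and self.contract <= self.capacity_fps

    def degrade(self, new_capacity_fps):
        """On resource loss, Notify the consumer with a new (feasible)
        contract if the current one cannot be maintained."""
        self.capacity_fps = new_capacity_fps
        if self.contract is not None and self.contract > new_capacity_fps:
            self.contract = new_capacity_fps
            self.consumer.notify(self.contract)


class Consumer:
    def __init__(self):
        self.current = None

    def notify(self, new_contract):
        # Adapt to the new terms; a real consumer might instead try to
        # rebind to a different provider of the same service.
        self.current = new_contract


c = Consumer()
p = Provider(capacity_fps=30)
c.current = p.request(c, wanted_fps=25)  # binding established at 25 fps
p.degrade(15)                            # provider can no longer comply
```

Either party can initiate renegotiation: the consumer via a new Request, the provider via Notify, exactly as in the top-down/bottom-up flows described in Section 2.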

We believe that this simple interface is sufficient to describe the interaction between any service provider and consumer. To verify this hypothesis we built a proof-of-concept multimedia system in which all interaction is structured around this generic interface. The components of this system are:

• A graphical user interface based on direct manipulation techniques and three-dimensional display. This user interface can provide both implicit cues and explicit requests to lower-level components about the quality expected by the end-user.
• A dynamic network service that efficiently matches network resources to user requirements. This service provides feedback on resource availability to the upper layers of the system.
• A processor scheduler which schedules tasks according to their execution requirements. It supports dynamic changes in execution profiles and initiates renegotiation based on processor utilization.

We describe the design of these components in the next three sections.

3. QoS aware user interface

Although user interface design issues are important for any interactive system, there are few results reported in the research community about user interface issues in relation to QoS. Work such as Refs. [2,3] discusses the relation between user perception and video play-out quality. The survey in Ref. [4] describes an approach where users select QoS parameters on the basis of example displays. We, on the other hand, expose the notion of QoS to the user of interactive multimedia systems through new user interface concepts and interaction techniques. For that purpose we use CockpitView [5], an experimental user interface framework which offers the following key features:

• Support for a large number of objects on the screen that can be viewed and manipulated concurrently through the use of three-dimensional techniques. This surpasses current windowing user interfaces, which have to rely on icons to reduce the consumption of screen real estate.
• User interaction techniques that work equally well on an office computer with a mouse attached and on a portable device with a touch panel.
• Provision for nonexpert users to work with the system out of the box, without having to read an extensive manual first.

Fig. 3 shows a snapshot of the user interface in action. To the left is a compound document (NEC Times) representing a service provider. Several of the embedded video clips have been dragged off the document and placed on the data landscape for viewing. The one at the bottom of the screen has both the ServiceMeter and the video control tool attached to allow interaction with the user. The other video clips further back show their content at a smaller scale factor according to their placement on the data landscape.

3.1. Three-dimensional data landscape

Based on the fact that humans are generally very good at remembering objects by their location, we chose a spatial metaphor for organizing information entities in a virtual landscape. Information is encapsulated in objects which are displayed in perspective on the surface of that landscape. Recent advances in low-cost three-dimensional graphics hardware make this approach feasible. However, in contrast to the video game industry, where the emphasis is on realistic rendering of three-dimensional objects, our goal is to optimize the use of limited screen real estate, while providing users with the context they need [6,7].

At the same time, we do not believe that ordinary users will want to use special equipment for three-dimensional input and output. We want to be able to manipulate objects directly on a flat screen with either a pen or our fingers. In order to do so we restrict the degrees of freedom an object has in our information landscape. By grabbing the projected image of an object and dragging it on the screen, we move the object on the data landscape along a path on the surface of that landscape, as shown in Fig. 4.

The user looks at the landscape as if through the window of a cockpit. He can reach out of the cockpit, grab an object on the landscape, and move it around. When he drags an object closer to the cockpit window, it will expand, revealing more of its content. If pushed back towards the horizon the object will shrink. However, all objects will keep their full functionality no matter at what scale factor they are displayed.

Fig. 3. Interface screen shot.

Fig. 4. Interaction with an object in the three-dimensional landscape.

3.2. Active tools

Every object on the screen is an active object that reacts to user-generated events as well as to other objects. When the user drops an object onto another one and both objects are compatible, a compound object will be created. If one of these objects is what we call an Active Tool, the attachment will also cause the tool object to perform an operation on the other object.

This is a less abstract, more real-world-like way of manipulating objects [8]. The active tool concept allows for both an object-verb approach, where the user first identifies the target of an operation and then the command, as well as a verb-object approach, where the user starts by selecting a command and then specifies the target object.

The familiar set of pickup, drag and drop operations is all the user has to master in order to operate the entire user interface. The possibility to group, modify and therefore customize commands simply by assembling existing active tools enables users to perform tasks that previously required some form of programming, or the use of a dedicated user interface builder.

3.3. ServiceMeter: service contracts at the user interface level

We designed a generic object which allows users to specify and view QoS contracts in a consistent way across different types of media objects. The ServiceMeter changes its appearance and the semantics of its controls based on the type of object it is attached to. For that purpose it uses the common convention of active tools and the inherent mechanisms for building compound objects defined in the CockpitView framework.

The value range of each parameter or parameter pair in the contract is projected to a different side of a cube. At the highest level of abstraction we start with a single parameter which expresses quality in terms from low to high. At the next level this parameter is mapped to a different set of parameters, in the case of video this could be frame rate and detail, where detail then again maps to resolution and quantization.
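The recursive mapping just described, from a single quality value to frame rate and detail, and from detail to resolution and quantization, can be sketched as follows. The concrete numbers and ranges are invented for illustration; the paper does not give the actual mapping functions.

```python
def map_quality(quality):
    """Top level: quality in [0.0, 1.0] -> (frame_rate, detail)."""
    frame_rate = round(1 + quality * 29)  # mapped into the 1..30 fps range
    detail = quality                      # passed down for further mapping
    return frame_rate, detail


def map_detail(detail):
    """Next level: detail in [0.0, 1.0] -> (resolution, quantizer)."""
    widths = [160, 320, 640]                 # hypothetical resolution steps
    resolution = widths[min(2, int(detail * 3))]
    quantizer = round(31 - detail * 28)      # coarse (31) .. fine (3)
    return resolution, quantizer


fps, detail = map_quality(0.5)   # mid-range quality
res, q = map_detail(detail)      # refine detail into resolution/quantizer
```

Each level of the cube in Fig. 5 would expose one of these mappings, with the user drilling down from the abstract quality slider to the concrete parameters.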

Fig. 5 shows the cube of the ServiceMeter in three different stages: first with the control portion hidden, then revealing the slider for the quality parameter, and finally showing the two-dimensional slider for controlling detail over frame rate (other parameters of the contract, including those pertaining to audio, are mapped to different sides of the cube not shown in this figure). The user moves from one level of abstraction to another by clicking on a particular area of the cube, which causes it to rotate to the next side.

The right portion of the ServiceMeter in Fig. 5 shows another aspect of the service contract. It displays cost, bandwidth and an alert indicator driven by feedback from the service provider. Bandwidth and cost scales are projected onto different sides of a tetrahedron. Similar to the QoS control cube, the tetrahedron rotates from one side showing bandwidth to the other side showing cost with the click of a mouse button.

This mechanism of recursively mapping one parameter to a set of less abstract parameters is a direct application of the service contract concept described in the previous section. The user interface allows the visualization of the service hierarchy by progressively revealing levels of detail in the parameters making up these contracts.

3.4. Explicit vs implicit QoS control

While the ServiceMeter gives the user explicit control over the service contract for each media object, information derived from the placement of objects in the data landscape can also be used to formulate a service contract. Objects placed close to the horizon can reduce their requirements for certain parameters, since they are displayed at a smaller scale. If, for example, such an object displays video, then because of the smaller size both the resolution and, to a lesser degree, the frame rate can be reduced without significantly changing the user's perception of the video played.

The strategy we use in our system is to employ implicit QoS control for all objects with no ServiceMeter attached. As soon as the user attaches the meter tool to an object, the ServiceMeter updates its parameter settings and displays to reflect the current setting maintained by that media object. As long as the ServiceMeter is attached, the QoS contract as shown on that tool is maintained regardless of the placement of the media object, effectively overriding the implicit mechanism.

Thus, by using the implicit strategy, we can leverage the cost-saving potential of our QoS approach (assuming that cost is related to bandwidth requirements from the network) without even having to introduce the concept to end-users. However, the advanced or simply curious user can easily override this behavior by using the ServiceMeter tool.

4. Network bandwidth allocation with dynamic QoS support

Fig. 5. ServiceMeter showing QoS parameters at different levels of abstraction.

As explained in Section 2, service contracts can be renegotiated during a session. This allows efficient management of the varying resource requirements of multimedia applications. In this section, we describe the dynamic renegotiation of the service contract with the network.

In an efficient multimedia system, the network service contract has to adapt to the dynamic bandwidth requirements of interactive variable bit-rate applications. For example, with the user interface described in Section 3, users can interactively move the video display window in the three-dimensional landscape, implicitly requesting a larger resolution image when the video is displayed at the front and a lower resolution when it is placed at the back. This results in significantly different network bandwidth requirements. Even when the video window is at a fixed position in the landscape, bandwidth renegotiations are still required to match the variable bit-rate (VBR) of compressed video at uniform quality [9]. Furthermore, when the network is congested and/or the application migrates to a mobile personal terminal with wireless connectivity, the allocated bandwidth should change to match the network's limitations [10].

Broad-band networks with QoS support, like ATM, accept connections based on a traffic contract. This contract is negotiated with each traffic source when a connection is being established, in order to allocate the appropriate resources to the connection. The traffic contract contains a traffic descriptor and network-level QoS parameters. In ATM networks, the traffic descriptor is called usage parameter control (UPC) [11]. The UPC consists of peak rate, burst size and sustained rate. The network-level QoS parameters include the cell delay variation, cell transfer delay and cell loss ratio. The network's connection admission control (CAC) uses the traffic contract at the connection establishment phase to either accept or reject the connection's request. If the connection can be established without adversely affecting the QoS of existing connections, then it will be accepted; otherwise it will be rejected [12].

However, the use of a fixed traffic contract does not allow dynamic QoS support and high channel utilization, since VBR multimedia traffic varies significantly over different time-scales. This becomes an even more critical issue in the mobile wireless multimedia scenario, where it is difficult to commit fixed resources to VBR traffic for the duration of a connection while maintaining high network utilization. Thus, for efficient QoS support of VBR multimedia applications we allow connections to renegotiate their UPC

parameters in the traffic contract. We believe that by extending the traditional VBR class to support UPC renegotiation, the practical trade-off between QoS and network utilization can be balanced. We call this extended VBR service VBR+ [13]. Fig. 6 shows the service contracts and parameters renegotiated among different system components when using the VBR+ service.
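The CAC accept/reject decision described above can be sketched as a toy admission check. A real CAC also accounts for cell-level QoS (delay variation, loss ratio); this version checks only aggregate sustained rate against link capacity, and the class and field names are our own illustration, not an ATM implementation.

```python
class Link:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.connections = []

    def admit(self, upc):
        """Admission check over a UPC-style traffic descriptor.

        upc: dict with 'peak', 'sustained' (kbps) and 'burst' (cells).
        Accept only if existing connections keep their contracted QoS,
        approximated here by the sum of sustained rates.
        """
        allocated = sum(c["sustained"] for c in self.connections)
        if allocated + upc["sustained"] <= self.capacity_kbps:
            self.connections.append(upc)
            return True   # accepted: existing QoS unaffected
        return False      # rejected: would degrade existing connections


link = Link(capacity_kbps=1000)
ok1 = link.admit({"peak": 800, "sustained": 600, "burst": 32})  # fits
ok2 = link.admit({"peak": 800, "sustained": 600, "burst": 32})  # over capacity
```

Under a fixed contract, this decision is made once at set-up; the VBR+ extension instead lets a connection present a revised UPC to the same check during its lifetime.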


When a user changes QoS parameters, the media service (MS) receives a new service contract and derives appropriate subcontracts for the video source (VS) and traffic service (TS) using rule-based mappings. The contract with the VS is modified by mapping the video detail to a new video resolution and compression quantizer through a table look-up. The table is empirically obtained and depends on the video compression used. This new service contract may require a different bit-rate, depending on the selected resolution, frame rate and compression quantizer. An appropriate minimum required bit-rate is estimated by the MS and specified in the TS contract. Then, the TS estimates and renegotiates the dynamic UPC required to maintain the desired quality. The desired parameters are specified in the renegotiated network service contract.

Fig. 6. Service contracts and parameters for VBR+.
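The rule-based mapping the MS performs can be sketched as a look-up plus a bit-rate estimate. The table entries and the linear frame-rate scaling are invented placeholders, not the paper's empirically measured values.

```python
# Hypothetical empirical table: detail level -> (resolution, quantizer,
# estimated minimum bit-rate in kbps at 30 fps).
DETAIL_TABLE = {
    "low":    ((160, 120), 20, 150),
    "medium": ((320, 240), 12, 500),
    "high":   ((640, 480), 6, 2000),
}


def derive_subcontracts(detail, frame_rate):
    """MS rule-based mapping: one consumer contract -> VS and TS subcontracts."""
    resolution, quantizer, base_kbps = DETAIL_TABLE[detail]
    vs_contract = {"resolution": resolution, "quantizer": quantizer,
                   "frame_rate": frame_rate}
    # Scale the 30 fps estimate linearly with frame rate (an assumption;
    # real codecs are not exactly linear in frame rate).
    ts_contract = {"min_rate_kbps": base_kbps * frame_rate // 30}
    return vs_contract, ts_contract


vs, ts = derive_subcontracts("medium", frame_rate=15)
```

The TS would then take `min_rate_kbps` as the floor when estimating and renegotiating the dynamic UPC with the network.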

The nonstationary nature of multimedia traffic also results in renegotiation of the network service contract. Video is in general the largest component of multimedia traffic, and its bandwidth requirements vary significantly depending on scene activity and compression scheme. This type of renegotiation is called source-initiated renegotiation. The TS handles source-initiated renegotiations by monitoring the level of the buffer allocated for the connection at the server. The TS specifies water-mark levels in its service contract with the buffer; when these water-marks are crossed, the TS renegotiates a new UPC with the network. The VBR+ service also allows the network to reduce the allocated traffic parameters in the network service contract if congestion occurs.

While renegotiations are in progress, the MS prevents the buffer from overflowing by adjusting the VS contract parameters. The algorithm used to compute appropriate UPC parameters for a given target video quality, and its performance evaluation, is presented in Ref. [13].

For VBR+ connections, the traditional statistical bandwidth allocation model is modified so that no long-term traffic model has to be assumed at connection set-up. Since a multimedia traffic profile is not known at connection set-up, conservative models are generally assumed, leading to under-utilization. Higher utilization is possible if, instead of specifying QoS in statistical terms, we use a higher-level description of QoS, called soft-QoS, in the network service contract [14].

The need for soft-QoS stems from the objective of balancing network utilization and application-level QoS in distributed multimedia systems. Most multimedia applications exhibit a nonlinear bit-rate to quality response. For example, video bit-rate scaling has a different impact on perceptual quality depending on the application. While soft applications (such as teleconferencing or multimedia-on-demand browsing) can tolerate relatively large reductions in bit-rate, hard applications (such as video-on-demand or medical applications) cannot tolerate much bit-rate scaling without significantly degrading the application-level QoS. This nonlinear response can be represented in an application-dependent satisfaction profile [15].

By using the applications' satisfaction profiles, the network can implement a flexible connection admission and bandwidth allocation mechanism to provide soft-QoS. When congestion occurs, bandwidth is reallocated, distributing the available capacity using the individual satisfaction profiles of contending connections [14].

5. Adaptive scheduling with task cooperation

The same concept of negotiated QoS, which was successfully applied to network bandwidth allocation, can also be used for scheduling tasks in a multimedia system. We will show how this concept naturally fits into the operating system domain.

Multimedia processing can be considered a sequence of subtasks which must be completed within deadlines. These deadlines are often separated by a constant time interval which maintains the end-user's perception of continuity. However, the tolerance of human perception to a small degree of variation softens subtask deadlines.

Traditional approaches to scheduling do not address these requirements appropriately. Most schedulers in commonly used operating systems today are priority based. Real-time behavior can be achieved by having the operating system pre-empt a task whenever a higher priority task is ready to run. This can be an inefficient way of scheduling periodic tasks, as has been shown in Ref. [28]. In addition, we believe that a priority value is not the right abstraction for a multimedia task to negotiate service with a scheduler.

Hard real-time schedulers guarantee deadlines, but do not provide the flexibility to support a changing real-time schedule for interactive use. The newer proportional-share schedulers [16,17] guarantee proportional allocation of processor resources, but do not guarantee deadlines.

Other researchers agree on the unique needs of media systems. Most of the solutions attempt to satisfy the scheduling requirements of all classes of applications [18-20], with no active participation of tasks in scheduling decisions. Although no architectural solution is presented in Ref. [21], the necessity of a feedback system is acknowledged, wherein a media task cooperatively adapts its processor requirement. Scheduling in the Rialto kernel [22] and in the SMART scheduler from Stanford [23] emphasizes limited task participation. Both provide a processor reservation API, but neither considers the unique characteristics of media tasks in their solution. The flexibility of media tasks is also not taken advantage of in adaptive rate-controlled schedulers [24], although feedback for task adaptation is provided.

5.1. Architecture for cooperative task scheduling

In our architecture, multimedia applications are composed of computing entities called clients, and an executive. A client registers itself with the executive and provides it with a profile of its execution characteristics in the form of a contract, such as the expected duration of an invocation and the required frequency of service (Fig. 7). The executive schedules each client for execution according to its profile, trying to meet every deadline.

If the load level of the CPU or contention among clients prevents the executive from reaching this goal, it sends feedback to delayed clients in order to give them a chance to adapt to the situation. This leads to a cooperative, adaptive scheduling algorithm where clients demand a certain quality of service, but also reduce those requirements when necessary. We are therefore able to define task-scheduler interaction in terms of service contracts, as we did for the other components in our QoS-aware multimedia system.

We can further extend this architecture by allowing the executive to preempt a client that has not yielded control in a timely fashion. This extends the range of operation in which the system behaves like a hard real-time system, and at the same time ensures that a malfunctioning client cannot bring down the entire system.

This cooperative approach to scheduling plays into the strength of modern hardware architectures that make heavy use of deep pipelining and multiple execution units within the central processing unit. Interrupts and the resulting context switches become increasingly expensive in such high-performance architectures.
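The client-executive contract and feedback loop can be sketched as follows; the class and field names are our own invention, not the prototype's actual API, and the adaptation rule (lengthening the service interval by 25%) is an illustrative choice:

```cpp
#include <cassert>
#include <vector>

// Hypothetical contract; the prototype's contract carries more fields.
struct Contract {
    double interval_ms;   // required frequency of service
    double duration_ms;   // expected duration of one invocation
};

struct Client {
    Contract contract;
    double   delay_ms = 0;  // feedback from the executive

    // Client-side adaptation: on feedback, request less frequent
    // service (e.g. accept a lower frame rate).
    void on_feedback(double delay) {
        delay_ms = delay;
        if (delay > 0) contract.interval_ms *= 1.25;
    }
};

// Executive-side feasibility test: the summed utilization
// (duration / interval) must not exceed the CPU capacity.
double total_utilization(const std::vector<Client>& cs) {
    double u = 0;
    for (const auto& c : cs) u += c.contract.duration_ms / c.contract.interval_ms;
    return u;
}

// One feedback round: if the client set is infeasible, tell each
// client how far behind it is expected to fall, and let it adapt.
void feedback_round(std::vector<Client>& cs) {
    double u = total_utilization(cs);
    if (u <= 1.0) return;  // every deadline can be met
    for (auto& c : cs)
        c.on_feedback((u - 1.0) * c.contract.interval_ms);
}
```

A single round of feedback can already bring an overloaded client set back into a feasible schedule, without the executive ever dictating priorities.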

Fig. 7. Model for adaptive and cooperative task scheduling.

6. System architecture

The introduction of QoS at all levels of the system architecture should aid resource allocation without impeding efficient data transfer and processing, which is essential in multimedia systems. Towards this end, we can identify two distinct domains in multimedia systems: the flow domain, in which performance is the prime consideration, and the control domain, where resource allocation and control are important. The hierarchical structuring of services described in Section 2 is our architecture for organizing entities in the control domain.

The service hierarchy in a multimedia system can be visualized as a tree structure, with higher-order services higher up in the tree. The leaves in such a service hierarchy are associated with primitive resources like the CPU, network and buffers, and correspond to the flow domain described earlier. Efficient movement of data among these leaf entities is necessary. Since performance is the primary concern, it is clearly necessary to do this without involving higher levels of the hierarchy. We arrive at the architecture shown in Fig. 8, where media data flows horizontally along the leaves of the service hierarchy, while control information associated with resource allocation flows vertically between the various service entities.

Another important consideration when building a system is to allow flexible customization while providing generic components. To address this issue, we chose a split design, where generic components are implemented as C++ modules which can be created and manipulated from an engine for an interpreted language. This provides flexibility and allows rapid prototyping of applications. Typically, all the components of the flow domain are implemented as C++ modules, while most of the entities in the control domain are implemented in the interpreted language. However, this distinction is loose, and is driven primarily by the trade-off between flexibility and efficiency. Consequently, a generic control entity may be implemented as a C++ module for efficiency reasons.


The flow domain in our system comprises a set of media processing modules which can easily be connected through wires into distributed media nets [25]. These modules can be categorized into three groups:

Sources—modules creating media streams, such as media databases, or capturing devices, which include cameras and microphones.

Sinks—modules terminating media streams, such as displays and audio devices.

Filters—modules which process media streams. They either alter the stream, such as compressing it, or they extract information from the stream, such as recognizing a face.

A novel aspect of our system is that modules are completely passive. The task of moving data between connected modules, or scheduling their execution, is facilitated by a wire object. The wire has extensive functionality to support rate-based execution and interflow synchronization. This separation of functionality allows the implementor of a new module to concentrate on the implementation of the operations he wants to perform on the data flowing through.
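The passive-module/wire split described above might look roughly like this; the interfaces and names are a sketch of our own, not the system's actual C++ classes:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-in for a unit of media data.
using Frame = std::string;

// Modules are completely passive: they only transform data when invoked.
struct Module {
    virtual ~Module() = default;
    virtual Frame process(Frame in) = 0;
};

// A Filter that alters the stream (here: tags it as compressed).
struct Compressor : Module {
    Frame process(Frame in) override { return "compressed(" + in + ")"; }
};

// The wire owns data movement and scheduling between connected modules,
// so a module implementor only writes the per-frame operation above.
struct Wire {
    std::vector<Module*> chain;  // connected modules, upstream first
    std::vector<Frame> out;      // frames that reached the downstream end

    void tick(Frame f) {         // one rate-based scheduling step
        for (Module* m : chain) f = m->process(f);
        out.push_back(f);
    }
};
```

Because scheduling lives in the wire, rate-based execution and interflow synchronization can be added there once, rather than in every module.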

The control domain consists of a global object space provided by language engines on all physical devices connected through the network. The global object space allows the same messaging mechanism between objects independent of the engine they currently reside in. Consequently, the location of an object does not affect the correctness of an application. Placement of objects becomes mainly an issue of maximizing resource utilization. As this often depends on the run-time environment, we believe that it should be cleanly separated from the implementation of the application. We are currently experimenting with various resource managers which observe applications and transparently migrate objects between engines to optimize resource consumption.

The link between the flow and control domains is accomplished by resource objects in the control domain which represent the control interface of modules in the flow domain. Messages sent to such an object are channeled through the language engine to the proper module. A code generator automates the creation of resource objects for various language engines.

In the simple video-on-demand system shown in Fig. 8, the lower part illustrates media modules in the flow domain. The Track module reads a video flow from the server disks. It is connected via a wire (triangular-shaped object) to a FlowCtl module which provides the necessary flow control for the stream. This first wire also ensures a constant frame-rate in conjunction with the scheduler (not shown). The two Port modules provide the transparent flow of data from the server to the terminal. There, the remaining modules decode the sequence of compressed video frames and display them in the appropriate place on the monitor. The last wire resynchronizes the media flow according to time stamps inserted by the first wire. It also provides the necessary hooks for an additional interflow synchronizer module to ensure lip synchronization with a parallel audio flow.

Each resource object (the dome-shaped symbols in the figure) is connected by a private channel, through the language engine, with its associated media module or wire. This allows the resource object to control parameters in the module, as well as to receive events from it. All other objects (the circle-shaped objects in the figure) can interact with a media module by sending messages to the respective resource object. For instance, when a wire resource object receives a change-in-rate message, it passes this request to the corresponding wire module. The wire module, in turn, renegotiates an appropriate call-back schedule with the executive.

7. Implementation

Our prototype system implements a media-on-demand service with scalable QoS control. The prototype allows clients to dynamically change their QoS contracts and receive feedback on QoS status, network usage and cost.

Fig. 8. Architecture.


Fig. 9 shows the hardware configuration and software components of the prototype system.

The terminal hosts the multimedia browser. Compound multimedia documents represent service providers that offer content over the network. The user can pick up an item from the compound document and interact with it using active tools. The compound documents and the active tools for manipulating the multimedia content are implemented on top of the CockpitView user interface framework.

The VBR+ service described in Section 4 is implemented at the server and the ATM network. The software modules executing on the server consist of a QoS controller and a media server. The QoS controller implements the functionality of the traffic service and media service modules described in Section 4. The media server provides the actual video bit-stream.

The ATM driver has been extended to allow shaping parameters to be dynamically changed. The soft call admission controller (soft-CAC), implemented at the switch controllers, is responsible for allocating the capacity of each output port of the switch.

Although we utilize ATM as the underlying network technology, our framework allows us to effectively change network transport options. Consequently, we are able to experiment with various combinations including raw ATM, TCP/IP over ATM, UDP/IP over ATM, and even IP over 100 Mbps Ethernet. Our ATM driver has also been suitably modified so that the NIC shapers can be used to manage IP flows. For IP over Ethernet, we use a software shaper in the absence of network shaper support.

Both the multimedia browser and the media server are implemented as collections of objects written in C++. These objects register themselves with the executive to be called back whenever an event for them arrives. Events can be either the elapsing of a predefined time span or a file descriptor becoming ready for reading. The executive maintains a list of all registered objects and schedules each of them for execution according to an algorithm loosely based on earliest deadline first (EDF). The scheduling requirements of objects are expressed in the form of a contract with the executive.

Currently, both the multimedia browser and the media server are single UNIX processes. The executive is therefore in the same address space as the objects being scheduled, allowing the use of low-overhead function calls for dispatching control [26].
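A minimal sketch of the executive's EDF-style pick of the next object to dispatch, assuming each registered object exposes its nearest deadline (names are ours, not the prototype's):

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Hypothetical registration record; the prototype's contract
// carries more fields than just a deadline.
struct Registered {
    int    id;
    double next_deadline_ms;  // when this object must next be serviced
};

// Earliest deadline first: return the index of the object whose
// deadline is nearest, or -1 if nothing is registered.
int pick_next(const std::vector<Registered>& objs) {
    int best = -1;
    double best_dl = std::numeric_limits<double>::infinity();
    for (int i = 0; i < (int)objs.size(); ++i) {
        if (objs[i].next_deadline_ms < best_dl) {
            best_dl = objs[i].next_deadline_ms;
            best = i;
        }
    }
    return best;
}
```

Since the executive and the objects share one address space, dispatching the picked object is an ordinary virtual or function call rather than a context switch.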

The CockpitView user interface library is implemented in C++, relying on the X window system for low-level graphics operations and event handling. In addition, the current implementation uses the XIL library from Sun Microsystems for image scaling and decompression of JPEG (or MPEG-1) encoded video in order to achieve the desired performance.

The hardware testbed for this prototype currently consists of a 170 MHz SUN UltraSparc-I, a dual-CPU 170 MHz UltraSparc-II and a 180 MHz Pentium Pro-based PC (Fig. 9). Each machine is connected to an NEC ATOMIS Model 5 switch over an OC-3 155 Mbps ATM link using a Zeitnet ATM NIC. The UltraSparc-II, which has Creator three-dimensional graphics capability, hosts the media client. The UltraSparc-I hosts the video server, while the PC is used as the switch controller.

The current prototype uses a dedicated control channel between the QoS controller and the soft-QoS CAC to renegotiate the network contract. The switch controller processes the renegotiation request and allocates a new bandwidth and traffic descriptor to the connection. The QoS controller receives the new traffic descriptor and changes the NIC's shaper parameters via the extensions provided by our ATM driver.

Fig. 9. Testbed and software component interaction.

7.1. Component interaction

All interactions between the various components of our prototype application are in the form of service contracts, as described in Section 2.

The multimedia browser interacts with the QoS controller through contracts that allow it to dynamically change the audio and video quality. In particular, the prototype implementation allows the browser to control the desired frame rate, resolution and coding quality for the video stream. The browser also sends feedback on the status of the connection to the QoS controller. This feedback is based on the location of the display area on the data landscape (priority), and on what possible action the user is going to take next. For example, a video clip attached to a multimedia document in the form of a stamp-sized icon indicates that no immediate high-bandwidth streaming of video is likely. With the video clip detached from the document, the browser sends a standby message to the QoS controller. This information is used by the QoS controller to partially release unutilized, reserved bandwidth.

While a renegotiation is in progress, the QoS controller uses source rate-control to maintain the driver's buffer level. The rate-control dynamically changes the video quality by first increasing the compression level of the stream while keeping the frame-rate and resolution fixed at the levels specified in the service contract. The frame-rate, and eventually the resolution, can be reduced if additional source rate control is required beyond the range achievable with the compression level alone.
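The degradation order described above (compression level first, then frame-rate, then resolution) can be sketched as follows; the concrete step sizes and limits are illustrative assumptions on our part, not the prototype's actual values:

```cpp
#include <cassert>

// Hypothetical stream settings; the limits below are illustrative.
struct StreamQuality {
    int  compression;  // 0 (best quality) .. 4 (highest compression)
    int  fps;          // frames per second
    bool full_res;     // 640x480 if true, 320x240 otherwise
};

// Apply one degradation step in the order used by the rate controller:
// raise compression first, then lower the frame-rate, then drop the
// resolution. Returns false when no further reduction is possible.
bool degrade_one_step(StreamQuality& q) {
    if (q.compression < 4) { ++q.compression;     return true; }
    if (q.fps > 15)        { q.fps -= 5;          return true; }
    if (q.full_res)        { q.full_res = false;  return true; }
    return false;
}
```

The ordering matters: raising the compression level degrades perceived quality least, so it is exhausted before the more visible frame-rate and resolution reductions are touched.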

The QoS controller also sends feedback on the status of the service contract to the browser. For example, the prototype sends the connection's bit-rate, its cost and service alerts. The connection's bit-rate is updated with each bandwidth renegotiation. The cost is derived from the current traffic descriptor. Service alerts are used to indicate compliance with the service contract.

The QoS controller sets the video frame-rate and compression level based on the client's service contract. These parameters, together with the desired movie and the destination, are specified in the media server's contract. The media server generates a video bit-stream that best matches the QoS controller's contract. The server can scale the bit-stream by selecting among multiple tracks with different quantizations and resolutions. Currently, the server uses 10 JPEG-encoded tracks per movie. The tracks are obtained by encoding the movie at five different quantization levels and two resolutions, 640 × 480 and 320 × 240. The server can dynamically switch tracks at video frame boundaries.

Within the multimedia browser and the media server, the contract between each object and the executive defines the required execution profile. It includes parameters set by the object, such as a service time, an activation interval, an estimated execution duration, a method for regular object activation, and an additional method for emergency situations. The executive provides feedback to the object through a delay parameter, which indicates deviations in service time from the contract, and a parameter which provides the actual execution time measured by the executive. Whereas the activation interval is easily set by the object, the execution duration of each invocation is hard to predict a priori. We therefore start with an estimate provided by the object and adjust the value based on measurements from actual activations of the object. For example, a video decoder object which consistently overruns its allotted time span can ask the service provider for video frames to be encoded differently, to make the decoding process simpler [27].

The QoS controller also interacts with the ATM driver using extensions to support QoS control and bandwidth renegotiation. Through these extensions, the QoS controller can set water-mark levels on the ATM driver's buffer. The driver reports the buffer state when these water-marks are crossed. The QoS controller computes a traffic descriptor for the video bit-stream using the current status of the media contract, the target service contract and the bit-stream's traffic statistics obtained from the media server. The QoS controller renegotiates with the network when the computed traffic descriptor differs significantly from the one currently negotiated with the network. As explained in Section 4, renegotiated parameters include the UPC values and the satisfaction profile.

8. Experience

With the system fully integrated, we observed that our media browser allowed the presentation of several multimedia documents, including video clips at different levels of quality.

The media browser, when running on the UltraSparc-II, was able to decode multiple video streams simultaneously in software. A typical session would have one video stream running in the front of the landscape, and two other videos further back. The implicit QoS control would typically keep the frame-rate near 30 fps on the video stream in front, and a frame-rate near 15 or 20 fps on the video streams at the back. The browser could also support two video streams in front, running at close to 30 fps.

Since all the processing of media streams took place within the same UNIX process, we were exercising our adaptive CPU scheduling in a realistic way. Each of the streams progressed with little perceptible jitter.

Pushing a video clip to the background of the landscape results in an implicit renegotiation between the browser and the service provider, with the browser releasing some of the resources taken by this video clip. Other objects can then reclaim resources previously given up due to CPU overload in the terminal or limitations in network bandwidth.

In the browser, some of the parameters contained in the contract with the QoS controller are also used in contracts with the executive. The frame rate and resolution are used by the CockpitView component to specify a contract with the executive for video frame decompression tasks. We have shown in Ref. [27] that the processing time required for a frame of MPEG compressed video is determined by the frame type (I frames consume more CPU time than P and B frames) and the resolution. This enables the browser to specify fairly accurately the contracts between the decoding tasks for each video stream and the executive. If the resulting schedule exceeds the available compute resource limits, indicated by an increase in the values of delay and measured duration (Fig. 9), the media browser can alter the contract with the QoS controller in order to avoid missed deadlines. Possible candidates for QoS parameter changes are, in order of perceived quality degradation: MPEG frame type change, quantization, frame rate and resolution.

Furthermore, the CockpitView user interface library changes the interval for upcalls from the executive needed for mouse event handling based on the state of operation. While the user is dragging an object on the screen, the service interval parameter in the contract with the executive is set to 100 ms. This ensures that the user interface can repaint the moving object often enough for smooth dragging. When no dragging operation is in progress, the interval is set to 400 ms, which is sufficient to provide adequate responsiveness for object pickup operations and other tool manipulations.

The user adjusts QoS parameter values with sliders on the control cube of the ServiceMeter; these values may then be adjusted by the QoS controller on the server side through feedback sent to the client. This form of force feedback allows the user to quickly explore the useful operating range of a particular media object, in terms of quality of service, during the playback of the media content.

Figs. 10 and 11 show the operation of the QoS controller for browser-initiated and server-initiated renegotiations. We have implemented a Java applet to monitor the performance of the QoS controller. The window labeled Server Status is used to select the connection to be monitored. The four graphs show the performance of the selected connection in real time. The performance metrics displayed are: the input rate to the ATM driver's buffer, the output rate traffic descriptor used at the NIC's shaper, the current ATM driver state, and the video track used by the source rate controller. The input rate is displayed as a maximum and mean rate in bits per second. The output rate is given in terms of peak and sustained rate, also in bits per second. The driver state is represented using the following convention: one for underflow, two for normal, three and four for overflow. Finally, the 10 video tracks available at the media server are represented on a scale from 0 to 9; the lower the track number, the better the video detail.

Fig. 10 shows a snap-shot of connection 1, which captures an example of a browser-initiated QoS renegotiation. The video track window shows that the browser renegotiates its service contract by increasing the video quality (from track seven to five). This generates an increase in input rate from about 0.5 Mbps to about 2.0 Mbps. When the input rate increases above the currently allocated output rate, the driver state goes into overflow (level three). At that time, the QoS controller momentarily adjusts the video track (from five to six), bringing the driver state back to normal (level two) while network renegotiation takes place. When the soft-CAC grants the bandwidth increase, the output rate is allowed to change from about 1 to 2 Mbps, again tracking the input rate. This increase allows the video track to be set to the desired level of five.

Fig. 11 shows a snap-shot of connection 2. This is an example of a server-initiated renegotiation triggered by a significant bit-rate change in the compressed bit-stream. The input rate increases from about 2 to 4 Mbps while the video track is fixed at the desired level of five. This bit-rate increase causes the driver state to go from underflow to overflow. The network renegotiation then allows the output rate to increase, and the driver state goes back to normal.

The prototype can also be used to provide application-level QoS on IP networks, since the network service contract is independent of the specific network layer. As mentioned before, the QoS controller renegotiates the traffic descriptor and the satisfaction profile. Thus, IP networks could map the traffic descriptor to a specific bandwidth reservation and the satisfaction profile to a form of traffic priority. We have tested the prototype implementation using IP on an Ethernet network. In this scenario, the soft-CAC is set to dynamically allocate the available Ethernet capacity to each connection using its satisfaction profile. The ATM driver that controls the NIC's cell-level shaper is replaced by a packet-level shaper implemented in software above the transport level. The shaper spaces the inter-packet departure times of video packets to match the traffic descriptor. When congestion occurs, we have observed that the soft-CAC allocates the available bandwidth according to the connections' service contracts and satisfaction profiles. Our QoS framework therefore allows the browser application to maintain the same overall functionality and control independently of the underlying network.


Fig. 10. Performance of the QoS controller on a browser-initiated QoS renegotiation.

Some performance statistics were collected to evaluate the processing requirements introduced by bandwidth renegotiation within our proof-of-concept implementation. We observed that each video stream requests bandwidth renegotiations about once every 3 s, on average. A Java implementation of the soft-CAC was used on the 180 MHz Pentium Pro switch controller running Linux. In this set-up, it takes less than 5 ms for the soft-CAC to process a renegotiation request. Thus, within the current test-bed, at least 200 renegotiation requests per second could be processed by the switch controller.

Since there are 16 ports on the switch, the external PC controller can handle about 12 renegotiations per second per port. Thus, at an average of 3 s per renegotiation, the controller can support up to 36 VBR+ connections per port. Simulation results show that this number of video connections results in 85-90% utilization of the port capacity [14]. Considering that the renegotiation performance can be significantly improved using a just-in-time compiler, native methods or a multiprocessor controller, we conclude that the associated renegotiation processing is within the capabilities of current PC-based external switch controllers. However, these numbers consider only the time required to compute an allocation. They do not include the time required for the renegotiation request to reach the soft-CAC via signaling. We are currently implementing extensions to the ATM Forum signaling specification to support VBR+ renegotiation. Once implemented, we will assess in more detail the overall processing requirements and performance of bandwidth renegotiation.
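The capacity figures above follow from simple arithmetic (1000 ms ÷ 5 ms per request = 200 requests/s; 200 ÷ 16 ports ≈ 12 per port; 12 × 3 s ≈ 36 connections per port), reproduced here as a check:

```cpp
#include <cassert>

// Renegotiation capacity of the switch controller, derived from the
// measured 5 ms processing time, the 16 switch ports, and the observed
// average of one renegotiation per connection every 3 s.
int requests_per_second(double proc_time_ms)         { return (int)(1000.0 / proc_time_ms); }
int per_port(int total_rps, int ports)               { return total_rps / ports; }
int connections_per_port(int port_rps, int period_s) { return port_rps * period_s; }
```

As the text notes, these are compute-time bounds only; signaling latency to reach the soft-CAC would reduce the achievable rate.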

9. Conclusions

This paper presented an architecture for adaptive QoS and its application to multimedia systems design. Based on our strong belief that future multimedia systems will be both distributed and heterogeneous, and on the experience gained from building several multimedia prototypes in the past, we advocate an architectural framework for introducing adaptive QoS into every layer of such systems. Quality of service is specified by contracts which are established between clients and service providers using a single, generic API. The recursive application of this mechanism establishes a service hierarchy which constitutes QoS for the system as a whole.

Fig. 11. Performance of the QoS controller on a server-initiated bandwidth renegotiation.

The experience with the implemented prototype allowed us to gain insight into, and evaluate the need for, contract renegotiation at all levels of the architecture. For example, at the user interface, we observed how limited screen real estate and the shifting focus during a typical user-machine interaction offer ample opportunities for media tasks to change their resource demands. At the network level, we observed how a dynamic network service contract can effectively support user-initiated and source-initiated renegotiations. A novel network service is used that allows soft bandwidth renegotiation based on media-specific satisfaction profiles. Finally, we showed that the same architectural framework can be applied to the processor as a compute resource.

During the course of experimenting with our prototype system, we observed the ease with which we could quickly build an application that can run under different environmental conditions. Our proposed architecture is generic and applicable to a variety of different resource domains. However, we have yet to fully test its power in building higher-level abstractions. As part of our future work, we are interested in exploring the various manifestations of abstract services.

References

[I] A.C. Aurrecoechea, A. Campbell, L. Hauw, A review of QoS archi-

tectures In Proceedings 4th IFIP International Workshop on Quality of

Service. Paris. France. March, 1996.

348 M. Ott et al./Computer Communications 21 (1998) 334-349

[2] R. Steinmetz, C. Engler, Human perception of media synchronization. Technical Report 43.9310, IBM European Networking Center, Heidelberg, 1993.

[3] R.T. Apteker et al., Distributed multimedia: user perception and dynamic QoS. In Proceedings IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, Workshop on High-Speed Networking and Multimedia Computing, 1994.

[4] A. Vogel, B. Kerherve, G. von Bochmann, J. Gecsei, Distributed multimedia and QoS: a survey, IEEE Multimedia 2 (2) (1995) 10-19.

[5] G. Michelitsch, Cockpitview: a user interface framework for future network terminals. In Proceedings CHI 1996, ACM, Vancouver, BC, Canada, 1996.

[6] G.G. Robertson, S.K. Card, J.D. Mackinlay, Information visualization using 3D interactive animation, Communications of the ACM 36 (4) (1993) 57-71.

[7] L. Staples, Representation in virtual space: visual convention in the graphical user interface. In Proceedings INTERCHI'93, ACM, 1993.

[8] R.B. Smith, The alternate reality kit: an animated environment for creating interactive simulations. In Proceedings 1986 IEEE Computer Society Workshop on Visual Languages, 1986.

[9] D. Reininger, D. Raychaudhuri, Bit-rate characteristics of a VBR MPEG video encoder for ATM networks. In Proceedings IEEE International Conference on Communications, ICC'93, Geneva, Switzerland, May 1993.

[10] D. Raychaudhuri, S. Biswas, D. Reininger, Bandwidth allocation for VBR video in wireless ATM links. In Proceedings IEEE International Conference on Communications, ICC'97, Montreal, Canada, June 1997.

[11] The ATM Forum, ATM User-Network Interface (UNI) Signalling Specification, ATM Forum/95-1434R11, February 1996.

[12] M. De Prycker, Asynchronous Transfer Mode: Solution for Broadband ISDN, 2nd edn, Ellis Horwood, Chichester, 1993.

[13] D. Reininger, D. Raychaudhuri, J. Hui, Dynamic bandwidth allocation for VBR video over ATM networks, IEEE Journal on Selected Areas in Communications, Special Issue on Video Delivery to the Home 14 (6) (1996) 1076-1086.

[14] D. Reininger, R. Izmailov, Soft quality-of-service for VBR+ video. In Proceedings International Workshop on Audio-Visual Services over Packet Networks, AVSPN'97, Aberdeen, Scotland, September 1997.

[15] S. Shenker, Fundamental design issues for the future Internet, IEEE Journal on Selected Areas in Communications 13 (7) (1995) 1176-1188.

[16] C.A. Waldspurger, W.E. Weihl, Lottery scheduling: flexible proportional-share resource management. In Proceedings First Symposium on Operating Systems Design and Implementation, November 1994.

[17] C.A. Waldspurger, W.E. Weihl, Stride scheduling: deterministic proportional-share resource management. Technical Report MIT/LCS/TM-528, MIT Laboratory for Computer Science, Cambridge, MA, 1995.

[18] C.W. Mercer, S. Savage, H. Tokuda, Processor capacity reserves: operating system support for multimedia applications. In Proceedings IEEE International Conference on Multimedia Computing and Systems, May 1994.

[19] B. Ford, S. Susarla, CPU inheritance scheduling. In Proceedings Second Symposium on Operating Systems Design and Implementation (OSDI '96), Seattle, WA, October 1996.

[20] P. Goyal, X. Guo, H.M. Vin, A hierarchical CPU scheduler for multimedia operating systems. In Proceedings Second Symposium on Operating Systems Design and Implementation (OSDI '96), Seattle, WA, October 1996.

[21] C.L. Compton, D.L. Tennenhouse, Collaborative load shedding for media-based applications. In Proceedings International Conference on Multimedia Computing and Systems, May 1994.

[22] M.B. Jones, J.S. Barrera III, A. Forin, P.J. Leach, D. Rosu, M. Rosu, An overview of the Rialto real-time architecture. In Proceedings Seventh ACM SIGOPS European Workshop, Connemara, Ireland, September 1996.

[23] J. Nieh, M.S. Lam, The design of SMART: a scheduler for multimedia applications. Technical Report CSL-TR-96-697, Computer Systems Laboratory, Stanford University, CA, June 1996.

[24] D.K.Y. Yau, S.S. Lam, Adaptive rate-controlled scheduling for multimedia applications. In Proceedings Fourth ACM International Multimedia Conference, MULTIMEDIA '96, November 1996.

[25] M. Ott, J. Hearn, Plug-and-play with wires. In Proceedings Tel Workshop 95, Toronto, Canada, July 1995.

[26] J. Ousterhout, Why threads are a bad idea (for most purposes). Invited talk at the USENIX Technical Conference, 1996.

[27] D. Raychaudhuri, D. Reininger, M. Ott, G. Welling, Multimedia processing and transport for the wireless personal terminal scenario. In Proceedings SPIE Visual Communications and Image Processing Conference, VCIP'95, Taipei, Taiwan, May 1995.

[28] R. Gopalakrishnan, G.M. Parulkar, Real-time upcalls: a mechanism to provide real-time processing guarantees. Technical Report WUCS-95-06, Washington University, September 1995.

Maximilian Ott is a Senior Research Staff Member in the System Architecture Department of NEC's C&C Research Laboratories. He is interested in the impact of ubiquitous computing and high-speed networks on our lives. However, he mainly works on the more concrete issues of adaptive and scalable distributed multimedia systems.

Georg Michelitsch is a research engineer with the Systems Architecture Department at the NEC C&C Research Laboratories in Princeton, USA. His research interests include user interface design, computer-supported cooperative work, and computer architectures for multimedia computing. Before joining NEC, he was a research scientist with the Tokyo Information Systems Research Laboratory of Matsushita Electric, Japan, where he was responsible for the design and implementation of a multimedia desktop conferencing system. Georg holds a Master of Science degree in computer science from the University of Technology in Vienna, Austria.


Daniel J. Reininger received the B.S.E.E. and M.S.E.E. degrees from the Illinois Institute of Technology (IIT), Chicago, in 1990 and 1991, respectively. From 1991 to 1994 he was with the David Sarnoff Research Center, Princeton, NJ, first as an Associate Member of the Technical Staff from 1991 to 1992, and as a Member of the Technical Staff from 1992 to 1994. At Sarnoff he worked on a variety of video communication topics including packet video, VBR on ATM, MPEG coding, HDTV and DBS. Since March 1994, he has been with NEC USA, C&C Research Laboratories, Princeton, NJ, first as Senior Research Associate from 1994 to 1997, and currently as a Research Staff Member in the System Architecture Group. He has authored approximately 25 technical papers and holds two US patents. He has recently completed the Ph.D. degree at Rutgers University, New Brunswick, NJ. Mr Reininger received IIT's Electrical Engineering Departmental Fellowship in 1990 and was the recipient of the David Sarnoff Research Center Outstanding Achievement Award in 1992.

Girish Welling is a Senior Research Associate at the C&C Research Laboratories at NEC in Princeton, NJ. He received a Bachelors degree in Computer Science and Engineering from the Indian Institute of Technology, Madras, India. He completed a Masters from Rutgers University, New Brunswick, NJ, where he is currently pursuing his Ph.D. His research interests include mobile computing, distributed object systems, and multimedia. Girish received the Rutgers University Graduate Excellence Fellowship in 1990 and 1991.