
Edge of the Cloud

Leveraging Cloudlets for Immersive Collaborative Applications

Mobile devices pervade our daily lives, with many people using a smartphone to access email, a tablet to read a book, or a laptop to work on the train. Due to recent advances in hardware technology, mobile devices now have not only more memory and processing power but also numerous sensors. In combination with developments in near-to-eye display technologies, such as Google Glass, these advances will lead to the emergence of mobile immersive applications, such as augmented reality.1 By combining virtual 3D objects with the real world, such applications will submerge the user in a new, mixed-reality world. These applications use complex algorithms for camera tracking, object recognition, and so on, requiring considerable computational resources.

However, despite increasing hardware capabilities, mobile devices remain resource poor compared to their desktop and server counterparts. Due to additional limitations concerning weight, size, battery autonomy, and heat dissipation, mobile devices can't compete with fixed hardware. These resource limitations pose considerable obstacles for realizing mobile immersive applications, which typically require heavy and real-time processing. Developers must cope with a high level of device heterogeneity, meaning they either develop for the lowest common denominator or tailor the application for each device by providing a specific configuration.

Running such resource-intensive applications on a mobile device requires offloading the application, or parts thereof, to resources in the network, also known as cyber foraging.2 The emergence of cloud computing has made processing power available in the network as a commodity.3 However, due to the high and variable latency to distant datacenters, the cloud isn't always suitable for code offload, especially for real-time immersive applications. Therefore, Mahadev Satyanarayanan and his colleagues introduced "cloudlets," infrastructure deployed in the vicinity of the mobile user.4 (For more information on related work in cyber foraging, see the sidebar.)

Here, we exploit cloudlets in developing a component-based framework that addresses two major challenges that current cyber foraging systems face in supporting immersive applications.

To enable immersive applications on mobile devices, the authors propose a component-based cyber foraging framework that optimizes application-specific metrics by not only offloading but also configuring application components at runtime. It also enables collaborative scenarios by sharing components between multiple devices.

Tim Verbelen, Pieter Simoens, Filip De Turck, and Bart Dhoedt, Ghent University


Cyber Foraging Challenges

The first challenge stems from the fact that most cyber foraging systems focus on optimizing generic metrics, such as energy usage5,6 or execution time.7,8 Immersive applications, however, often must execute complex image-processing algorithms in a timely manner, so it makes more sense to consider application-specific metrics, such as the frame rate, image resolution, or the robustness of camera tracking. From a developer's perspective, the goal then is to achieve an acceptable frame rate at the highest possible quality for each device, where "quality" is related to the image resolution and tracking configuration.

To address this issue, our proposed cyber foraging framework not only decides on offloading but also considers application-specific configuration parameters to enhance application quality. We adopt a component-based approach, where application developers define software components with their dependencies, configuration parameters (image resolution and tracking parameters), and constraints (the minimal required frame rate). These components are then distributed and configured at runtime to optimize the user experience and meet all performance constraints.

A second challenge is that existing cyber foraging solutions only decide on offloading from one mobile device to one or more servers. However, when multiple devices connect to one or more servers, a globally optimal solution is preferable to having all devices compete for the available network or compute resources. Moreover, in the case of immersive applications, devices in the same neighborhood will often process the same or similar data. For example, regarding object recognition, colocated devices will recognize the same objects and could benefit from one another. Sharing offloaded components between multiple devices enables collaborative immersive application scenarios.

Use Case: Parallel Tracking and Mapping

With the breakthrough of head-mounted displays, a new type of immersive application becomes possible. In museums, in addition to listening to an explanation through headphones, visitors can be guided using visual annotations that bring scenes to life and immerse them in ancient history. In stores, an immersive shopping application can guide customers, annotating products on their shopping list. On playgrounds, children can play with virtual toy cars or spaceships on a table, colliding with real obstacles.

All of these applications exemplify our use-case scenario, in which the applications must

  • know the position of the glasses with respect to the world to correctly position the overlay;

• build a correct model of the world so the overlay can “blend in” without becoming too intrusive; and

• perform reliable object recognition to identify which objects to annotate.

When multiple users run such applications in the same environment, they can clearly benefit from collaboration—they all require a model of the same environment and will likely recognize the same objects.

Current state-of-the-art algorithms providing these features rely on visual information. The Parallel Tracking and Mapping (PTAM) algorithm by Georg Klein and David Murray tracks the position of a camera with respect to the world based on detected visual features.9 In parallel with the tracking, a model of the world's visual features is constructed in a mapping phase. Robert Castle and David Murray combined this approach with visual object recognition to recognize objects and localize them in the map.10 However, due to the complexity of these algorithms, current high-end mobile devices still fall short of executing them in a timely manner, especially when scaling to large environments in which many objects must be recognized.

Component-Based Cyber Foraging

One option for running resource-intensive immersive applications on a mobile device is to run the application on a remote server and use the device as a thin client. However, the device then must continuously send camera frames to the server and receive rendered frames, which would require considerable bandwidth and wouldn't scale to multiple devices in the same wireless network. Therefore, we adopt a component-based approach, splitting the application into several loosely coupled software components that can each be offloaded or configured to optimize the user experience.

We identified different components for an augmented reality application (see Figure 1). In addition to a component for fetching video frames from the camera hardware (VideoSource) and rendering an overlay on the screen (Renderer), there are three components, one for each main functionality. The Tracker component processes the camera frames and estimates the camera position with respect to the world based on a number of visual feature points. The more feature points we use, the more stable the tracking; using more feature points also makes tracking the camera more robust during sudden movements. In parallel, the Mapper creates a model of the world by identifying new feature points and estimating their position, which can then be used for tracking. The ObjectRecognizer tries to recognize known objects in the world and notifies the Renderer of their 3D position when found.
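To make this decomposition concrete, the sketch below expresses the components as plain Java service interfaces. The names TrackerService, VideoListener, and MapperService also appear in the annotated code of Figure 3 later in the article; all other names and every method signature here are assumptions added purely for illustration, not the authors' actual API.

    // Illustrative Java interfaces for the components of Figure 1.
    // TrackerService, VideoListener, and MapperService also appear in Figure 3;
    // the method signatures are assumed, not taken from the article.

    interface VideoListener {
        void onFrame(byte[] frameData, int width, int height);       // VideoSource pushes camera frames here
    }

    interface TrackerService {
        double[] getCameraPose();                                     // current camera position estimate
    }

    interface MapperService {
        void addObservations(byte[] frameData, double[] cameraPose);  // extend the world model with new feature points
        byte[] getMap();                                              // serialized feature map used for tracking
    }

    interface ObjectRecognizerService {
        void recognize(byte[] frameData, double[] cameraPose);        // asynchronously look for known objects
    }

    interface RendererService {
        void annotate(String objectId, double[] position3d);          // overlay a recognized object on the screen
    }

Figure 3 later shows the Tracker implementing both TrackerService and VideoListener, which is why both interfaces appear here.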

When executing such an augmented reality application, optimizing generic metrics such as energy or execution time isn't a silver bullet. It's more important to optimize some application-specific metrics. For example, to achieve smooth camera tracking, the Tracker should process 15 to 20 frames per second. On the other hand, higher camera resolution or more tracked feature points will also enhance the user experience. Therefore, our framework lets developers specify application-performance constraints as well as configuration parameters that influence user quality. To meet the imposed constraints, the framework can not only offload application components but also configure their quality parameters.

In our use case, we defined three configurable parameters. Increasing the camera resolution or the number of features used to track the camera position increases quality, at the cost of more processing time. The images can also be fetched either as raw image data or in a compressed JPEG format. The latter format requires extra processing time for encoding and decoding the frames but can lower the required bandwidth when offloading.

By adopting a component-based approach, collaborative scenarios can easily be constructed by sharing application components. In the parallel tracking and mapping use case, for example, the Mapper component can be shared between multiple mobile devices in the same physical environment. This way, all devices receive the same world model to track the camera position. Because the model is updated and refined using camera images from multiple devices, it will often be more accurate than a model created by just one device.

Related Work in Cyber Foraging

Cyber foraging has been a research topic for more than a decade.1 Early systems, such as Spectra,2 provide an API that lets programmers identify methods that should be executed remotely. At runtime, the system selects the best option using a greedy heuristic, optimizing a utility function that considers the execution time, energy usage, and fidelity. Chroma follows a similar approach but uses a tactics-based decision engine, optimizing a user-specific utility function.3

Later systems often used a bytecode format running in an application virtual machine (VM) to enable more transparent code offloading. MAUI offloads methods in the Microsoft .Net framework, aiming to optimize energy usage.4 An integer linear programming (ILP) solver decides whether to offload a method call. Scavenger offloads Python methods, minimizing the execution time depending on the method call arguments using history-based profiles.5 Both systems require code annotations from the application developer to identify offloadable methods.

AIOLOS is a cyber foraging framework on Android that takes a component-based approach, replicating OSGi components on the remote server.6 At runtime, method calls are forwarded to either the local or the remote component instance using a profile built from monitoring information, similar to Scavenger. Programmers can develop their applications as a number of OSGi components or use code annotations to automate this process. Odessa uses the Sprout component framework, employing an incremental greedy strategy,7 trying to exploit parallelism to optimize the throughput and makespan.

CloneCloud operates at a lower level,8 migrating at thread granularity in the Dalvik virtual machine. The CloneCloud partitioner automatically identifies offloadable parts of the application in an offline stage, using static and dynamic code analysis and without the need for programmer annotations. At runtime, the partition minimizing the execution time is selected using an ILP solver. Comet also offloads threads in the Dalvik VM,9 but instead of rewriting the software offline, an online approach is presented using distributed shared memory (DSM). This lets threads migrate at runtime using a threshold-based scheduler, at the cost of more communication overhead for keeping the DSM synchronized.

Our framework uses an application-specific utility function, similar to Chroma,3 to optimize the application quality. We adopt a component-based approach, as with AIOLOS,6 which gives a good granularity to offload while keeping the monitoring overhead low. In contrast to existing systems (see Table A), our framework not only adapts the deployment of parts of the application but also autonomically adapts configuration parameters of the components to adhere to constraints given by the developer and the device context. First, a heuristic search algorithm allows fast optimization of both the deployment and configuration for all connected devices in the vicinity. Second, the framework takes into account collaborative components that can be shared among multiple users. This way, users can benefit from computations already done by others, thus reducing the overall computational load.

Table A. An overview of cyber foraging systems and their characteristics (based on data from Jason Flinn1).

System | Objective | Granularity | Decision algorithm | Programming model
Spectra2 | Execution time, energy, and fidelity | Method | Greedy heuristic | Use Spectra API
Chroma3 | Execution time and fidelity | Method | Tactics-based selection | Define tactics
MAUI4 | Energy | Method | Integer linear programming | Code annotations
Scavenger5 | Execution time | Method | History-based profile | Code annotations
AIOLOS6 | Execution time | OSGi component | History-based profile | Use OSGi or code annotations
Odessa7 | Makespan and throughput | Sprout component | Incremental greedy algorithm | Use Sprout framework
CloneCloud8 | Execution time or energy | Thread | Integer linear programming | None
Comet9 | Execution time and energy | Thread | Threshold-based scheduler | None

References

1. J. Flinn, Cyber Foraging: Bridging Mobile and Cloud Computing, Synthesis Lectures on Mobile and Pervasive Computing, Morgan & Claypool Publishers, 2012.

2. J. Flinn, S. Park, and M. Satyanarayanan, "Balancing Performance, Energy, and Quality in Pervasive Computing," Proc. 22nd Int'l Conf. Distributed Computing Systems (ICDCS 02), IEEE CS, 2002, pp. 217–226.

3. R.K. Balan et al., "Tactics-Based Remote Execution for Mobile Computing," Proc. 1st Int'l Conf. Mobile Systems, Applications, and Services (MobiSys 03), ACM, 2003, pp. 273–286.

4. E. Cuervo et al., "Maui: Making Smartphones Last Longer with Code Offload," Proc. 8th Int'l Conf. Mobile Systems, Applications, and Services (MobiSys 10), ACM, 2010, pp. 49–62.

5. M. Kristensen, "Scavenger: Transparent Development of Efficient Cyber Foraging Applications," Proc. 2010 IEEE Int'l Conf. Pervasive Computing and Communications (PerCom 10), IEEE, 2010, pp. 217–226.

6. T. Verbelen et al., "AIOLOS: Middleware for Improving Mobile Application Performance through Cyber Foraging," J. Systems and Software, vol. 85, no. 11, 2012, pp. 2629–2639.

7. M.-R. Ra et al., "Odessa: Enabling Interactive Perception Applications on Mobile Devices," Proc. 9th Int'l Conf. Mobile Systems, Applications, and Services (MobiSys 11), ACM, 2011, pp. 43–56.

8. B.-G. Chun et al., "Clonecloud: Elastic Execution between Mobile Device and Cloud," Proc. 6th Conf. Computer Systems (EuroSys 11), ACM, 2011, pp. 301–314.

9. M.S. Gordon et al., "Comet: Code Offload by Migrating Execution Transparently," Proc. 10th Usenix Conf. Operating Systems Design and Implementation (OSDI 12), Usenix, 2012, pp. 93–106.

An Autonomic Cloudlet Management Platform

To benefit from the effort of developing applications as a set of loosely coupled components, we need a platform that can manage and configure components at runtime and distribute them among available resources in the near vicinity (the cloudlet). We propose the hierarchical architecture depicted in Figure 2, which operates on three management levels: execution environment, node, and cloudlet.

Execution Environment

An execution environment can be thought of as a container in which components can be deployed. When a component is deployed, the execution environment manages the component life cycle and resolves dependencies with other components. Each component can expose configuration parameters that can be adapted by the execution environment at runtime.

The execution environment also monitors method calls between components, capturing the size of the arguments, the return value, and the execution time. This monitoring information is then sent to the node level, where the information is aggregated and used as input for decision making on the cloudlet level.
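The article doesn't show how this monitoring is implemented. As a minimal sketch, assuming components are exposed to one another through Java interfaces, an execution environment could wrap each service in a standard dynamic proxy that records the argument size, return-value size, and execution time of every call. The class below and its serialization-based size estimate are illustrative assumptions, not the platform's actual code.

    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;
    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical call monitor: wraps a component's service interface so that every
    // method call is timed and its argument/return sizes are estimated (sketch only).
    final class MonitoringInvocationHandler implements InvocationHandler {
        private final Object target;

        MonitoringInvocationHandler(Object target) {
            this.target = target;
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            long start = System.nanoTime();
            Object result = method.invoke(target, args);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            long argBytes = serializedSize(args);
            long returnBytes = serializedSize(result);
            // In the real platform this record would be forwarded to the node agent;
            // here we simply print it.
            System.out.printf("%s: %d ms, args %d bytes, return %d bytes%n",
                    method.getName(), elapsedMs, argBytes, returnBytes);
            return result;
        }

        private static long serializedSize(Object obj) {
            if (obj == null) return 0;
            try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                 ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
                out.flush();
                return bytes.size();
            } catch (Exception e) {
                return -1; // not serializable; size unknown
            }
        }

        @SuppressWarnings("unchecked")
        static <T> T wrap(Class<T> serviceInterface, T implementation) {
            return (T) Proxy.newProxyInstance(
                    serviceInterface.getClassLoader(),
                    new Class<?>[] { serviceInterface },
                    new MonitoringInvocationHandler(implementation));
        }
    }

Because all communication between components passes through the execution environment, wrapping services this way keeps the monitoring transparent to the application code.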


Node

Each device runs a node agent that manages the device as a whole. The node agent aggregates the monitoring information from all the execution environments running on the device and sends it to the cloudlet agent. In addition to the information about the execution environments, the node agent provides device-specific information, such as the device capabilities (processor speed, number of processor cores, and so on).

Cloudlet

Among all devices in the network, one is chosen to host the cloudlet agent. The cloudlet agent receives monitoring information about all the nodes and has a global overview of all execution environments, their components, and how these components interact. This information is then analyzed and used as input for a global optimization planning algorithm that calculates both the deployment and configuration for all components.11 This solution is then enforced, and new monitoring information is used again as feedback. In this way, we realize an autonomic feedback loop that can adapt to changing context while still striving for a stable deployment to avoid costly component migrations.

The device that runs the cloudlet agent can be predefined (if a fixed server is available in the network) or determined at runtime as part of the discovery process. When first initialized, each device runs both a node agent and a cloudlet agent. When another device running a cloudlet agent is discovered, the more suitable of the two devices (the one with the most CPU power) is chosen as the host for the cloudlet agent, and the other joins as a node. This way, an ad hoc cloudlet can be constructed, in which the strongest device runs the cloudlet agent and performs global optimization for all others.
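A minimal sketch of that pairwise decision is shown below, assuming each agent advertises a simple capability score based on core count and clock speed; the CloudletCandidate class and its scoring formula are assumptions, not the article's actual discovery protocol.

    // Hypothetical pairwise election between two discovered cloudlet agents:
    // the device with the higher capability score keeps the cloudlet-agent role,
    // the other demotes itself to a plain node agent.
    final class CloudletCandidate {
        final String deviceId;
        final int cpuCores;
        final double cpuGhz;

        CloudletCandidate(String deviceId, int cpuCores, double cpuGhz) {
            this.deviceId = deviceId;
            this.cpuCores = cpuCores;
            this.cpuGhz = cpuGhz;
        }

        // Assumed scoring: raw compute capacity; a real implementation could also
        // consider memory, battery state, or whether the device is a fixed server.
        double score() {
            return cpuCores * cpuGhz;
        }

        static CloudletCandidate electCloudletAgent(CloudletCandidate a, CloudletCandidate b) {
            return a.score() >= b.score() ? a : b;
        }

        public static void main(String[] args) {
            CloudletCandidate phone = new CloudletCandidate("galaxy-s2", 2, 1.2);
            CloudletCandidate laptop = new CloudletCandidate("laptop", 2, 2.26);
            // The laptop wins and hosts the cloudlet agent; the phone joins as a node.
            System.out.println("Cloudlet agent host: " + electCloudletAgent(phone, laptop).deviceId);
        }
    }

With the two devices from the evaluation (a dual-core 1.2-GHz smartphone and a 2.26-GHz laptop), the laptop would host the cloudlet agent.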

The cloudlet agent also identifies collaborative components that can be shared among multiple users. By offloading collaborative components to the most resourceful node available in the network and redirecting all users' calls to this node, the cloudlet agent allows users to not only save computational resources but also to gain information from the input of others. As Figure 2 shows, in our use-case example, the Mapper and ObjectRecognizer components are shared between two devices, so each user receives updates to the map provided by the other, and objects only have to be recognized once in the scene to be annotated on each device.

Dynamic Configuration and Deployment

To illustrate the effectiveness of dynamic configuration and distribution of application components, we built a prototype framework. Our implementation builds on OSGi, a standard for creating modular applications in Java that we also used in previous cyber foraging implementations.8 To allow easy application development targeting our platform, we use a programming model based on annotations. By adding annotations to the source code, the developer can define software components, their dependencies, and configuration parameters to be optimized, or he or she can define application-specific constraints. Figure 3 shows the annotated source code of the Tracker class.

The developer defines the Tracker component with a dependency on the Mapper component (the @Reference annotation in Figure 3) and with a configurable parameter that specifies the number of features to track (the @Property annotation). To impose a minimum number of frames processed per second, the Tracker must process a frame within 60 ms, as stated by the @Constraint annotation. These annotations are processed at build time, generating OSGi components8 as well as XML descriptors defining the configurable properties and constraints used by the framework at runtime.

Figure 1. Identifying loosely coupled software components in an example immersive application. In addition to a component for fetching video frames from the camera hardware (VideoSource) and for rendering an overlay on the screen (Renderer), there are three components (Tracker, Mapper, and ObjectRecognizer), one for each main functionality.

Our cloudlet agent implements an autonomic feedback loop that periodically gathers monitoring information from all nodes and execution environments. It uses this information to calculate a new deployment and configuration. To predict the change in behavior when changing configuration parameters, the cloudlet agent is bootstrapped with monitoring information gathered in an offline profiling stage.
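Structurally, this loop could be organized as a periodic monitor-plan-enforce cycle, as in the sketch below. The interface names (MonitoringSource, DeploymentPlanner, PlanExecutor) are assumed for illustration, and the five-second period simply matches the monitoring interval reported later in the evaluation.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch of the autonomic feedback loop in the cloudlet agent (assumed structure).
    final class CloudletAgentLoop {

        interface MonitoringSnapshot {}
        interface Plan {}

        interface MonitoringSource {        // aggregated reports from the node agents
            MonitoringSnapshot gather();
        }
        interface DeploymentPlanner {       // e.g., the greedy search described below
            Plan plan(MonitoringSnapshot snapshot);
        }
        interface PlanExecutor {            // redeploys or reconfigures components
            void enforce(Plan plan);
        }

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        void start(MonitoringSource monitor, DeploymentPlanner planner, PlanExecutor executor) {
            // Evaluate the loop every five seconds, matching the monitoring period.
            scheduler.scheduleAtFixedRate(() -> {
                MonitoringSnapshot snapshot = monitor.gather();
                Plan plan = planner.plan(snapshot);
                executor.enforce(plan);
            }, 5, 5, TimeUnit.SECONDS);
        }
    }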

Figure 2. Overview of the cloudlet platform. Execution environments are containers for application components. All communication between components passes through the execution environments, enabling transparent monitoring and offloading. The node agents aggregate monitoring information for each device. The cloudlet agent analyzes the monitoring information and decides whether the deployment or configuration must be adapted.

    package ptam.tracker;

    @Component
    public class Tracker implements TrackerService, VideoListener {

        @Property(values = {200, 500, 1000, 1500, 2000})
        private int featurePoints = 1000;

        @Reference(policy = dynamic)
        private MapperService mapper;

        @Constraint(maxTime = 60)
        public void processFrame(byte[] data) {
            // process frame...
        }
    }

Figure 3. A sample annotated Java source file defining the Tracker component, a configurable property (the number of tracked feature points), and a timing constraint (frames must be processed in 60 ms).

If there are N migratable components, M available devices, and K configuration parameters (each with k_i possible parameter values), the total number of possible solutions becomes M^N × k_1 × … × k_K. Given that the solution space rapidly grows with the number of devices, components, and parameters, we propose a greedy search heuristic that calculates close-to-optimal solutions in general cases and has yielded the optimal solution for our use case.11
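The heuristic itself is detailed in reference 11; the sketch below only illustrates the general idea of such a greedy search: start from a safe candidate and repeatedly apply the single move (offloading one component or changing one parameter) that most improves estimated quality while keeping every developer-specified constraint satisfied. The Candidate interface and quality estimate are assumptions, not the published algorithm.

    import java.util.List;

    // Illustrative greedy (hill-climbing) search over deployment/configuration candidates.
    final class GreedyPlanner {

        interface Candidate {
            double estimatedQuality();       // e.g., resolution and tracked feature points
            boolean constraintsSatisfied();  // e.g., predicted frame time <= 60 ms
            List<Candidate> neighbours();    // candidates reachable by one move:
                                             // offload one component or change one parameter
        }

        static Candidate plan(Candidate start) {
            Candidate current = start;
            boolean improved = true;
            while (improved) {
                improved = false;
                Candidate best = current;
                for (Candidate next : current.neighbours()) {
                    // Only consider moves that keep every constraint satisfied.
                    if (next.constraintsSatisfied()
                            && next.estimatedQuality() > best.estimatedQuality()) {
                        best = next;
                    }
                }
                if (best != current) {
                    current = best;
                    improved = true;
                }
            }
            return current;  // close-to-optimal deployment and configuration
        }
    }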

Our framework is implemented in Java, so it runs on Android as it would on regular desktop or server machines. We built the application components as OSGi modules in Java and used native C libraries for the complex computer vision routines. The native libraries are included, compiled for both ARM and x86 architectures, to provide cross-platform components.

The developer provides three parameters to configure: the input size of the camera frames (640 × 480 or 320 × 240), the format of the camera frames (raw grayscale or JPEG), and the number of feature points to track in the frame (2,000, 1,500, 1,000, 500, or 200). A constraint limiting the time required to process a frame for tracking ensures a tracker frame rate between 15 and 20 frames per second. Maximizing quality is reflected as maximizing both the camera resolution and the number of tracked feature points.

We evaluate the framework on a Samsung Galaxy S2 Android device, equipped with a dual-core 1.2-GHz processor, and a laptop with an Intel Core 2 Duo clocked at 2.26 GHz. We connected the device and laptop using a virtual network over USB, which let us set the available bandwidth on the link to show how different configurations and deployments are chosen in different scenarios.

Every five seconds, the execution environment sends monitoring information to the cloudlet agent, which then decides whether an action is needed. Using this monitoring information, we calculated the number of processed frames per second, which is plotted in Figure 4.

The experiment starts with a bandwidth of 100 Mbps, which lets the device send the raw camera frames (with a resolution of 640 × 480) over the network and offload the Tracker, Mapper, and ObjectRecognizer to the laptop. After two minutes, the bandwidth lowers to 20 Mbps, resulting in a drop to five frames per second. The framework thus adapts the configuration of the VideoSource to change to the MJPEG format. Extra CPU cycles are then used to compress and decompress the video frames to JPEG, which lowers the required bandwidth. The tracking is still executed on the laptop, which keeps the number of tracked feature points (and thus the quality) high. When the bandwidth is further reduced to 5 Mbps, the frame rate drops again to five frames per second, and the Tracker component is moved back to the smartphone. Because of the limited processing power on the smartphone, only 500 points can be tracked, lowering the tracking quality.
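The raw-versus-JPEG trade-off is easy to see in code: compressing each grayscale frame before sending it costs CPU time on both sides but shrinks the payload considerably. The snippet below is a minimal, self-contained illustration using the standard javax.imageio API; it isn't the VideoSource component itself, and the frame dimensions are simply the 640 × 480 resolution used in the experiment.

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    // Compress a raw 8-bit grayscale frame to JPEG to trade CPU time for bandwidth.
    public final class FrameCompressor {

        static byte[] toJpeg(byte[] rawGray, int width, int height) throws IOException {
            BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
            // Copy the raw pixels into the image's backing raster.
            image.getRaster().setDataElements(0, 0, width, height, rawGray);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(image, "jpg", out);
            return out.toByteArray();
        }

        public static void main(String[] args) throws IOException {
            int width = 640, height = 480;
            byte[] rawFrame = new byte[width * height];   // one uncompressed grayscale frame: 307,200 bytes
            byte[] jpegFrame = toJpeg(rawFrame, width, height);
            System.out.printf("raw: %d bytes, jpeg: %d bytes%n", rawFrame.length, jpegFrame.length);
        }
    }

Real camera frames compress less than this synthetic uniform frame, but as the experiment shows, the reduction is enough to keep the Tracker offloaded at 20 Mbps.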

Figure 4 shows that it takes between 10 seconds (first adaptation) and 20 seconds (second adaptation) to fully recover the frame rate. This is because the monitoring information is only sent every five seconds, so it takes that long to reevaluate the deployment and configuration and make adaptations.

Figure 4. As the bandwidth decreases from 100 to 20 to 5 Mbps, the configuration quality is lowered to keep the achieved frame rate between 15 and 20 frames per second. The adaptation time varies between 10 and 20 seconds.

During the second adaptation, the longer migration time stems from having to migrate the Tracker component from the server to the client, and send its state over the network, while the network bandwidth is scarce. To further reduce the adaptation time, we could notify the cloudlet agent earlier when there's a drop in performance instead of waiting the full five-second period. To reduce the migration time, we could periodically synchronize the state of offloaded components with the clients, such that when the component is migrated back, only an incremental state change must be transferred.
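One way such incremental synchronization could work, sketched below as an assumption rather than an implemented feature, is to version every state entry on the offloaded component and let the client periodically pull only the entries that changed since the version it already holds.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical incremental state synchronization for an offloaded component.
    // The offloaded side tags every state entry with the version in which it last changed;
    // the client periodically asks for "everything newer than version X".
    final class VersionedState {
        private final Map<String, Object> values = new ConcurrentHashMap<>();
        private final Map<String, Long> changedAt = new ConcurrentHashMap<>();
        private volatile long version = 0;

        synchronized void put(String key, Object value) {
            version++;
            values.put(key, value);
            changedAt.put(key, version);
        }

        long currentVersion() {
            return version;
        }

        // Returns only the entries modified after the given version (the incremental delta).
        Map<String, Object> deltaSince(long sinceVersion) {
            Map<String, Object> delta = new HashMap<>();
            for (Map.Entry<String, Long> entry : changedAt.entrySet()) {
                if (entry.getValue() > sinceVersion) {
                    delta.put(entry.getKey(), values.get(entry.getKey()));
                }
            }
            return delta;
        }
    }

During normal operation the client would sync this delta in the background, so migrating the Tracker back over a scarce 5-Mbps link only requires the entries changed since the last sync rather than the full state.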

Collaborative Immersive Applications

Multiple users in the same environment typically look at the same scene, track the same environment, and need to recognize the same objects, so it makes sense to share computation results. By adopting the component-based application offloading model, such collaborations can be easily implemented by sharing components between devices. When the developer marks a component as "shared," the framework can transparently redirect calls from different devices to the same instance of this component.
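The article doesn't show how a component is marked as shared; one plausible shape, in the spirit of the annotations in Figure 3, is a class-level marker plus a registry that hands every device's proxy the same remote instance. Both the @Shared annotation and the SharedComponentRegistry below are assumptions for illustration, not the framework's actual API.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // Hypothetical marker: services annotated as shared get a single instance per cloudlet.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Shared {}

    // Hypothetical registry on the cloudlet: the first device to request a shared service
    // triggers its creation on the chosen node; later requests are redirected to that instance.
    final class SharedComponentRegistry {
        private final Map<Class<?>, Object> instances = new ConcurrentHashMap<>();

        @SuppressWarnings("unchecked")
        <T> T lookup(Class<T> serviceType, Supplier<T> factory) {
            if (serviceType.isAnnotationPresent(Shared.class)) {
                // e.g., one Mapper shared by all devices in the same physical environment
                return (T) instances.computeIfAbsent(serviceType, type -> factory.get());
            }
            return factory.get();   // non-shared components get a private instance per device
        }
    }

With the Mapper service registered this way, both devices in Figure 5 would end up calling the single Mapper instance offloaded to the laptop, which matches the sharing behavior described above.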

Figure 5 shows an example application, where two devices share the same Mapper and Object Recognizer, which are offloaded to the laptop. The map of the environment is generated twice as fast (because both devices send updates to the Mapper), and processing is performed only once (instead of twice—once for each device).

Using multiple devices to add feature points to the map also leads to a more complete map of the environment. Each device now has a complete overview of the environment, whereas in the noncollaborative case, the map is limited to the device’s own viewpoint. Figure 6 shows a map of a desk environment generated with three devices. Each device observes the desk from a different viewpoint, so each maps different (overlapping) parts of the desk.

Figure 5. By sharing components, multiple devices can track and expand the same map and recognize the same objects.


Figure 6. Three devices explore (a) the top of a desk, resulting in (b) mapped feature points and a recognized object. By sharing the detected feature points of all devices, more feature points from multiple viewpoints are added to the map, leading to more robust tracking and faster map expansion.


W e’ve presented impor-tant scenarios for le-veraging the cloudlet to overcome real-time

constraints and resource-intensive tasks of immersive applications. However, a thin-client approach to cyber foraging isn’t sufficient owing to the huge band-width requirements, so offloading at a more fine-grained level is necessary. Our proposed component-based plat-form for cyber foraging not only de-ploys components but also considers application-specific metrics and devel-oper-specified constraints. Our plat-form can optimize application quality and adapt to changing network condi-tions using an implemented immersive

application. Due to the component-based approach, our platform can share components between multiple devices, opening the door to a new type of col-laborative immersive applications.

Acknowledgments

Tim Verbelen is funded by a PhD grant of the Fund for Scientific Research, Flanders (FWO-V). Part of this work was funded by the Ghent University GOA grant "Autonomic Networked Multimedia Systems."

References

1. T. Langlotz et al., "Online Creation of Panoramic Augmented Reality Annotations on Mobile Phones," IEEE Pervasive Computing, vol. 11, no. 2, 2012, pp. 56–63.

2. R. Balan et al., "The Case for Cyber Foraging," Proc. 10th Workshop on ACM SIGOPS European Workshop, ACM, 2002, pp. 87–92.

3. R. Buyya et al., "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility," Future Generation Computer Systems, vol. 25, no. 6, 2009, pp. 599–616.

4. M. Satyanarayanan et al., "The Case for VM-Based Cloudlets in Mobile Computing," IEEE Pervasive Computing, vol. 8, no. 4, 2009, pp. 14–23.

5. E. Cuervo et al., "Maui: Making Smartphones Last Longer with Code Offload," Proc. 8th Int'l Conf. Mobile Systems, Applications, and Services (MobiSys 10), ACM, 2010, pp. 49–62.

6. B.-G. Chun et al., "Clonecloud: Elastic Execution between Mobile Device and Cloud," Proc. 6th Conf. Computer Systems (EuroSys 11), ACM, 2011, pp. 301–314.

7. M. Kristensen, "Scavenger: Transparent Development of Efficient Cyber Foraging Applications," Proc. 2010 IEEE Int'l Conf. Pervasive Computing and Communications (PerCom 10), IEEE, 2010, pp. 217–226.

8. T. Verbelen et al., "AIOLOS: Middleware for Improving Mobile Application Performance through Cyber Foraging," J. Systems and Software, vol. 85, no. 11, 2012, pp. 2629–2639.

9. G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces," Proc. 6th Int'l Symp. Mixed and Augmented Reality (ISMAR 07), 2007, pp. 1–10.

10. R. Castle and D. Murray, "Keyframe-Based Recognition and Localization During Video-Rate Parallel Tracking and Mapping," Image and Vision Computing, vol. 29, no. 8, 2011, pp. 524–532.

11. T. Verbelen et al., "Adaptive Application Configuration and Distribution in Mobile Cloudlet Middleware," Mobile Wireless Middleware, Operating Systems, and Applications, Springer, 2012, pp. 178–192.

The Authors

Tim Verbelen is a postdoctoral researcher in the Department of Information Technology (INTEC), Faculty of Engineering, at Ghent University. His main research interests include mobile computing, context-aware computing, and adaptive software. Verbelen received his PhD in computer science from Ghent University. Contact him at [email protected].

Pieter Simoens is a postdoctoral researcher in the Department of Information Technology (INTEC) at Ghent University. His research focuses on smart clients and cloud computing. Simoens received his PhD in computer science from Ghent University. Contact him at [email protected].

Filip De Turck is a full-time professor affiliated with the Department of Information Technology at Ghent University and with the Interdisciplinary Institute of Broadband Technology Flanders (IBBT) in the area of telecommunications and software engineering. His research interests include scalable software architectures for telecommunications network and service management, cloud computing, performance evaluation, and the design of new telecommunications and eHealth services. De Turck received his PhD in electronic engineering from Ghent University. Contact him at [email protected].

Bart Dhoedt is a professor in the Department of Information Technology at Ghent University. His research interests include software engineering, distributed systems, mobile and ubiquitous computing, smart clients, middleware, cloud computing, and autonomic systems. Dhoedt received his PhD in electro-technical engineering from Ghent University. Contact him at [email protected].
