
A Case for Ending Monolithic Apps for Connected Devices

Rayman Preet Singh (Univ. of Waterloo) Chenguang Shen (UCLA)

Amar Phanishayee, Aman Kansal, Ratul Mahajan

Microsoft Research

1 Introduction

Connected sensing devices, such as cameras, thermostats, and in-home motion, door-window, energy, and water sensors [1], collectively dubbed the Internet of Things (IoT), are rapidly permeating our living environments [2], with an estimated 50 billion such devices in use by 2020 [8]. They enable a wide variety of applications spanning security, efficiency, healthcare, and others. Typically, these applications collect data using sensing devices to draw inferences about the environment or the user, and use the inferences to perform certain actions. For example, Nest [10] uses motion sensor data to infer home occupancy and adjusts the thermostat.

Most applications that use connected devices today are built in a monolithic way. That is, applications are tightly coupled to the hardware, and need to implement all the data collection, inferencing, and user-functionality logic. For application developers, this complicates the development process, and hinders broad distribution of their applications because the cost of deploying their specific hardware limits user adoption. For end users, each sensing device they install is limited to a small set of applications, even though the hardware capabilities may be useful for a broader set of applications. How do we break free from this monolithic and restrictive setting? Can we enable applications to be programmed to work seamlessly in heterogeneous environments with different types of connected sensors and devices, while leveraging devices that may only be available opportunistically, such as smartphones and tablets?

To address this problem, we start from the insight that many inferences required by applications can be drawn using multiple types of connected devices. For instance, home occupancy can be inferred using motion sensors (such as those in security systems or Nest [10]), cameras (e.g., Dropcam [3], Simplicam [14]), microphones, smartphone GPS, or a combination of these, since each may have different sources of error. We posit that inference logic, traditionally left up to applications, ought to be abstracted out as a system service.

Such a service should relieve application developers of the burden of implementing and training commonly used inferences. It should enable applications to work using any of the sensing devices that the shared inference logic supports.

This paper discusses the key challenges in designing the proposed service. First, the service must decouple i) "what is sensed" from "how it is sensed", and ii) "what is inferred" from "how it is inferred". In doing so, it should ensure inference extensibility, and provide a uniform interface to inferences that hides sensor-specific intricacies (e.g., camera frame rate, motion sensitivity level). Second, the service should enable seamless use of sensors on mobile devices by handling environmental dynamics resulting from device and user mobility. Third, while serving required inferences to applications, the service should ensure efficient use of resources, e.g., by selecting the optimal subset of sensors to serve active applications, sharing overlapping inference computations, and hosting computations on suitable devices.

To explore these challenges concretely, we propose Beam, an application framework and runtime for inference-driven applications. Beam provides applications with inference-based abstractions. Applications simply specify their inference requirements, while Beam bears the onus of identifying the required sensors, initiating data processing on suitable devices, handling environmental dynamics, and using resources efficiently.

2 Key Design Requirements

Our motivation for designing Beam is data-driven, inference-based applications, aimed at homes [10, 16], individual users [9, 11, 41, 48, 50], and enterprises [6, 13, 20, 33, 42]. We identify three key challenges for Beam by analyzing two popular application classes in detail, one that infers environmental attributes and another that senses an individual user.

Rules: A large class of popular applications is based on the 'If This Then That (IFTTT)' pattern [7, 47]. IFTTT enables users to create their own rules connecting sensed attributes to desired actions. We consider a particular rules application which alerts a user if a high-risk appliance, e.g., an electric oven, is left on when the home is unoccupied [15, 44]. This application uses the appliance-state and home occupancy inferences.



[Bar chart: average inference accuracy (%), on a scale from 70 to 100, for the occupancy and physical activity inferences, comparing sensor Set 1, Set 2, and Set 1 ∪ Set 2.]

Figure 1: Improvement in occupancy and activity inference accuracy by combining multiple devices. For occupancy, sensor set 1 = {camera, microphone} in one room and set 2 = {PC interactivity detection} in a second room. For physical activity, set 1 = {phone accelerometer} and set 2 = {wrist-worn FitBit [4]}.

Quantified Self (QS): The second class of applications [9, 11, 19, 25, 38, 46] disaggregates a user's daily routine by tracking her physical activity (walking, running, etc.), social interactions (loneliness), mood (bored, focused), computer use, and more.

In analyzing these two popular classes of applications, we identify the following three key design requirements for an inference framework:

R1) Decouple applications, inference algorithms, and devices. Data-driven inferences can often be derived using data from multiple devices. Combining inputs from multiple devices, when available, generally leads to improved inference accuracy (a fact often overlooked by developers [10]). Figure 1 illustrates the improvement in inference accuracy for the occupancy and physical activity inferences, used in the Rules and Quantified Self applications respectively; 100% accuracy maps to manually logged ground truth over 28 hours.

Hence, applications should not be restricted to using a single sensor or a single inference algorithm. At the same time, applications should not be required to incorporate device discovery, handle the challenges of potentially using devices over the wide area (e.g., remote I/O and tolerating disconnections), use disparate device APIs, or instantiate and combine multiple inferences depending on available devices. Therefore an inference framework should require an application to only specify the desired inference, e.g., occupancy (in addition to inference parameters like sampling rate and coverage), while handling the complexity of configuring the right devices and inference algorithms.

R2) Handle environmental dynamics. Applications are often interested in tracking user and device mobility, and in adapting their processing accordingly. For instance, the QS application needs to track which locations a user frequents (e.g., home, office, car, gym, meeting room), handle intermittent connectivity, and more.

Category                                      R1   R2   R3
Device abstractions [24, 5, 12, 18]           ◦
Mobile sensing [23, 40, 34, 29, 30, 31, 32]   ◦         ◦
Cross-device [49, 45]                                   ◦
Macro-programming [22, 26, 35]                ◦

Table 1: Categorization of prior work. ◦ denotes partial fulfillment.

Application development stands to be greatly simplified if the framework were to seamlessly handle such environmental dynamics, e.g., automatically update the selection of devices used for drawing inferences based on user location. Hence the QS application, potentially running on a cloud server, could simply subscribe to the activity inference, which would be dynamically composed of sub-inferences from various devices tracking a user.

R3) Optimize resource usage. Applications often involve continuous sensing and inferring, and hence consume varying amounts of system resources across multiple devices over time. Such an application must structure its sensing and inference processing across multiple devices, in keeping with the devices' resource constraints. This adds undue burden on developers. For instance, in the QS application, wide-area bandwidth constraints may prevent backhauling of high-rate sensor data. Moreover, whenever possible, inferences should be shared across multiple applications to prevent redundant resource consumption. Therefore an inference framework must not only facilitate sharing of inferences, but in doing so must optimize for efficient resource use (e.g., network, battery, CPU, memory) while meeting resource constraints.

3 Related Work

Beam is the first framework that (i) decouples applications, inference algorithms, and devices, (ii) handles environmental dynamics, and (iii) automatically splits sensing and inference logic across devices while optimizing resource usage. We classify prior work into four categories, and compare them based on the requirements above (Table 1).

Device abstraction frameworks: HomeOS [24] and other platforms [5, 12] provide homogeneous programming abstractions to communicate with devices. For instance, HomeOS applications can use a generic motion sensor role, regardless of the sensor's vendor and protocol. Similarly, Rio [18] provides smartphone applications with a uniform device API, regardless of the devices being local or remote. These approaches only decouple device-specific logic from applications, but are unable to decouple inference algorithms from applications. Moreover, they do not address handling of environmental dynamics or optimizing for efficient resource usage.



[Figure: an inference graph in which adapters (PC event, mic, camera, FitBit, accelerometer/GPS) on the user's home PC, work PC, home and work cameras and mics, FitBit, and smartphone feed inference modules (PC activity, social interaction, fitness activity, FitBit activity, phone activity) whose outputs serve the Quantified Self app.]

Figure 2: Inference graph for Quantified Self app.

Mobile sensing frameworks: Existing work has focused on applications requiring continuous sensing on mobile devices. Kobe [23] and Auditeur [40] propose building libraries of inference algorithms to promote code re-use and to enable developers to select inference algorithms. Other work [29, 30, 31, 32, 34] has focused on improving resource utilization by sharing sensing and data processing across multiple applications on a mobile device. Senergy [32] automates selection of inference algorithms driven by an application's requirements on a mobile device. These approaches overlook handling environmental dynamics across devices, and do not address optimizing resource use for inferences across multiple devices. Moreover, they require manual composition of certain inference parameters (e.g., coverage), thus providing limited decoupling of inference algorithms.

Cross-device frameworks: Semantic Streams [49] and Task Cruncher [45] address sharing sensor data and data processing across devices, but are limited to simple stream processing methods, e.g., aggregates, rather than sophisticated inferences. They overlook decoupling of sensing and inferring, as well as handling of dynamics.

Macro-programming frameworks: Macro-programming frameworks [22, 26, 35] provide abstractions to allow applications to dynamically compose dataflows [36, 39]. However, these approaches focus on data streaming and aggregation rather than generic inferences, and do not target general-purpose compute devices, e.g., phones and PCs. In addition, they do not address handling of environmental dynamics and resource optimization across devices at runtime.

4 Beam Inference Framework

We propose Beam as a framework for distributed applications using connected devices. Applications in Beam subscribe to high-level inferences. Beam dynamically composes the required modules to satisfy application requests by using appropriate devices in the given deployment. Central to Beam's design is a set of abstractions that we describe next.

Inference Graphs: Inference graphs are directed acyclic graphs that connect devices to applications. The nodes in this graph correspond to inference modules, and edges correspond to channels that facilitate the transmission of inference data units (IDUs) between modules.

Figure 2 shows an example inference graph for a Quantified Self application that uses eight different devices spread across the user's home and workplace, and includes mobile and wearable devices.

Composing an inference as a directed graph enables sharing of data processing modules across applications and across modules that require the same input. In Beam, each compute device associated with a user, such as a tablet, phone, PC, or home hub, has a part of the runtime, called the Engine. Engines host inference graphs and interface with other engines. Figure 3 shows two engines, one on the user's home hub and another on her phone; the inference graph for QS shown earlier is split across these engines, with the QS application itself running on a cloud server. For simplicity, we do not show another engine that may run on the user's work PC.

IDU: An inference data unit (IDU) is a typed inference; in its general form it is a tuple <t, e, s>, which denotes any inference with state information s, generated by an inference algorithm at time t with error e. The types of the inference state s and error e are specific to the inference at hand. For instance, s may be of a numerical type such as a double (e.g., inferred energy consumption) or an enumerated type (e.g., high, medium, low). Similarly, the error e may specify a confidence measure (e.g., standard deviation), a probability distribution, or an error margin (e.g., radius). IDUs abstract away "what is inferred" from "how it is inferred". The latter is handled by inference modules, which we describe next.
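To make the IDU tuple concrete, the following minimal sketch (in C#, the language of the Beam prototype in Sec. 4.2) shows one way <t, e, s> could be expressed; the type and member names are illustrative assumptions, not Beam's actual API.

using System;

// Sketch of the IDU tuple <t, e, s>; names are illustrative, not Beam's API.
public record IDU<TState, TError>(DateTime Timestamp, TError Error, TState State);

// Example: an occupancy inference whose state is a boolean and whose error is
// a confidence value in [0, 1].
public record OccupancyIDU(DateTime Timestamp, double Confidence, bool Occupied)
    : IDU<bool, double>(Timestamp, Confidence, Occupied);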

Inference Modules: Beam encapsulates inference algorithms into typed modules. Inference modules consume IDUs from one or more modules, perform computation using IDU data and pertinent in-memory state, and output IDUs. Special modules called adapters interface with underlying sensors and output sensor data as IDUs. Adapters decouple "what is sensed" from "how it is sensed". Module developers specify the IDU types a module consumes, the IDU type it generates, and the module's input dependency (e.g., {PIR} OR {camera AND mic}). Modules have complete autonomy over how and when to output an IDU, and can maintain arbitrary internal state. For instance, an occupancy inference module may specify (i) input IDUs from microphone, camera, and motion sensor adapters, (ii) allow multiple microphones as input, and (iii) maintain internal state to model ambient noise.
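A hypothetical module interface along these lines might look as follows; the interface name, the representation of the input dependency as alternative sets of IDU types, and the emit callback are assumptions made for illustration, not Beam's concrete API.

using System;
using System.Collections.Generic;

// Sketch of a typed inference module: it declares which IDU types it consumes
// (as alternative input sets, e.g. {PIR} OR {camera AND mic}), which IDU type
// it produces, and decides for itself when to emit an output IDU.
public interface IInferenceModule
{
    IReadOnlyList<IReadOnlyList<Type>> InputDependency { get; } // alternative input sets
    Type OutputIduType { get; }

    // Called by the engine when an upstream module or adapter produces an IDU.
    // The module may update internal state (e.g., an ambient-noise model) and
    // emit zero or more output IDUs via the supplied callback.
    void OnInput(object idu, Action<object> emit);
}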

Channels: To ease inference composition, channels link modules to each other and to applications. They abstract away the complexities of connecting modules across different devices, including dealing with device disconnections, and allowing for optimizations such as batching IDU transfers for efficiency.



[Figure: two inference engines, one on the home hub and one on the phone, each with a subgraph manager, engine profiler, and coverage tracking service, hosting subgraphs of the Quantified Self inference graph; both connect over persistent WebSocket connections to the coordinator on a cloud server, which runs engine, inference graph, and coverage managers, a remote channel manager, inference graph optimizers, and a clock sync/skew manager.]

Figure 3: An overview of different components in an example Beam deployment with 2 Engines.

Every channel has a single writer and a single reader module. Modules can have multiple input and output channels. Channels connecting modules on the same engine are local. Channels connecting modules on two different engines, across a local or wide-area network, are remote channels. They enable applications and inference modules to seamlessly use remote devices or remote inference modules.
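A channel could be sketched as below; the single-writer/single-reader constraint and the local/remote distinction come from the text above, while the member names are illustrative assumptions.

using System;

// Sketch of a channel: exactly one writer module and one reader (module or
// application), either on the same engine (local) or across engines (remote).
public enum ChannelKind { Local, Remote }

public interface IChannel
{
    ChannelKind Kind { get; }
    void Write(object idu);               // used by the single writer module
    event Action<object> IduAvailable;    // observed by the single reader
}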

Coverage tags: Coverage tags help manage sensor coverage. Each adapter is associated with a set of coverage tags that describe what the sensor is sensing. For example, a location string tag can indicate a coverage area such as "home", and a remote monitoring application can use this tag to request an occupancy inference for this coverage area. Coverage tags are strongly typed. Beam uses tag types only to differentiate tags and does not dictate tag semantics. This allows applications complete flexibility in defining new tag types. Adapters are assigned tags by the respective engines at setup time, and tags are updated at runtime to handle dynamics.

Beam's runtime also includes a Coordinator, which interfaces with all engines in a deployment and runs on a server that is reachable from all engines. The coordinator maintains remote channel buffers to support reader or writer disconnections (typical for mobile devices). It also provides a place to reliably store the state of inference graphs at runtime while being resistant to engine crashes and disconnections. The coordinator is also used to maintain reference time across all engines. Engines interface with the coordinator using a persistent web-socket connection, and instantiate and manage the parts of inference graphs local to them.

4.1 Beam Runtime

The Beam runtime creates or updates inference graphs when applications request inferences (R1), and mutates the inference graphs to handle environmental dynamics (R2) and to optimize resource usage (R3).

Inference graph creation: An application may run on any user device, and the sensors required for a requested inference may be spread across devices. Applications ask their local Beam engine for all inferences they require. All application requests are forwarded to the coordinator, which uses the requested inference to look up the required module. It recursively resolves all required inputs of that module (as per its specification), and re-uses matching modules that are already running. The coordinator maintains a set of such inference graphs as an incarnation. The coordinator determines where each module in the inference graph should run and formulates the new incarnation. The coordinator initializes buffers for remote channels, and partitions the inference graphs into engine-specific subgraphs which are sent to the engines.
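The recursive resolution step can be pictured with the following simplified sketch, which reuses an already-running module with a matching inference type and coverage tag and otherwise creates one and resolves its inputs; the types and the inputsOf lookup are stand-ins for the module specifications the coordinator actually consults.

using System;
using System.Collections.Generic;

// Simplified sketch of inference-graph resolution at the coordinator.
class Node
{
    public string Inference;
    public string CoverageTag;
    public List<Node> Inputs = new List<Node>();
}

class Incarnation
{
    readonly List<Node> nodes = new List<Node>();

    // inputsOf maps an inference to the input inferences its module declares.
    public Node Resolve(string inference, string tag, Func<string, IEnumerable<string>> inputsOf)
    {
        // Exactly one module of a given type and coverage tag runs at a time,
        // so an existing node is shared rather than duplicated.
        var existing = nodes.Find(n => n.Inference == inference && n.CoverageTag == tag);
        if (existing != null) return existing;

        var node = new Node { Inference = inference, CoverageTag = tag };
        nodes.Add(node);
        foreach (var input in inputsOf(inference))
            node.Inputs.Add(Resolve(input, tag, inputsOf)); // recurse, reusing where possible
        return node;
    }
}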

Engines receive their respective subgraphs, compare each received subgraph to existing ones, and update them by terminating deleted channels and modules, initializing new ones, and changing channel delivery modes and module sampling rates as needed. Engines ensure that exactly one inference module of each type with a given coverage tag is created.

Inference delivery and guarantees: For each inference request, Beam returns a channel to the application. The inference request consists of i) required inference type or module, ii) delivery mode, iii) coverage tags, and iv) sampling requirements (optional).
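As a rough sketch, such a request might be represented as follows; the field names and the Request call are illustrative assumptions rather than Beam's concrete API, and the delivery modes are described next.

using System;

// Sketch of an inference request.
public enum DeliveryMode { FreshPush, LazyPush }

public class InferenceRequest
{
    public string Inference;               // i) required inference type or module, e.g. "Occupancy"
    public DeliveryMode Mode;              // ii) delivery mode
    public string[] CoverageTags;          // iii) coverage tags, e.g. { "home" }
    public TimeSpan? StateChangeLatency;   // iv) optional sampling requirement
}

// Hypothetical usage: var channel = engine.Request(new InferenceRequest { ... });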

Delivery mode is a channel property which captures data transport optimizations. For instance, in the fresh push mode, an IDU is delivered as soon as the writer module generates it, while in the lazy push mode, the reader chooses to receive IDUs in batches (thus amortizing network transfer costs from battery-limited devices). Remote channels provide IDU delivery in the face of device disconnections by using buffers at the coordinator and the writer engine. Channel readers are guaranteed i) no duplicate IDU delivery, and ii) FIFO delivery based on IDU timestamps. Currently, remote channels use a drop-tail policy to minimize wide-area data transfers in the event of a disconnected or lazy reader.



This means that when a reader re-connects after a long disconnection, it first receives old inference values followed by more recent ones. A drop-head policy may be adopted to circumvent this, at the cost of increased data transfers.
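The difference between the two policies can be seen in a minimal bounded-buffer sketch; this is illustrative only, and the real remote-channel buffers live at the coordinator and the writer engine.

using System.Collections.Generic;

// Bounded buffer for a remote channel. Drop-tail discards new IDUs once the
// buffer is full, so a re-connecting reader drains the oldest values first;
// drop-head would instead evict the oldest IDU to make room for the newest.
class DropTailBuffer<T>
{
    readonly Queue<T> queue = new Queue<T>();
    readonly int capacity;

    public DropTailBuffer(int capacity) { this.capacity = capacity; }

    public void Enqueue(T idu)
    {
        if (queue.Count < capacity)
            queue.Enqueue(idu);
        // else: drop the incoming IDU (drop-tail)
    }

    public bool TryDequeue(out T idu)
    {
        if (queue.Count == 0) { idu = default(T); return false; }
        idu = queue.Dequeue();
        return true;
    }
}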

In requesting inferences, applications use tags to specify coverage requirements. Furthermore, an application may specify sampling requirements as a latency value that it can tolerate in detecting the change of state for an inference (e.g., walking periods of more than 1 minute). This allows adapters and modules to temporarily halt sensing and data processing to reduce battery, network, CPU, or other resource consumption.

Optimizing resource use: The Beam coordinator uses inference graphs as the basis for optimizing resource usage. The coordinator re-configures inference graphs by remapping the engine on which each inference module runs. Optimizations are performed either reactively, i.e., when an application issues or cancels an inference request, or proactively at periodic intervals. Beam's default reactive optimization minimizes the number of remote channels, and its proactive optimization minimizes the amount of data transferred over remote channels. Other potential optimizations can minimize battery, CPU, and/or memory consumption at engines.

When handling an inference request, the coordinator first incorporates the requested inference graph into the incarnation, re-using already running modules, and merging inference graphs if needed. For new modules, the coordinator decides which engines they should run on (by minimizing the number of remote channels).
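The default reactive objective can be stated as a simple count over a candidate placement; the representation of channels and the module-to-engine map below are assumptions for illustration (adapters, for instance, are necessarily placed on the engine that owns the sensor).

using System.Collections.Generic;

static class PlacementObjective
{
    // Counts the remote channels induced by a candidate placement of modules
    // onto engines; the coordinator would prefer placements minimizing this.
    public static int RemoteChannels(IEnumerable<(string writer, string reader)> channels,
                                     IDictionary<string, string> engineOf)
    {
        int remote = 0;
        foreach (var (writer, reader) in channels)
            if (engineOf[writer] != engineOf[reader])  // endpoints on different engines
                remote++;
        return remote;
    }
}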

Engines profile their subgraphs, and report profiling data (e.g., per-channel data rate) to the coordinator periodically. The coordinator annotates the incarnation using this data and periodically re-evaluates the mapping of inference modules to engines. Beam's default proactive optimization minimizes wide-area data transfers.

Handling dynamics: Movement of users and devices can change the set of sensors that satisfy an application's requirements. For instance, consider an application that requires camera input from the device currently facing the user at any time, such as the camera on her home PC, office PC, or smartphone. In such scenarios, the inference graph needs to be updated dynamically. Beam updates the coverage tags to handle such dynamics. Certain tags, such as those of location type (e.g., "home"), can be assumed to be static (edited only by the user), while for certain other types, e.g., user, the sensed subject is mobile and hence the sensors that cover it may change.

The coordinator's tracking service manages the coverage tags associated with adapters on various engines. The engine's tracking service updates the user coverage tags as the user moves. For example, when the user leaves her office and arrives at home, the tracking service removes the user tag from device adapters in the office and adds it to adapters of devices deployed in the home.

The tracking service relies on device interactions to track users. When a user interacts with a device, it updates the tags of all sensors on the device to include the user's tag.
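One simplistic version of this tag update, assuming a Device type that exposes its adapters and their tag sets (both illustrative, not Beam's actual types), might be:

using System.Collections.Generic;

// Sketch of coverage-tag maintenance: on interaction, sensors on the device the
// user touched gain the user's tag; under this simplistic policy, adapters on
// all other devices lose it, reflecting that the user has moved.
static class CoverageTracking
{
    public static void OnUserInteraction(string userTag, Device current, IEnumerable<Device> all)
    {
        foreach (var device in all)
            foreach (var adapter in device.Adapters)
                adapter.Tags.Remove(userTag);

        foreach (var adapter in current.Adapters)
            adapter.Tags.Add(userTag);
    }
}

class Device  { public List<Adapter> Adapters = new List<Adapter>(); }
class Adapter { public HashSet<string> Tags = new HashSet<string>(); }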

When coverage tags change (e.g., due to user movement and change in sensor coverage), the coordinator re-computes the inference graphs and sends updated subgraphs to the affected engines.

4.2 Implementation

Our Beam prototype is implemented in C# as a cross-platform portable service that can be used by .NET v4.5, Windows Store 8.1, and Windows Phone 8.1 applications. Module binaries are currently wrapped within the service, but may also be downloaded from the coordinator on demand. We have implemented 8 inference modules: mic-occupancy [27], camera-occupancy, appliance-use [21, 28], occupancy, pc-activity [37], fitness-activity [43], semantic-location, and social-interaction; and 9 adapters: tablet and PC mic, power-meter, Fitbit [4], GPS, accelerometer, pc-interaction, pc-event, and a HomeOS [24] adapter to access all its device drivers. We have implemented the two sample applications described in Sec. 2. Both applications run on a cloud VM; Beam hosts the modules for their inferences across the user's home PC, work PC, and phone.

Compared to monolithic or library-based approaches, we find that for these applications, using Beam's framework results in up to 4.5× fewer development tasks and 12× fewer source lines of code, and up to 3× higher inference accuracy due to Beam's handling of environmental dynamics, while Beam's dynamic optimizations match hand-optimized versions for network resource usage. We aim to enrich Beam's optimizer to include optimizations for battery, CPU, and memory, and plan to extend tracking support to objects using passive tags, e.g., RFID or QR codes [17].

5 Conclusion

Applications using connected devices are difficult to develop today because they are constructed as monolithic silos, tightly coupled to sensing devices, and must implement all data sensing and inference logic, even as devices move or are temporarily disconnected. We present Beam, a framework and runtime for distributed inference-driven applications that (i) decouples applications, inference algorithms, and devices, (ii) handles environmental dynamics, and (iii) automatically splits sensing and inference logic across devices while optimizing resource usage. Using Beam, applications only specify "what should be sensed or inferred," without worrying about "how it is sensed or inferred." Beam simplifies application development and maximizes the utility of user-owned devices. It is time to end monolithic apps for connected devices.


References

[1] Amazon Home Automation Store. http://www.amazon.com/home-automation-smarthome/b?ie=UTF8&node=6563140011.
[2] The US Market for Home Automation and Security Technologies. Technical report, BCC Research, IAS031B, 2011.
[3] Dropcam - Super Simple Video Monitoring and Security. https://www.dropcam.com/.
[4] Fitbit. https://www.fitbit.com/.
[5] HomeSeer. http://homeseer.com/.
[6] iOS: Understanding iBeacon. http://support.apple.com/kb/HT6048.
[7] IFTTT: Put the internet to work for you. https://ifttt.com/.
[8] The Internet of Things. http://share.cisco.com/internet-of-things.html/.
[9] Map My Fitness. http://www.mapmyfitness.com/.
[10] Nest. http://www.nest.com/.
[11] Quantified Self. http://quantifiedself.com/.
[12] Revolv. http://revolv.com/.
[13] ShopKick shopping app. http://shopkick.com/.
[14] Simplicam Home Surveillance Camera. https://www.simplicam.com/.
[15] Smart Power Strip. http://www.rogeryiu.com/.
[16] SmartThings. http://www.smartthings.com/.
[17] Tile. https://www.thetileapp.com/.
[18] A. Amiri Sani, K. Boos, M. H. Yun, and L. Zhong. Rio: A system solution for sharing I/O between mobile systems. In Proc. ACM MobiSys, 2014.
[19] E. Arroyo, L. Bonanni, and T. Selker. Waterbot: Exploring feedback and persuasive techniques at the sink. In Proc. ACM CHI, 2005.
[20] R. K. Balan, K. X. Nguyen, and L. Jiang. Real-time trip information service for a large taxi fleet. In Proc. 9th ACM MobiSys, June 2011.
[21] N. Batra, J. Kelly, O. Parson, H. Dutta, W. J. Knottenbelt, A. Rogers, A. Singh, and M. Srivastava. NILMTK: An open source toolkit for non-intrusive load monitoring. In Proc. ACM e-Energy, 2014.
[22] A. Boulis, C.-C. Han, R. Shea, and M. B. Srivastava. SensorWare: Programming sensor networks beyond code update and querying. Pervasive Mob. Comput., 2007.
[23] D. Chu, N. D. Lane, T. T.-T. Lai, C. Pang, X. Meng, Q. Guo, F. Li, and F. Zhao. Balancing energy, latency and accuracy for mobile sensor data classification. In Proc. 9th ACM SenSys, Nov. 2011.
[24] C. Dixon, R. Mahajan, S. Agarwal, A. J. Brush, B. Lee, S. Saroiu, and P. Bahl. An operating system for the home. In Proc. 9th USENIX NSDI, Apr. 2012.
[25] J. Froehlich, L. Findlater, M. Ostergren, S. Ramanathan, J. Peterson, I. Wragg, E. Larson, F. Fu, M. Bai, S. Patel, and J. A. Landay. The design and evaluation of prototype eco-feedback displays for fixture-level water usage data. In Proc. ACM CHI, 2012.
[26] R. Gummadi, O. Gnawali, and R. Govindan. Macro-programming wireless sensor networks using Kairos. In Distributed Computing in Sensor Systems, 2005.
[27] T. Hao, G. Xing, and G. Zhou. iSleep: Unobtrusive sleep quality monitoring using smartphones. In Proc. 11th ACM SenSys, 2013.
[28] G. Hart. Nonintrusive appliance load monitoring. Proceedings of the IEEE, 1992.
[29] Y. Ju, Y. Lee, J. Yu, C. Min, I. Shin, and J. Song. SymPhoney: A coordinated sensing flow execution engine for concurrent mobile sensing applications. In Proc. 10th ACM SenSys, Nov. 2012.
[30] S. Kang, J. Lee, H. Jang, H. Lee, Y. Lee, S. Park, T. Park, and J. Song. SeeMon: Scalable and energy-efficient context monitoring framework for sensor-rich mobile environments. In Proc. ACM MobiSys, 2008.
[31] S. Kang, Y. Lee, C. Min, Y. Ju, T. Park, J. Lee, Y. Rhee, and J. Song. Orchestrator: An active resource orchestration framework for mobile context monitoring in sensor-rich mobile environments. In IEEE PerCom, 2010.
[32] A. Kansal, S. Saponas, A. B. Brush, K. S. McKinley, T. Mytkowicz, and R. Ziola. The latency, accuracy, and battery (LAB) abstraction: Programmer productivity and energy efficiency for continuous mobile context sensing. In Proc. ACM OOPSLA, Nov. 2013.
[33] L. Krishnamurthy, R. Adler, P. Buonadonna, J. Chhabra, M. Flanigan, N. Kushalnagar, L. Nachman, and M. Yarvis. Design and deployment of industrial sensor networks: Experiences from a semiconductor plant and the North Sea. In Proc. 3rd ACM SenSys, Nov. 2005.
[34] F. Lai, S. S. Hasan, A. Laugesen, and O. Chipara. CSense: A stream-processing toolkit for robust and high-rate mobile sensing applications. In Proc. 13th IPSN, 2014.
[35] L. Luo, T. F. Abdelzaher, T. He, and J. A. Stankovic. EnviroSuite: An environmentally immersive programming framework for sensor networks. ACM Transactions on Embedded Computing Systems (TECS), 2006.
[36] G. Mainland, M. Welsh, and G. Morrisett. Flask: A language for data-driven sensor network programs. Harvard Univ., Tech. Rep. TR-13-06, 2006.
[37] G. Mark, S. T. Iqbal, M. Czerwinski, and P. Johns. Bored Mondays and focused afternoons: The rhythm of attention and online activity in the workplace. In Proc. 32nd ACM CHI, April 2014.
[38] D. Morris, A. B. Brush, and B. R. Meyers. SuperBreak: Using interactivity to enhance ergonomic typing breaks. In Proc. ACM CHI, 2008.
[39] R. Newton, G. Morrisett, and M. Welsh. The Regiment macroprogramming system. In Proc. 6th ACM/IEEE IPSN, 2007.
[40] S. Nirjon, R. F. Dickerson, P. Asare, Q. Li, D. Hong, J. A. Stankovic, P. Hu, G. Shen, and X. Jiang. Auditeur: A mobile-cloud service platform for acoustic event detection on smartphones. In Proc. 11th ACM MobiSys, June 2013.
[41] M. M. Rahman, A. A. Ali, K. Plarre, M. al'Absi, E. Ertin, and S. Kumar. mConverse: Inferring conversation episodes from respiratory measurements collected in the field. In Proc. 2nd Conference on Wireless Health, 2011.
[42] H. Ramamurthy, B. S. Prabhu, R. Gadh, and A. Madni. Wireless industrial monitoring and control using a smart sensor platform. IEEE Sensors Journal, 2007.
[43] S. Reddy, M. Mun, J. Burke, D. Estrin, M. Hansen, and M. Srivastava. Using mobile phones to determine transportation modes. ACM Transactions on Sensor Networks (TOSN), 2010.
[44] R. P. Singh, S. Keshav, and T. Brecht. A cloud-based consumer-centric architecture for energy data analytics. In Proc. ACM e-Energy, 2013.
[45] A. Tavakoli, A. Kansal, and S. Nath. On-line sensing task optimization for shared sensors. In Proc. 9th ACM/IEEE IPSN, 2010.
[46] T. Toscos, A. Faber, S. An, and M. P. Gandhi. Chick Clique: Persuasive technology to motivate teenage girls to exercise. In ACM CHI EA, 2006.
[47] B. Ur, E. McManus, M. Pak Yong Ho, and M. L. Littman. Practical trigger-action programming in the smart home. In Proc. ACM CHI, 2014.
[48] R. Wang, F. Chen, Z. Chen, T. Li, G. Harari, S. Tignor, X. Zhou, D. Ben-Zeev, and A. T. Campbell. StudentLife: Assessing behavioral trends, mental well-being and academic performance of college students using smartphones. In Proc. ACM UbiComp, 2014.
[49] K. Whitehouse, F. Zhao, and J. Liu. Semantic Streams: A framework for composable semantic interpretation of sensor data. In Proc. European Workshop on Wireless Sensor Networks, 2006.
[50] D. Wyatt, T. Choudhury, J. Bilmes, and J. A. Kitts. Inferring colocation and conversation networks from privacy-sensitive audio with implications for computational social science. ACM Trans. Intell. Syst. Technol., 2011.