DECEMBER 2019 www.computer.org > Visualization > Robotics > Autonomous Vehicles > Edge Computing


COMPSAC is the IEEE Computer Society Signature Conference on Computers, Software and Applications. It is a major international forum for academia, industry, and government to discuss research results and advancements, emerging challenges, and future trends in computer and software technologies and applications. The theme of COMPSAC 2020 is “Driving Intelligent Transformation of the Digital World”.

Staying relevant in a constantly evolving digital landscape is a challenge faced by researchers, developers, and producers in virtually every industry and area of study. Once limited to software-enabled devices, the ubiquity of digitally-enabled systems makes this challenge a universal issue. Furthermore, as relevance fuels change, many influencers will offer solutions that benefit their own priorities. Fortunately, history has shown that the building blocks of digital change are forged by those conducting foundational research and development of digital systems and human interactions. Artificial Intelligence is not new, but is much more utilized in everyday computing now that data and processing resources are more economically viable, hence widely available. The opportunity to drive the use of this powerful tool in transforming the digital world is yours. Will your results help define the path ahead, or will you relegate those decisions to those with different priorities for utilizing intelligence in digital systems? COMPSAC has been and continues to be a highly respected venue for the dissemination of key research on computer and software systems and applications, and has influenced fundamental developments in these fields for over 40 years. COMPSAC 2020 is your opportunity to add your mark to this ongoing journey, and we highly encourage your submission!

COMPSAC 2020, organized as a tightly integrated union of symposia, will focus on technical aspects of issues relevant to intelligent transformation of the digital world. The technical program will include keynote addresses, research papers, industrial case studies, fast abstracts, a doctoral symposium, poster sessions, and workshops and tutorials on emerging and important topics related to the conference theme. Highlights of the conference will include plenary and specialized panels that will address the technical challenges facing researchers and practitioners who are driving fundamental changes in intelligent systems and applications. Panels will also address cultural and societal challenges for a society whose members must continue to learn to live, work, and play in the environments the technologies produce.

Authors are invited to submit original, unpublished research work, as well as industrial practice reports. Simultaneous submission to other publication venues is not permitted except as highlighted in the COMPSAC 2020 J1C2 & C1J2 program. All submissions must adhere to IEEE Publishing Policies, and will be vetted through the IEEE CrossCheck portal. Further info is available at www.compsac.org.

Organizers
Standing Committee Chair: Sorel Reisman (California State University, USA)
Steering Committee Chair: Sheikh Iqbal Ahamed (Marquette University, USA)
General Chairs: Mohammad Zulkernine (Queen’s University, Canada), Edmundo Tovar (Universidad Politécnica de Madrid, Spain), Hironori Kasahara (Waseda University, Japan)
Program Chairs in Chief: W. K. Chan (City University, Hong Kong), Bill Claycomb (Carnegie Mellon University, USA), Hiroki Takakura (National Institute of Informatics, Japan)
Workshop Chairs: Ji-Jiang Yang (Tsinghua University, China), Yuuichi Teranishi (National Institute of Information and Communications Technology, Japan), Dave Towey (University of Nottingham Ningbo China, China), Sergio Segura (University of Seville, Spain)
Local Chairs: Sergio Martin (UNED, Spain), Manuel Castro (UNED, Spain)

Important Dates
Workshop proposals due: 15 November 2019
Workshop acceptance notification: 15 December 2019
Main conference papers due: 20 January 2020
Paper notification: 3 April 2020
Workshop papers due: 9 April 2020
Workshop paper notifications: 1 May 2020
Camera-ready and registration due: 15 May 2020

COMPSAC 2020
Madrid, Spain
July 13-17, 2020

Photo: King Felipe III in Major Square, Madrid. Photo credit: Iria Castro, Photographer (Instagram @iriacastrophoto)


STAFF

Editor: Cathy Martin
Publications Operations Project Specialist: Christine Anthony
Production & Design: Carmen Flores-Garvey
Publications Portfolio Managers: Carrie Clark, Kimberly Sperka
Publisher: Robin Baldwin
Senior Advertising Coordinator: Debbie Sims

Circulation: ComputingEdge (ISSN 2469-7087) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 2001 L Street NW, Suite 700, Washington, DC 20036.

Postmaster: Send address changes to ComputingEdge-IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Printed in USA.

Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in ComputingEdge does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.

Reuse Rights and Reprint Permissions: Educational or personal use of this material is permitted without fee, provided such use: 1) is not made for profit; 2) includes this notice and a full citation to the original work on the first page of the copy; and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of IEEE-copyrighted material on their own Web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version which has been revised by the author to incorporate review suggestions, but not the published version with copy-editing, proofreading, and formatting added by IEEE. For more information, please go to: http://www.ieee.org/publications_standards/publications/rights/paperversionpolicy.html. Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ 08854-4141 or [email protected]. Copyright © 2019 IEEE. All rights reserved.

Abstracting and Library Use: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

Unsubscribe: If you no longer wish to receive this ComputingEdge mailing, please email IEEE Computer Society Customer Service at [email protected] and type “unsubscribe ComputingEdge” in your subject line.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

IEEE COMPUTER SOCIETY computer.org • +1 714 821 8380

www.computer.org/computingedge 1

IEEE Computer Society Magazine Editors in Chief

Computer: David Alan Grier (Interim), Djaghe LLC
IEEE Software: Ipek Ozkaya, Software Engineering Institute
IEEE Internet Computing: George Pallis, University of Cyprus
IT Professional: Irena Bojanova, NIST
IEEE Security & Privacy: David Nicol, University of Illinois at Urbana-Champaign
IEEE Micro: Lizy Kurian John, University of Texas, Austin
IEEE Computer Graphics and Applications: Torsten Möller, University of Vienna
IEEE Pervasive Computing: Marc Langheinrich, University of Lugano
Computing in Science & Engineering: Jim X. Chen, George Mason University
IEEE Intelligent Systems: V.S. Subrahmanian, Dartmouth College
IEEE MultiMedia: Shu-Ching Chen, Florida International University
IEEE Annals of the History of Computing: Gerardo Con Diaz, University of California, Davis


DECEMBER 2019 • VOLUME 5, NUMBER 12


8 Exploranation: A New Science Communication Paradigm

16 Advances in Visualization Recommender Systems

34 Vehicle Telematics: The Good, Bad, and Ugly


Subscribe to ComputingEdge for free at www.computer.org/computingedge.

Visualization

8 Exploranation: A New Science Communication Paradigm

ANDERS YNNERMAN, JONAS LÖWGREN, AND LENA TIBELL

16 Advances in Visualization Recommender Systems

KLAUS MUELLER

Robotics

18 Privacy and Civilian Drone Use: The Need for Further Regulation

STEPHANIE WINKLER, SHERALI ZEADALLY, AND KATRINE EVANS

27 The Robotization of Extreme Automation: The Balance between Fear and Courage

JINAN FIAIDHI, SABAH MOHAMMED, AND SAMI MOHAMMED

Autonomous Vehicles

34 Vehicle Telematics: The Good, Bad, and Ugly

HAL BERGHEL

39 Can We Trust Smart Cameras?

BERNHARD RINNER

Edge Computing

43 A Disaggregated Cloud Architecture for Edge Computing

RAFAEL MORENO-VOZMEDIANO, EDUARDO HUEDO, RUBEN S. MONTERO, AND IGNACIO M. LLORENTE

49 Oscillation

VINTON G. CERF

Departments

4 Magazine Roundup
7 Editor’s Note: Engaging and Effective Visualizations
72 Conference Calendar


4 December 2019 Published by the IEEE Computer Society 2469-7087/19 © 2019 IEEE

CS FOCUS

The IEEE Computer Society’s lineup of 12 peer-reviewed technical magazines covers cutting-edge topics ranging from software design and computer graphics to Internet computing and security, from scientific applications and machine intelligence to visualization and microchip design. Here are highlights from recent issues.

Computer

Creating a Resilient IoT with Edge Computing

Denial-of-service (DoS) attacks on the safety-critical Internet of Things (IoT) can lead to life-threatening consequences, and the risk of these attacks is increasing. The authors of this article from the August 2019 issue of Computer propose levels of context awareness to address availability threats and illustrate how context-aware edge computing enhances the IoT’s resilience to DoS attacks through their edge computing-based security solution.

Computing in Science & Engineering

Toward a Methodology and Framework for Workflow-Driven Team Science

Scientific workflows are powerful tools for the management of scalable experiments, often composed of complex tasks running on distributed resources. Existing cyberinfrastructure provides components that can be utilized within repeatable workflows. However, data and computing advances continuously change the way scientific workflows get developed and executed, pushing the scientific activity to be more data-driven, heterogeneous, and collaborative. Workflow development today depends on the effective collaboration and communication of a cross-disciplinary team, not only with humans but also with analytical systems and infrastructure. This article from the July/August 2019 issue of Computing in Science & Engineering presents a collaboration-centered reference architecture to extend workflow systems with dynamic, predictable, and programmable interfaces to systems and infrastructure while bridging the exploratory and scalable activities in the scientific process. The authors present a conceptual design toward the development of methodologies and services for effective workflow-driven collaborations, namely the PPoDS methodology for collaborative workflow development and the SmartFlows Services for smart execution in a rapidly evolving cyberinfrastructure ecosystem.

IEEE Annals of the History of Computing

TeX: A Branch in Desktop Publishing Evolution, Part 2

Donald Knuth began the development of TeX in 1977 and had an initial version running in 1978, with the aim of typesetting mathematical documents with the highest quality and with archival reproducibility far into the future. Its usage spread, and today the TeX system remains in use by a vibrant user community. However, the world of TeX has always been somewhat out of sync with the commercial desktop publishing systems that began to come out in the mid-1980s and are now ubiquitous in the worlds of publishing and printing. Read more in the April–June 2019 issue of IEEE Annals of the History of Computing.

IEEE Computer Graphics and Applications

Flow Field Reduction via Reconstructing Vector Data from 3D Streamlines Using Deep Learning

The authors of this article from the July/August 2019 issue of IEEE Computer Graphics and Applications present a new approach for streamline-based flow field representation and reduction. Their method can work in the in situ visualization setting by tracing streamlines from each time step of the simulation and storing compressed streamlines for post hoc analysis, where users can afford longer reconstruction time for higher reconstruction quality using decompressed streamlines. At the heart of the approach is a deep-learning method for vector field reconstruction that takes the streamlines traced from the original vector fields as input and applies a two-stage process to reconstruct high-quality vector fields. To demonstrate the effectiveness of the approach, the authors show qualitative and quantitative results with several datasets and compare the method against the de facto method of gradient vector flow in terms of speed and quality tradeoff.

IEEE Intelligent Systems

Sentiment and Sarcasm Classification with Multitask Learning

Sentiment classification and sarcasm detection are both important natural language processing tasks. Sentiment is always coupled with sarcasm where intensive emotion is expressed. Nevertheless, most literature considers them as two separate tasks. The authors of this article from the May/June 2019 issue of IEEE Intelligent Systems argue that knowledge in sarcasm detection can also be beneficial to sentiment classification and vice versa. They show that these two tasks are correlated, and they present a multitask learning-based framework using a deep neural network that models this correlation to improve the performance of both tasks in a multitask learning setting. Their method outperforms the state of the art by 3-4% on the benchmark dataset.

IEEE Internet Computing

Integration of Wireless Sensor Networks and Smart UAVs for Precision Viticulture

Precision viticulture (PV) aims to improve grapevine production efficiency, quality, and profitability, while reducing the environmental impact. The promises of PV are realized only if large areas are monitored with high spatial and temporal resolutions. This article from the May/June 2019 issue of IEEE Internet Computing considers the integration of a wireless sensor network and a smart unmanned aerial vehicle platform. To this end, local variations of factors that influence grape yield and quality are measured, and site-specific management practices are applied. This approach achieves real-time, uninterrupted monitoring of the vine growth environment, as well as on-demand imaging and high-resolution data collection from any specific location, thereby optimizing the production efficiencies and the application of inputs in a cost-effective way.

IEEE Micro

Leveraging Cache Management Hardware for Practical Defense against Cache Timing Channel Attacks

Sensitive information leakage through shared hardware structures is becoming a growing security concern. In this article from the July/August 2019 issue of IEEE Micro, the authors propose a practical protection framework against cache timing channel attacks by leveraging commercial off-the-shelf hardware support in last-level caches for cache monitoring and partitioning.

IEEE MultiMedia

Compact Descriptors for Video Analysis: The Emerging MPEG Standard

This article from the April–June 2019 issue of IEEE MultiMedia provides an overview of the ongoing compact descriptors for video analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG). MPEG-CDVA aims to define a standardized bitstream syntax to enable interoperability in the context of video analysis applications. During the development of MPEG-CDVA, a series of techniques aiming to reduce the descriptor size and improve the video representation ability have been proposed. This article describes the new standard that is being developed and reports the performance of these key technical contributions.

IEEE Pervasive Computing

Accelerating Conversational Agents Built with Off-the-Shelf Modularized Services

Today’s common practice in developing conversational agents is pipelining off-the-shelf modularized services as ready-made building blocks. However, the discrete and sequential nature of the modules yields long response latency. This article from the April–June 2019 issue of IEEE Pervasive Computing introduces Sci-Fii, a speculative inference framework accelerating conversational agent systems built with off-the-shelf modules.

IEEE Security & Privacy

Applied Digital Threat Modeling: It Works

Threat modeling, a structured process for identifying risks and developing mitigation strategies, has never been systematically evaluated in a real environment. This article from the July/August 2019 issue of IEEE Security & Privacy provides a case study at the New York City Cyber Command, the primary digital defense organization for the most populous city in the United States, which found tangible benefits.

IEEE Software

Automated Testing of Simulation Software in the Aviation Industry: An Experience Report

An industry-academia collaboration developed a test automation framework for aviation simulation software. The technology has been successfully deployed in several test teams. Read more in the July/August 2019 issue of IEEE Software.

IT Professional

Using Blockchain to Rein in the New Post-Truth World and Check the Spread of Fake News

In recent years, “fake news” has become a global issue that raises unprecedented challenges for human society and democracy. This problem has arisen due to the emergence of various concomitant phenomena such as 1) the digitization of human life and the ease of disseminating news through social networking applications (such as Facebook and WhatsApp); 2) the availability of “big data” that allows customization of news feeds and the creation of polarized so-called “filter-bubbles”; and 3) the rapid progress made by generative machine-learning and deep-learning algorithms in creating realistic-looking yet fake digital content (such as text, images, and videos). There is a crucial need to combat the rampant rise of fake news and disinformation. In this article from the July/August 2019 issue of IT Professional, the authors propose a high-level overview of a blockchain-based framework for fake news prevention and highlight the various design issues and considerations of such a blockchain-based framework for tackling fake news.

FOLLOW US @securityprivacy


EDITOR’S NOTE

In myriad industries and fields, visualization is a way to analyze data and disseminate insights. For a visualization to convey information successfully, it must be clear, user-friendly, and visually appealing. This issue of ComputingEdge shares methods and recommendations for creating visualizations that make an impact on audiences.

IEEE Computer Graphics and Applications’ “Exploranation: A New Science Communication Paradigm” provides examples of popular museum installations that make use of data and visualization software from research (exploratory visualization) to educate the public about scientific topics (explanatory visualization). The authors also present design principles for compelling interactive visualizations. Computer’s “Advances in Visualization Recommender Systems” describes a tool that aims to help people decide on the best graphical design for their data and analytical goals.

Another technology that elicits reactions from the public is robotics. The increasing presence of robots in our lives fosters new concerns, with privacy and job loss being chief among them. In IEEE Security & Privacy’s “Privacy and Civilian Drone Use: The Need for Further Regulation,” the authors advocate for more protection against drone surveillance and intrusion. IT Professional’s “The Robotization of Extreme Automation: The Balance between Fear and Courage” argues that we should embrace the benefits that robots bring to manufacturing while working to mitigate job loss.

Autonomous vehicles also earn their share of concerns. Computer’s “Vehicle Telematics: The Good, Bad, and Ugly” discusses security and privacy vulnerabilities in most high-tech cars. Also from Computer, “Can We Trust Smart Cameras?” questions whether the cameras in self-driving cars are reliable enough for real traffic scenarios.

Finally, two IEEE Internet Computing articles cover different aspects of edge computing. “A Disaggregated Cloud Architecture for Edge Computing” proposes a model that manages both on-premises cloud infrastructure and third-party dispersed edge nodes, and “Oscillation” highlights the evolution of edge computing and predicts where it will lead.

Engaging and Effective Visualizations


DEPARTMENT: Education

Exploranation: A New Science Communication Paradigm

Science communication is facing a paradigm shift, based on the convergence of exploratory and explanatory visualization. In this article, we coin the term exploranation to denote the way in which visualization methods from scientific exploration can be used to communicate results and how methods in explanatory visualization can enrich exploration.

The visual language spans borders of knowledge, experience, age, gender, and culture. This arguably makes visualization the most effective form of expression in science communication. Visual science communication is facing a paradigm shift caused by the rapid pace of digitalization and widespread availability of visual data analysis tools and computing resources. Until now there has been a clear division between visualization enabling effective data analysis leading to scientific discovery (exploratory visualization) and visual representations used to explain and communicate science to a general audience (explanatory visualization). To denote the confluence of exploration and explanation, we coin the neologism exploranation, as a combination of the two terms.

The underpinning trend behind the paradigm shift is based on the emerging strong synergy between exploratory and explanatory visualization driven by two connected facts:

• Science communication can now be directly based on real, large-scale and/or complex scientific data—i.e., the same data that researchers have gathered and explored for scientific discovery.

• Advanced visualization and interaction methodology has reached a high level of maturity and can be used on standard GPUs, making it possible to use the same visualization approaches in both exploratory and explanatory visualization.

The use of visual exploranation approaches in science communication is still in its infancy. At the Norrköping Visualization Center in Sweden, a center with research units and public facilities with 150,000 visitors per year, we have, over the past decade, worked on developing a range of visualization experiences for the general public and in particular for schoolchildren. To do this, we’ve been using various technical platforms and storytelling approaches in interactive data-driven visualization.

Anders Ynnerman Linköping University

Jonas Löwgren Linköping University

Lena Tibell Linköping University

Editors: Beatriz Sousa Santos [email protected]

Ginger Alford alfordg@trinityvalleyschool.org

IEEE Computer Graphics and Applications, May/June 2018. 0272-1716/18/$33.00 USD © 2018 IEEE. Published by the IEEE Computer Society.


IEEE COMPUTER GRAPHICS AND APPLICATIONS

It is through this work that we have identified the disruptive change in science communication that we are facing, and we have started to systematically approach the challenges involved in bringing interactive data visualization to the public. These challenges are reaching into the foundation of not only technical areas such as data access and representations, real-time data analysis, visual representations, and rendering and interaction paradigms, but also research areas such as storytelling in interactive media and design for visual learning. On a fundamental level, there is also a need to develop a theory and understanding of how visual literacy is established in interactive exploration. Additionally, we need ways to measure user engagement or involvement and the corresponding effect on learning outcomes in exploranation scenarios.

On a general note, it can be added that over the past decades, visualization has proven itself to be a highly versatile tool, being deployed in a range of applications spanning across academic and commercial domains, enabling scientific breakthroughs, improving industrial workflows, creating new business models, etc. However, one question often asked is why the visualization research community has not been more actively involved in the development of some of the most widespread visualization services such as Google Earth, services that reach a large-scale general public. One of the answers to this question may be found in the emphasis visualization research has had on exploratory visualization used by domain experts. It is our hope that the convergence of exploratory and explanatory approaches will lead to the increased and widespread impact of interactive data visualization approaches.

In this article we feature three scenarios, providing examples and showcasing different flavors of how we have worked with exploranation applications. We then conclude by providing a set of empirically verified design recommendations and a summary from the visual-learning perspective.

EXPLORANATION SCENARIOS

One of the underpinning strategies in our work to bring interactive data visualization to public spaces is based on the need to produce functioning installations that are robust enough to endure interactions with large numbers of visitors per year. This entails going the extra mile in terms of robustness, reliability, and extracting feedback from user experiences of the software. We have seen this as an enabling factor in gaining insights into the special demands on interactive data visualization that science communication makes.

From Medical Researchers to Visitors at the British Museum

Medical-imaging modalities are generating increasingly large amounts of data. In a single full-body scan from a computerized-tomography scanner, approximately 20,000 high-resolution data slices can be generated in just a few seconds. Interactive direct volume rendering (DVR) of this data has become one of the main analysis tools. In our research on DVR, we developed a range of methods to deal with data sizes [1] and methods to improve the quality and speed of volumetric lighting [2]. We also implemented our software on large-scale multitouch tables [3].
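At its core, the direct volume rendering mentioned above composites emission and opacity samples along each viewing ray. As a rough illustration of that idea only (not the authors' implementation, and with made-up sample values), a minimal front-to-back compositing sketch in Python/NumPy:

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back alpha compositing of the samples along one viewing ray.

    colors: (n, 3) RGB emission per sample, ordered near to far
    alphas: (n,) opacity per sample, each in [0, 1]
    Returns the accumulated RGB color seen along the ray.
    """
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for color, alpha in zip(colors, alphas):
        # Each sample contributes only through the transparency remaining
        # in front of it.
        acc_color += (1.0 - acc_alpha) * alpha * np.asarray(color)
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha > 0.99:  # early ray termination: ray is nearly opaque
            break
    return acc_color

# An opaque red sample in front completely hides the green sample behind it.
ray = composite_ray([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], [1.0, 0.8])
```

Early ray termination, shown above, is one of the standard optimizations that helps make interactive DVR of large CT volumes feasible on commodity GPUs.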

All this work was targeting medical-domain experts, clinicians, and medical students. However, we soon realized that the interactive exploration of medical data on touch tables raised a tremendous amount of interest among the general public and, in particular, visitors to the Visualization Center. At the Center, we had placed one touch table with sample datasets and a simplified interface with presets and limited interaction, together with a narrative evolving with the help of information hotspots. In essence, this insight was the starting point for the concept of exploranation—that the same data and underlying methods can be used for both experts in professional tasks and lay audiences in public settings. Interacting at a large touch table affords the formation of temporary social structures around objects of interest. Interfaces combining overall narrative structures with scaffolded, self-directed examination facilitate the integration of explanatory and exploratory modes of inquiry (more on this to follow).

Formal evaluations of our public installations in museums and science centers show that audience engagement is consistently very high. More importantly, the installation of touch tables to


Lena Tibell Linköping University

Editors: Beatriz Sousa Santos [email protected]

Ginger Alford alfordg@trinityvalleyschool .org

13IEEE Computer Graphics and Applications Published by the IEEE Computer Society

0272-1716/18/$33.00 USD ©2018 IEEEMay/June 2018


10 ComputingEdge December 2019


explore the insides of interesting artifacts in museums (for an example, see Figure 1) in fact increases the proportion of visitors studying the actual artifacts. For a comprehensive example, refer to our work with the Gebelein Man mummy at the British Museum [4].
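The interactive DVR behind these installations composites samples of the scanned volume along each viewing ray. As a rough illustration of the principle (not the authors' implementation, and with a toy transfer function), front-to-back alpha compositing with early ray termination can be sketched as:

```python
# Minimal sketch of front-to-back alpha compositing along one viewing ray,
# the core loop of direct volume rendering (DVR). The transfer function
# below is a hypothetical stand-in, not an actual medical preset.

def composite_ray(samples, transfer_function):
    """Accumulate color and opacity front to back; stop early when opaque."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer_function(s)      # map scalar sample -> (color, opacity)
        color += (1.0 - alpha) * a * c   # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

# Toy transfer function: density above a threshold becomes semi-opaque.
tf = lambda s: (s, 0.5 if s > 0.3 else 0.0)
print(composite_ray([0.1, 0.4, 0.8, 0.9], tf))
```

Early ray termination is one of the standard optimizations that make interactive frame rates possible on large datasets.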

Figure 1. Children at the Mediterranean Museum in Stockholm using our visualization table to interactively explore a data visualization of an ancient Egyptian mummy.

Perceiving the Forces in the Microcosmos

For 30 to 40 years, visualization has been one of the most common communication methods in cell- and molecular-biology research for analyzing large amounts of data, for example, the coordinates of thousands of atoms in large dynamic molecules and the rate constants for their interactions. The objects are very small (on the nanometer scale), their shapes bear no direct resemblance to macroscopic visual metaphors, and the underlying processes are complex. Interactions at the molecular level are of fundamental importance for all aspects of life and involve the participation of many weak forces. These interactions are dynamic, complex, and stochastic in nature but lead to highly regulated processes. In fact, they are highly counterintuitive [5, 6], especially when described in text.

To explore the added value of multimodal interaction in visual learning and to enable improved understanding of molecular interactions, we created a haptic application named MolDock (see Figure 2). Molecular coordinates for a protein are imported from the Protein Data Bank (PDB files), and a 3D model of the protein and a corresponding ligand is created. The protein–ligand interaction is then mapped to force feedback on the haptic device, using our toolkit for proxy-based volumetric haptic rendering [7]. The application is implemented on a Phantom device with collocated stereoscopic visual rendering.
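As a rough sketch of the import step described above: PDB ATOM records store atom names and x, y, z coordinates in fixed columns (31–38, 39–46, 47–54, 1-indexed), so they can be read with simple slicing. The record below is illustrative, and this is not MolDock's actual loader:

```python
# Hypothetical sketch of reading atom coordinates from PDB-format text.
# Only the fixed-column ATOM/HETATM layout of the PDB format is assumed.

def parse_pdb_atoms(text):
    """Return a list of (name, x, y, z) for ATOM/HETATM records."""
    atoms = []
    for line in text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            name = line[12:16].strip()                       # atom name, cols 13-16
            x, y, z = (float(line[c:c + 8]) for c in (30, 38, 46))
            atoms.append((name, x, y, z))
    return atoms

# Illustrative (made-up) methionine backbone-nitrogen record.
sample = "ATOM      1  N   MET A   1      38.428  13.104  -1.939  1.00  0.00           N"
print(parse_pdb_atoms(sample))
```

In practice a full loader would also handle chains, alternate locations, and multiple models, but the column layout above is the core of the format.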

Figure 2. Haptic exploration of protein docking. The docking “force” is perceived through the haptic interaction and is collocated with visual stereoscopic images of the involved molecules.
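In proxy-based haptic rendering, the rendered force is, in its simplest form, a spring coupling between the device probe and a proxy point that is kept outside the constrained region. A minimal sketch of that coupling (the stiffness value and flat-surface scenario are assumptions for illustration, not the cited toolkit's implementation):

```python
# Spring coupling used in proxy-based haptic rendering: the probe is
# pulled toward the proxy, so penetration produces a restoring force.

def haptic_force(probe, proxy, stiffness=400.0):
    """Per-axis spring force (N) pulling the probe toward the proxy."""
    return tuple(stiffness * (q - p) for p, q in zip(probe, proxy))

# Probe has penetrated 2 mm below a flat surface at z = 0; the proxy
# stays on the surface, so the user feels an upward restoring force
# of about 0.8 N along z.
probe = (0.010, 0.0, -0.002)   # meters
proxy = (0.010, 0.0, 0.0)
print(haptic_force(probe, proxy))
```

In a volumetric setting the proxy is constrained by the data (e.g., by density thresholds or, as in MolDock, an interaction force field) rather than by an explicit surface, but the spring coupling to the device remains the same.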


This application was tested in several learning environments and at the Visualization Center. Through extensive evaluation, it was concluded that the haptic system helped students learn more about the process of protein–ligand recognition. It also changed the way they reasoned about molecules to include more force-based explanations. When the system was used by upper-secondary students and the audience at the Visualization Center, they seemed to grasp the principles of the process. From an information-processing standpoint, visual and haptic coordination was seen to offload the visual pathway by placing less strain on visual working memory. From an embodied-cognition perspective, visual and tactile sensorimotor interactions may promote the construction of knowledge. The system may also have prevented students from drawing erroneous conclusions about the process. The results have cognitive and practical implications for the use of multimodal VR technologies in educational contexts [8, 9].

Bringing an Audience to Space

Following a knowledgeable guide on an interactive exploration of space is the dominant form of planetarium shows, from historical planetariums with analog "star ball" projectors to the introduction of digital planetariums in the 1990s. The development of computer graphics and visualization has paved the way for interactive and immersive experiences. An early example of such software was Uniview [10], which has been developed since 2003 and is now widely used in the planetarium community. A more recent example is OpenSpace [11], a NASA-funded initiative in open-source interactive data visualization software designed to visualize the entire known universe and portray our ongoing efforts to investigate the cosmos. It supports interactive, contextualized presentations of dynamic data from observations, simulations, and space mission planning and operations.

The versatility of OpenSpace has been used to create science communication events such as the one accompanying the New Horizons spacecraft's flyby of Pluto in 2015 (see Figure 3). In a live "domecast," 11 sites were connected. Visualization of the position and workings of the spacecraft was produced using OpenSpace, effectively letting the audience fly with the spacecraft toward Pluto. At the same time, mission experts were connected by videoconference to discuss the workings of the instruments, creating a "Science Live" experience. Another recent feature of OpenSpace is the ability to visualize planetary surfaces at extreme resolution. It is now possible to bring the audience to the surface of Mars for interactive exploration of the topography at 25-cm resolution [12].
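Streaming imagery at such resolutions relies on a level-of-detail scheme in which each quadtree level halves the ground sample distance (GSD). A back-of-the-envelope sketch of the arithmetic, not OpenSpace's actual code (the circumference, tile size, and "level 0 = 2 tiles" convention are all assumptions):

```python
import math

# Rough LOD arithmetic for a quadtree globe-browsing scheme:
# find the tile level at which Mars imagery reaches 25 cm per pixel.

MARS_CIRCUMFERENCE_M = 21_339_000   # ~2 * pi * 3,396 km equatorial radius
TILE_SIZE_PX = 512                  # assumed tile resolution

def gsd(level):
    """Meters per pixel at a given quadtree level (level 0 = 2 tiles)."""
    return MARS_CIRCUMFERENCE_M / (2 * TILE_SIZE_PX * 2 ** level)

def level_for(target_gsd_m):
    """Smallest level whose GSD is at or below the target."""
    return math.ceil(math.log2(MARS_CIRCUMFERENCE_M / (2 * TILE_SIZE_PX * target_gsd_m)))

print(level_for(0.25))
```

Under these assumptions the 25-cm imagery sits around level 17, which is why out-of-core streaming and view-dependent refinement are unavoidable at these resolutions.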

Figure 3. OpenSpace was used to visualize how the spacecraft New Horizons captured images of Pluto on 14 June 2015. In a live event, sites from all over the world were connected. The image shows the New York site with host Neil deGrasse Tyson.

With OpenSpace, our work has progressed in a new direction of mediated exploranation in large-scale immersive environments. We have found that the levels of audience engagement can be remarkably high, due mainly to the immediacy of the mediated, nonlinear exploration of more or less real-time data (as opposed to a preproduced movie). However, this requires significant skill and knowledge on the part of the guide navigating space for the audience.


The OpenSpace collaboration will progressively include more features and data. In addition to advanced navigation interfaces, our current research agenda comprises implementation of tools to analyze data from the ongoing Gaia mission and improved support for dynamic, large-scale simulation data such as space weather and galaxy formation.

DESIGN PRINCIPLES FOR EXPLORANATION

Our work on exploranation in various settings has been largely explorative in itself. We have been feeling our way, designing visualizations and interactions based on a general sense of the data and the audience, and identifying and elaborating the most successful ideas. In this kind of extended process, experience accumulates over time and eventually forms the basis for articulation and abstraction. This section summarizes our main insights in four design principles for exploranation that may be of value for other designers, engineers, and storytellers involved in science communication.

In interaction design in general, the task of designing explanations entails knowing the topic, the audience, and the available media. The responsibility lies with the designer to construct an explanation that fits the communicative situation. Designing for exploration, on the other hand, is a matter of setting the stage and providing the props for users to explore the topic at hand. The designer exerts indirect influence by defining the extent of the subject matter to be explored and the properties of the exploration tools provided. Designing for exploranation, where explanation and exploration converge, becomes largely a question of accommodating mixed-initiative interaction, as the following four principles demonstrate.

Interleaving Explorative Microenvironments and Signposted Narratives

An overall structure we find conducive to exploranation makes curiosity-arousing parts of the data available for the users' self-directed exploration. Within those parts, annotations and other techniques provide local explanatory information, also under the users' control. The explorative parts are linked together into an overall narrative structure, which is crafted in advance to tell a compelling story.

In our interactive multitouch-table productions, for example, annotation callouts focus the user's attention on the parts of the 3D volume that are the most rewarding for deeper exploration. On a higher narrative level, different sections of the interactive experience are presented within an overall structure where explanatory information is presented in a navigation interface to convey the big picture.

The guided space explorations exhibit similar narrative structures, combining big-picture views (in a very literal sense!) and sweeping movement over great distances with close-up examinations of particular points of interest, such as the most detailed images of the Mars surface ever shown to the public. The framing of the presentation as a live event under the control of a knowledgeable guide makes the combination of big stories and in-depth examinations significantly more engaging than a prerecorded movie using the same narrative structure. From the audience's point of view, a skilled performer can make the experience seem remarkably explorative even though there is no actual control of navigation or focus of attention.

Constraining Explorative Interactions While Leveraging Pliability

In each explorative microenvironment, the explorative tools are carefully constrained to convey a sense of expressiveness and agency to the users while ensuring that the views resulting from manipulations are never disappointing or difficult to interpret. The interaction is highly responsive, with a tight connection between user actions and resulting outputs, forming a pliable user experience that is known to facilitate explorative behavior in interactive visualizations [13].


The tools to rotate, crop, and dynamically light the 3D volume in our interactive multitouch-table productions illustrate the combination of productive constraints and pliability. They do not offer complete freedom in manipulating orientation and perspective, but rather nudge the user toward staying within visually interesting views of the most topical content. The MolDock example illustrates the didactic benefits of extending this principle to multimodal interaction. Similarly, the need for semantic-level navigation tools in the guided space explorations instantiates the principle of productive interaction constraints in an immersive environment (more on this later).

Foregrounding the Topic through Perceptual Layering

Exploranation is facilitated by keeping the object of interest in focus in a very concrete sense. Exploration tool palettes are relegated to the visual periphery. Annotations and other complementary information are visually subordinated to the topical objects. Icons and other parts of the supporting interface employ a subdued form and color scheme. This is a general principle in interaction and information design, and it is used consistently in the examples provided above.

Supporting Performative Interaction

Engaging with an interactive installation in a public space entails the tacit taking on of different roles by audience members. These roles can include a performer navigating the interaction, a number of viewers engaging in dialogue with the performer, and a number of bystanders gaining serendipitous insights while balancing between engaging more deeply and moving on. This is quite different from designing for generic multiuser interaction [14].

In the case of interactive DVR, the interaction paradigm employed for the interactive touch tables epitomizes this principle, with tools enabling an impromptu performer to take control through highly visible physical actions. It is commonplace in museums and galleries to observe a hotspot of interest and energy forming around one of our touch tables, where a visitor is exploring a dataset in the manner of performing for his or her group. The visibility and social inclusiveness of the explorative actions invite strangers into a temporary social structure forming around objects of mutual interest.

In the guided space explorations, the role of the guide is performative in a more conventional and formalized sense, taking the audience through the story. What is more, the visually immersive environment poses unique requirements on the performer, in the visceral sense of creating a pleasant trajectory through space. Camera movement and other interactions need to be slow, smooth, and steady to avoid motion sickness and other discomfort. More research is needed to create semantic-level "virtual pilots" supporting the performer in such settings.
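One common way to keep such camera motion smooth is to low-pass filter the guide's raw input before it drives the camera, so abrupt control movements become gradual eases. A hypothetical sketch of first-order exponential smoothing (not OpenSpace's navigation code):

```python
# Exponential smoothing (first-order low-pass filter) applied to a
# stream of raw 1D control inputs; smaller alpha = smoother, slower camera.

def smooth(raw_inputs, alpha=0.1):
    """Return the filtered input stream."""
    out, value = [], 0.0
    for x in raw_inputs:
        value += alpha * (x - value)   # move a fraction of the way to the target
        out.append(value)
    return out

# An abrupt joystick step becomes a gradual ease-in instead of a jolt.
steps = smooth([1.0] * 5)
print([round(v, 3) for v in steps])  # [0.1, 0.19, 0.271, 0.344, 0.41]
```

A "virtual pilot" of the kind called for above would operate at a higher, semantic level (fly to this landmark, orbit at this rate), but some such filtering layer is still needed underneath to keep the resulting trajectory comfortable.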

CONCLUSION

Exploranation methodology is in its infancy, and further research is needed on technology, communicative strategies, and application domains to understand its possibilities and limitations. From a learning perspective, data-driven interactive visualization is a relatively new medium, with opportunities as well as challenges. It poses new and different requirements on the visualization tools and on how to guide the experience. In both cases, the aim is to instill in the user or audience a sense of curiosity and a desire to understand, while considering the diverse levels of prior knowledge in heterogeneous audiences ranging from schoolchildren, to the general public, to domain experts. It is essential to provide not only interactive modes of self-directed exploration but also intuitive and correct feedback. This requires adaptive interfaces as well as knowledgeable and sensitive guidance [15–17].

However, it is the story and how it is told that make the difference. Among the skills needed for guidance are the ability to simultaneously handle a wide range of preconceptions, adaptive storytelling, and knowledge of specific design principles for education, all without causing cognitive overload. This article points to the need for research to meet these challenges, that is, to integrate the cognitive and sociocultural perspectives and accommodate a diverse group of users learning through exploranation.


ACKNOWLEDGMENTS

The authors wish to thank the Swedish eScience Research Center (SeRC) and NASA for their support of the OpenSpace project. The support of the Swedish Research Council in funding the underpinning research is gratefully acknowledged. The Knut and Alice Wallenberg Foundation is supporting the WISDOME project through a generous donation for the development of dome productions and technology. The authors would also like to acknowledge the excellent support and contributions of the staff at the Norrköping Visualization Center C and the Division for Media and Information Technology at Linköping University.

REFERENCES

1. C. Lundström, P. Ljung, and A. Ynnerman, "Local Histograms for Design of Transfer Functions in Direct Volume Rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 6, 2006, pp. 1570–1579.

2. E. Sundén, A. Ynnerman, and T. Ropinski, "Image Plane Sweep Volume Illumination," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, 2011, pp. 2125–2134.

3. C. Lundström et al., "Multi-Touch Table System for Medical Visualization: Application to Orthopedic Surgery Planning," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, 2011, pp. 1775–1784.

4. A. Ynnerman et al., "Interactive Visualization of 3D Scanned Mummies at Public Venues," Communications of the ACM, vol. 59, no. 12, 2016, pp. 72–81.

5. K. Scalise et al., "Student Learning in Science Simulations: Design Features That Promote Learning Gains," Journal of Research in Science Teaching, vol. 48, no. 9, 2011, pp. 1050–1078.

6. M.J. Jacobson, "Problem Solving, Cognition, and Complex Systems: Differences between Experts and Novices," Complexity, vol. 6, no. 3, 2001, pp. 41–49.

7. K. Lundin et al., "Enabling Design and Interactive Selection of Haptic Modes," Virtual Reality, vol. 11, no. 1, 2007, pp. 1–13.

8. P. Bivall, S. Ainsworth, and L.A.E. Tibell, "Do Haptic Representations Help Complex Molecular Learning?," Science Education, vol. 95, no. 4, 2011, pp. 700–719.

9. K.J. Schönborn, P. Bivall, and L.A.E. Tibell, "Exploring Relationships between Students' Interaction and Learning with a Haptic Virtual Biomolecular Model," Computers and Education, vol. 57, no. 3, 2011, pp. 2095–2105.

10. S. Klashed et al., "Uniview: Visualizing the Universe," Eurographics 2010: Areas Papers, 2010, pp. 37–43.

11. A. Bock et al., "OpenSpace: An Open-Source Astrovisualization Framework," Journal of Open Source Software, vol. 2, no. 15, 2017, p. 281.

12. K. Bladin et al., "Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, 2018, pp. 802–811.

13. J. Löwgren, "Pliability as an Experiential Quality: Exploring the Aesthetics of Interaction Design," Artifact, vol. 1, no. 2, 2007, pp. 85–95.

14. P. Dalsgaard and L.K. Hansen, "Performing Perception: Staging Aesthetics of Interaction," ACM Transactions on Computer-Human Interaction, vol. 15, no. 3, 2008, p. 13.

15. D. Clark, E. Tanner-Smith, and S. Killingsworth, "Digital Games, Design, and Learning: A Systematic Review and Meta-Analysis," Review of Educational Research, vol. 86, no. 1, 2016, pp. 79–122.

16. C. D'Angelo et al., "Simulations for STEM Learning: Systematic Review and Meta-Analysis," SRI International, 2014.

17. C. Zahn, "Digital Design and Learning: Cognitive-Constructivist Perspectives," The Psychology of Digital Learning, S. Schwan and U. Cress, eds., Springer, 2017.


ABOUT THE AUTHORS

Anders Ynnerman holds the chair in scientific visualization at Linköping University, is the director of the Norrköping Visualization Center C, and is a research professor at the University of Utah. Contact him at [email protected].

Jonas Löwgren is professor of interaction and information design at Linköping University. Contact him at [email protected].

Lena Tibell is a professor of biochemistry and visual learning and communication at Linköping University. Contact her at [email protected].

Contact department editor Beatriz Sousa Santos at [email protected] and department editor Ginger Alford at [email protected].


PURPOSE: The IEEE Computer Society is the world’s largest association of computing professionals and is the leading provider of technical information in the field.

MEMBERSHIP: Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field. OMBUDSMAN: Email [email protected] COMPUTER SOCIETY WEBSITE: www.computer.org

EXECUTIVE COMMITTEEPresident: Cecilia Metra; President-Elect: Leila De Floriani; Past President: Hironori Kasahara; First VP: Forrest Shull; Second VP: Avi Mendelson; Secretary: David Lomet; Treasurer: Dimitrios Serpanos; VP, Member & Geographic Activities: Yervant Zorian; VP, Professional & Educational Activities: Kunio Uchiyama; VP, Publications: Fabrizio Lombardi; VP, Standards Activities: Riccardo Mariani; VP, Technical & Conference Activities: William D. Gropp; 2018–2019 IEEE Division V Director: John W. Walz; 2019 IEEE Division V Director Elect: Thomas M. Conte; 2019–2020IEEE Division VIII Director: Elizabeth L. Burd

BOARD OF GOVERNORS
Term Expiring 2019: Saurabh Bagchi, Gregory T. Byrd, David S. Ebert, Jill I. Gostin, William Gropp, Sumi Helal
Term Expiring 2020: Andy T. Chen, John D. Johnson, Sy-Yen Kuo, David Lomet, Dimitrios Serpanos, Forrest Shull, Hayato Yamana
Term Expiring 2021: M. Brian Blake, Fred Douglis, Carlos E. Jimenez-Gomez, Ramalatha Marimuthu, Erik Jan Marinissen, Kunio Uchiyama

2020 BOARD OF GOVERNORS MEETING
22–23 January: Costa Mesa, California

EXECUTIVE STAFF
Executive Director: Melissa A. Russell; Director, Governance & Associate Executive Director: Anne Marie Kelly; Director, Finance & Accounting: Sunny Hwang; Director, Information Technology & Services: Sumit Kacker; Director, Marketing & Sales: Michelle Tubb; Director, Membership Development: Eric Berkowitz

COMPUTER SOCIETY OFFICES
Washington, D.C.: 2001 L St., Ste. 700, Washington, D.C. 20036-4928; Phone: +1 202 371 0101; Fax: +1 202 728 9614; Email: [email protected]
Los Alamitos: 10662 Los Vaqueros Cir., Los Alamitos, CA 90720; Phone: +1 714 821 8380; Email: [email protected]

MEMBERSHIP & PUBLICATION ORDERS: Phone: +1 800 678 4333; Fax: +1 714 821 4641; Email: [email protected]

IEEE BOARD OF DIRECTORS
President & CEO: Jose M.D. Moura; President-Elect: Toshio Fukuda; Past President: James A. Jefferies; Secretary: Kathleen Kramer; Treasurer: Joseph V. Lillie; Director & President, IEEE-USA: Thomas M. Coughlin; Director & President, Standards Association: Robert S. Fish; Director & VP, Educational Activities: Witold M. Kinsner; Director & VP, Membership and Geographic Activities: Francis B. Grosz, Jr.; Director & VP, Publication Services & Products: Hulya Kirkici; Director & VP, Technical Activities: K.J. Ray Liu

revised 10 October 2019

This article originally appeared in IEEE Computer Graphics and Applications, vol. 38, no. 3, 2018.


December 2019 | Published by the IEEE Computer Society | 2469-7087/19 © 2019 IEEE

SPOTLIGHT ON TRANSACTIONS

As we take part in the revolution of big data, visualization has become an important mechanism for enabling human-in-the-loop data analytics and data-driven decision making. Since visual representations of data and information greatly appeal to the fast cognitive facilities of the human brain, they form a powerful interface to connect the human to the data and any advanced machine-learning algorithms that could be operating on them. This human–machine discourse is commonly referred to as visual analytics.

Data visualization begins with the data, which are transformed into a graphical representation by a given visualization method. At this point, the rendered visualization is just an image, which the human observer's perceptual and cognitive facilities subsequently convert into insight. But not all visualizations succeed in this crucial latter task, and many do not even entice a human to engage, that is, to look and spend the effort to study them. While the latter is not yet fully understood, it is related to aesthetics and graphic appeal.

Selecting the best visualization for a given data configuration and analytical goal is difficult. This might be surprising to some and is one of the reasons that work in this research area is so important. The visualization design begins with deciding which basic chart is best suited to the type (that is, categorical, numerical, temporal, string) and size (that is, number of columns and rows) of the data at hand. The design then continues with choosing the right set of colors, the appearance of marks, graphical detail, aspect ratio, chart orientation, and so on.

Digital Object Identifier 10.1109/MC.2019.2918513
Date of publication: 30 July 2019

Advances in Visualization Recommender Systems
Klaus Mueller, Stony Brook University

This installment of Computer's series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Visualization and Computer Graphics.


EDITOR RON VETTER, University of North Carolina Wilmington; [email protected]

Two principal factors govern the graphical design of a data visualization and promote a human's understanding of the data and the information they represent: expressiveness and effectiveness. A visualization is expressive when it is able to communicate all the information in the data; it is effective when it does so better than an alternative visualization.

Since a mainstream user cannot be expected to possess the kind of expertise needed to design the most expressive and effective chart for his or her data, automating this design task has been an active area of research. In fact, several research systems on this topic had already emerged in the late 1980s. These were essentially rule-based systems informed by contemporary fundamental research on visual information encoding. While effective, rule-based systems have two inherent limitations: coding the rules is a tedious process, and executing a rule-based system can be prone to combinatorial explosion.

The featured spotlight article is "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco" (IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, 2019, pp. 438–448) by Dominik Moritz, Chenglong Wang, Greg L. Nelson, Halden Lin, Adam M. Smith, Bill Howe, and Jeffrey Heer. It addresses this problem in a unique and forward-looking manner by taking advantage of advanced machine learning and a set of empirical studies on visualization designs that recently became available. These studies produced many user-rated visualization designs and represent a considerable amount of visualization design knowledge and best practices, which the authors use effectively. Another enabling technology is Vega-Lite, a high-level grammar for visualization design that was developed in the authors' research laboratory and described in a separate set of articles. Vega-Lite maps data to properties of graphical marks, that is, points or bars, and an associated compiler then automatically produces visualization components including axes, legends, and scales.
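As an illustration, a minimal Vega-Lite-style specification looks like the following, shown here as a Python dict rather than JSON; the data fields `category` and `amount` are invented sample values for this sketch:

```python
# A minimal Vega-Lite-style specification, expressed as a Python dict for
# illustration (Vega-Lite itself uses JSON). It maps data fields to the x/y
# channels of a "bar" mark; the Vega-Lite compiler derives axes, legends,
# and scales from such a declaration.
spec = {
    "data": {"values": [
        {"category": "A", "amount": 28},   # invented sample records
        {"category": "B", "amount": 55},
    ]},
    "mark": "bar",                          # graphical mark type
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "amount", "type": "quantitative"},
    },
}
print(spec["mark"])  # → bar
```

Note that the specification says nothing about pixels, tick marks, or legends; those are all derived by the compiler, which is what makes the grammar a convenient target for automated design tools.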

The Draco system, which is the subject of this article, uses answer-set programming in conjunction with a weight-learning method to represent and apply the visualization design knowledge gained through the aforementioned empirical studies. The design knowledge is encoded as a set of constraints used to design future visual representations. Draco uses the answer-set solver to evaluate the set of constraints when generating or suggesting a visualization. Some constraints can be soft, which enables tradeoffs between one particular visual design and another. In such cases, these soft constraints are weighted, allowing the system to compose a list of the best candidates. The weights can be set manually or learned with a RankSVM classifier. Users can then input their data, and Draco recommends a list of visualizations, ranked by their model-based preference scores.
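The ranking step can be sketched with a toy example. Draco itself expresses the constraints in answer-set programming and solves them with an answer-set solver; the constraint names, weights, and candidate charts below are invented purely to show how weighted soft-constraint violations yield preference scores:

```python
# Illustrative sketch only (not Draco's actual API): rank candidate chart
# specifications by the total weight of the soft constraints they violate.
# Lower total penalty = more preferred design.

SOFT_CONSTRAINT_WEIGHTS = {          # hypothetical constraints and weights
    "continuous_x_not_zero": 5,
    "aggregate_without_bin": 3,
    "bar_without_zero_baseline": 10,
}

def cost(violations):
    """Total penalty: sum of the weights of the violated soft constraints."""
    return sum(SOFT_CONSTRAINT_WEIGHTS[v] for v in violations)

def rank(candidates):
    """Order candidate specs from lowest to highest penalty."""
    return sorted(candidates, key=lambda c: cost(c["violations"]))

candidates = [
    {"spec": "bar chart", "violations": ["bar_without_zero_baseline"]},
    {"spec": "scatter plot", "violations": []},
    {"spec": "line chart",
     "violations": ["continuous_x_not_zero", "aggregate_without_bin"]},
]

for c in rank(candidates):
    print(c["spec"], cost(c["violations"]))
```

In Draco the weights are not hand-tuned as here but learned from pairs of user-rated designs, which is where the RankSVM classifier comes in.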

The system is also notable in that researchers can use Draco to encode findings they generate from their own (and other) empirical studies. These findings can then be captured as constraints to construct, augment, or update a new or existing visualization recommender and validator system.

These are undoubtedly exciting times for data visualization use and research. The emerging web-scale community databases of visualization designs are great enablers in the quest to expressively and effectively design visualizations of data and bring them to the mainstream arena. The development of Draco, and of other systems that may emerge, empowers everyone to generate compelling, yet truthful, visualizations of data. They can then share this information with others, enabling them to form their own opinions on the substance of the data and the information conveyed. The ensuing democratization of insight has the propensity to propel both science and society forward and lead to a better informed world as a whole.

KLAUS MUELLER is a professor with the Computer Science Department, Stony Brook University; a senior scientist with the Computational Science Initiative, Brookhaven National Laboratory; and the editor-in-chief of IEEE Transactions on Visualization and Computer Graphics. Contact him at [email protected].



This article originally appeared in Computer, vol. 52, no. 8, 2019.

Page 20: > Visualization > Robotics > Autonomous Vehicles > Edge ... · Edge Computing 43 A Disaggregated Cloud Architecture for Edge Computing -RAFAEL MORENOVOZMEDIANO, EDUARDO HUEDO, RUBEN

September/October 2018 | Copublished by the IEEE Computer and Reliability Societies | 1540-7993/18/$33.00 © 2018 IEEE

DRONES

Current US regulation is not equipped to provide explicit privacy protection for drone use in an era of sophisticated audio/video and social media. In 2016, the National Telecommunications and Information Administration recognized this deficit by releasing a set of best practices, which we examine in light of the current privacy concerns with drone use in the US.

We tend to think of drones—or, more properly, "unmanned aerial vehicles" or "UAVs"—as a new phenomenon. However, the concept of using pilotless aircraft for military purposes has been around since at least the American Civil War period. Civilians have also used UAVs for recreation since remote-controlled model aircraft were invented. Some of the problems with civilian UAV use also date back to those early days, including the risk of collision with people or property and interference with other aircraft or vehicles.

However, modern computing power and connectivity have changed both the functionality and the capabilities of drones. This article deals with one of those new risks: the potential for onboard video and audio to intrude on personal privacy.

Aerial photography has always had some capacity to affect privacy, and the prevalence of cell phone cameras as well as CCTV systems means that our images are being captured and potentially republished more than ever before. However, the use of drones, coupled with easy access to publication channels such as social media, creates particularly intense new challenges. Current drone regulations in the US provide little in the way of explicit privacy protection. While the National Telecommunications and Information Administration (NTIA) has produced "best practices" to guide users about how to manage drones in a privacy-protective way, this framework does not have the force of regulation (https://www.ntia.doc.gov/files/ntia/publications/uas_privacy_best_practices_6-21-16.pdf). There are also practical and conceptual difficulties with how some aspects of those guidelines apply to the drone environment.

A Brief History of Drones
UAVs—broadly defined as aircraft without a pilot—have an extensive history beyond what individuals have seen in headlines or their local store today. This technology has significant roots in military use before being adopted by civilians and emergency services.

Privacy and Civilian Drone Use: The Need for Further Regulation
Stephanie Winkler | Pennsylvania State University
Sherali Zeadally | University of Kentucky
Katrine Evans | Hayman Lawyers

Military Developments
UAVs have a long history of development dating back to the Civil War. When UAVs were first conceptualized, they essentially consisted of loading up balloons with explosives, and users hoped that they would land on their intended targets. Precision was impossible. Missile technology had improved significantly by World War II, but unmanned aircraft were still in their infancy. For example, the Japanese launched balloons carrying explosives with the theory that air currents would carry the balloons to a target. This was labeled a failure.1

The US experimented with UAVs in WWII as well, but instead of using balloons it used planes. A plane was loaded with enough explosives to essentially make it a bomb. Pilots would fly the plane to a certain point, where (everyone hoped) they would bail out safely, and then a nearby aircraft would guide the unmanned plane via radio control to its target. This was known as Operation Aphrodite and had mixed success.1 This approach, though, is much closer to how UAVs operate today, with someone controlling the aircraft remotely without actually having to set foot in a cockpit.

Since the 1940s, UAVs have continued to be developed by the military and have been used in multiple wars for reconnaissance missions; photography and video were integral to their operation, including surveillance of people and their movements. UAVs gained the capability of carrying weapons only around 2002, during the war in Afghanistan. It is no coincidence that around this time the budget for UAVs increased significantly, with one billion dollars allocated in the US budget for 2003.2

Civilian and Emergency Service Developments
From their origins in the 1930s, remote-controlled model aircraft rapidly became popular, including being a perennial item on Christmas present lists for both children and adults. Specialized competitions and clubs sprang up. Instructors were available to help novice pilots learn to fly their increasingly sophisticated vehicles and avoid hazards. Recent technological developments and decreasing costs of basic drones have expanded the civilian market enormously. This includes the development of commercial applications for drone use, including agriculture, oil/gas, media, real estate, and filmmaking.3 Drones also have increasing value for emergency services, where they enable close viewing without putting people at risk, for instance in police search and rescue operations in inaccessible regions, bushfires, and assessment of unstable buildings or accident sites. As of August 2016, there were roughly 20,000 drones registered with the Federal Aviation Administration (FAA) for commercial purposes. This number is expected to grow to nearly 600,000 after a change in FAA regulations made it easier for someone to be certified to pilot a drone.3 In 2016, drone sales for private use grew to $200 million, with models equipped with cameras and global positioning systems (GPS) dominating the market.4

However, regulation of drone use has been slower to develop than the market and is still arguably playing catch-up. For instance, in the US, Congress passed legislation in 2012 directing the FAA to draft rules governing the use of commercial drones by 2015. Similarly, in New Zealand, the Civil Aviation Authority did not pass rules relating to drone use (or "remotely piloted aircraft," as it formally calls them) until late 2015.

Privacy Implications
The proliferation of drone use over the past few years has significantly increased the privacy concerns related to their deployment. Most of this is because drone capabilities have increased over the years to allow for sophisticated surveillance and data collection. The issues include the following:

■ The majority of drones are equipped with—or designed to carry—cameras that record pictures and video and are capable of storing and transmitting this information either immediately or for later viewing (typically through online upload). Some are also audio capable.

■ The immediacy and normality of capturing and publishing information create a significant need for user awareness of risks and a clear ethical framework within which to act. Taking privacy-invasive material down after the event is possible (though it may still persist in some form), but the damage may well have been done by then. However, the availability of drones has galloped ahead of clear guidance or agreed etiquette—let alone formal rules—on how to use them successfully.

■ Drones are typically fairly small, especially when used by private citizens, with some capable of fitting in the palm of a hand. This allows them to be relatively invisible—that is, to enable covert surveillance—and to get into small places that a person cannot reach.

■ Drones can also reach heights that are not accessible to other forms of commonly used modern devices such as the ubiquitous cell phone camera. They can therefore capture information that is well beyond what a normal cell phone user would be able to view: they are eyes in the sky.

■ Another major factor in drone capabilities that creates additional privacy risks is their cost. Traditionally, surveillance of this type would be highly expensive, but drones today are inexpensive, with the average cost of those sold in the US in 2016 amounting to around $500.4 At this price, it is relatively easy for a large number of people to become model aircraft users, and it increases the likelihood of entry from commercial organizations.

■ One effective method of protecting the privacy of an individual when collecting data is to ensure that there are no personal identifiers within the data, either by ensuring one does not collect any personal data to begin with or by anonymizing the data (for instance, by pixelation or blurring5). However, it is unrealistic for the average personal user to engage such techniques. Even commercial entities may struggle: this form of advanced technology is relatively expensive and time consuming and would be less likely to be found on commercial or private drones. With the private use of drones, there is no good mechanism to ensure that someone will take these privacy protection measures. Furthermore, at this point in time there are very few requirements for commercial companies to protect visual data in this way.

■ While the law often does not extend to protect privacy in public places, because reasonable expectations of privacy are said to be limited there, information captured by drones may well be different because they can capture what is not otherwise visible. Expectations of privacy can still be intact.

■ Drones can capture images of people in their own homes: the place where we most obviously have a reasonable expectation of privacy. For instance, over the past few years, news outlets have reported several instances where drones have flown to someone's window or over a private residence's yard.6 Even the incursion of the aircraft is an intrusion, and when these drones are equipped with cameras, this is typically considered a privacy violation.7

■ It is difficult to know just from a visual inspection of a drone in flight whether a camera is actually on and recording. Flying a drone over someone's property cannot constitute trespassing according to a recent court case,6 though the law will differ from place to place. This makes it uncertain whether individuals can take action if they believe that someone is surveying them using a drone: proving what was recorded may be hard, and dealing with the incursion itself may also be legally problematic. As drones continue to gain technological capabilities, surveillance could be conducted far enough away from a property that the target could be completely unaware that he or she is being watched. While this is useful for law enforcement agencies, it is equally problematic when drones are available for private use. This opens the door to making certain types of criminal activity easier (for instance, stalking and peeping).

Current Regulations for Drones in the US
For reasons of space, this article considers only the US federal regulations for drones. However, regulations in other jurisdictions typically pick up similar themes and approach the issues in broadly similar ways. We recognize that many states have already passed their own drone regulation; however, much of this regulation overlaps with the FAA regulations discussed here, with few laws specifically focusing on privacy.

Regulations for drone use in the US are split in different ways. First, there are three categories of drone use: commercial, model aircraft, and military. The FAA is charged with regulating who can fly drones, pilot requirements, drone registration, and what airspace a drone is authorized to use for the two civilian groups (that is, commercial and model aircraft). The military tends to govern itself regarding the use of drones.

The FAA regulations for commercial use do not differ very much from those for model aircraft use. Some of the most noteworthy differences relate to the pilot requirements and the operating rules.

There is no pilot requirement for model aircraft use. However, to fly a drone for commercial purposes, a Remote Pilot Airman Certificate and Transportation Security Administration (TSA) vetting are required, and there is a minimum age of 16. The ease with which someone might obtain a Remote Pilot Airman Certificate depends on whether they already possess another form of pilot license; the process is much easier if they do. For a first-time pilot, however, this certificate requires proficiency in the English language and the passing of an aeronautical skills test. The certificate expires after two years, at which point the pilot must take a refresher course to renew it.

The operating rules for commercial and model aircraft uses are similar, but as with the pilot requirements, much more is explicitly stated for commercial use. The two FAA regulations that the commercial and model aircraft groups have in common are that the drone must be flown within the pilot's line of sight and must yield to any manned aircraft. Beyond these two commonalities, it appears that commercial use of drones is regulated more heavily than model aircraft use. However, after examining the community safety guidelines that the FAA refers to for the model aircraft group, it becomes clear that this is not the case. The community safety guidelines include many of the regulations imposed on commercial drone use, including not flying over groups of people and/or events. In addition to these stipulations, the safety guidelines also include some rules that the commercial regulations do not explicitly state, such as recommending that pilots not fly under the influence and not fly near emergency situations. A summary of the regulations for commercial and model aircraft use can be found in Table 1.

Privacy Best Practices ("Guidelines")
In addition to the regulations discussed earlier, the NTIA has also released a set of best practice guidelines in relation to privacy and drone use. They differ from the regulations discussed above because they are merely recommendations for self-regulation and are not required by law. These guidelines specifically refer to practices related to data collected using drones, including transparency of data collection, methods of data collection, and sharing and storage of the data.

Like the FAA regulations for the use of drones, the NTIA guidelines are separated into different sections: commercial, neighborly, and press. The press is included in addition to the first two categories in order to carve out an exemption. Newspapers and news agencies, while commercial entities, are not asked to follow the guidelines because of the First Amendment protection regarding free speech and freedom of the press. These organizations typically have their own sets of ethical guidelines, which the NTIA has referenced instead.

The first two categories, commercial and neighborly, correspond with the categories (commercial and model aircraft) that the FAA previously defined in the drone regulations. As with those regulations, the best practices for commercial entities are more substantive than those for neighborly use. However, all drone operators, regardless of whether they are commercial or model aircraft users, are advised to use the full set of guidelines whenever possible.

The major areas in the privacy guidelines are summarized in Table 2.

We discuss the main guidelines below.

Transparency and consent. Informed consent is at the heart of the guidelines. A commercial entity that expects to collect data from individuals must inform them about the potential data collection and obtain their consent. Consent is defined as "words or conduct indicating permission" and can be implied or explicit. As a general principle, this is of course good practice, though the specifics outlined in the guidelines decrease the power of this principle.

The recommended method for notifying individuals about data collection practices is the posting of a privacy policy that outlines the purpose for which the data is collected, what the data will be used for, what kind of data will be collected, how this data will be shared and with whom, how to submit complaints, and how the entity responds to law enforcement requests (https://fpf.org/wp-content/uploads/2016/06/UAS_Privacy_Best_Practices_6-21-16.pdf). To some extent, this meets transparency requirements (subject to the limitations below), but calling it "consent" is problematic.

First, privacy statements historically are not written with the best interest of the individual in mind, but with the purpose of protecting the company from litigation. They typically take the form of extremely lengthy documents written in high-level language.8,9 As a result, very few individuals actually read them.10 Assuming informed consent from an individual on the basis of a privacy policy alone is not going to be very effective in practice.

Second, and even more tellingly, there is the problem of where the privacy policy would be posted. Typically, when privacy policies are discussed, it is in reference to an online platform or to the terms and conditions of some technology. In-time communication is relatively easy in such circumstances: the privacy policy is easily accessible via a web link. For physical products, the seller can include a piece of paper with the technology. However, drones involve a third party using a particular technology to potentially collect information about others. While the guidelines do state that the entity should make reasonable efforts to notify individuals before the data collection takes place, they do not state that it should include its privacy policy in that notification. Even if one implies that it must, and even if that policy is publicly available, it does not mean the policy is easy to find or that it is reasonable and practicable for individuals to take the necessary steps to go and find it. Frankly, it is often impractical for drone users to inform people before or at the time of collection that the collection is occurring (the collection may be incidental to the purpose of using the drone), and tracing the relevant individuals after the event to ask for consent may not always be possible.

Table 1. FAA civilian drone regulations.*

Pilot requirements
  Commercial: Must have passed TSA vetting; must have a Remote Pilot Airman Certificate; must be over the age of 16
  Model aircraft: None

UAV requirements
  Commercial: Must be under 55 lbs, or be registered if over 0.55 lbs; must undergo preflight check
  Model aircraft: Must be under 55 lbs, or be registered if over 0.55 lbs

Location
  Commercial: Class G airspace
  Model aircraft: Five miles from an airport (without prior authorization)

Operating rules
  Commercial: UAV must yield to any manned aircraft; stay within line of sight; fly under 400 feet; fly at or below 100 mph; not fly over people; not fly from a moving vehicle
  Model aircraft: UAV must yield to any manned aircraft; stay within line of sight; follow community-based safety guidelines

* Table adapted from the Federal Aviation Administration website: https://www.faa.gov/uas/getting_started.
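The commercial operating rules summarized in Table 1 lend themselves to a simple checklist. The sketch below is illustrative only: the field names are invented for this example, and real compliance with the FAA rules involves far more than these checks:

```python
# Illustrative only: encode the commercial operating limits from Table 1
# as a simple preflight checklist. The flight-plan field names are invented
# for this sketch.

def violations(flight):
    """Return the Table 1 commercial rules a planned flight would break."""
    problems = []
    if flight["altitude_ft"] > 400:
        problems.append("must fly under 400 feet")
    if flight["speed_mph"] > 100:
        problems.append("must fly at or below 100 mph")
    if not flight["within_line_of_sight"]:
        problems.append("must stay within line of sight")
    if flight["over_people"]:
        problems.append("must not fly over people")
    if flight["from_moving_vehicle"]:
        problems.append("must not fly from a moving vehicle")
    return problems

flight = {"altitude_ft": 450, "speed_mph": 60, "within_line_of_sight": True,
          "over_people": False, "from_moving_vehicle": False}
print(violations(flight))  # → ['must fly under 400 feet']
```

Notably, no analogous checklist can be written for the privacy guidelines discussed next, precisely because they are recommendations rather than bright-line rules.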

The third problem lies with the notion of consent itself. Consent, as outlined in these guidelines, can be explicit or implied. If the entity makes a reasonable effort to notify individuals about data collection, it appears that the entity would be able to assume that no response from these individuals means that they consent. This is more commonly known as "opting out."

When an individual must opt in to a program, the assumption is that unless the entity hears something from the individual, the answer is "no." Realistically, the nature of the activity and issues of timing mean that individuals are not likely to be given the choice to opt in, however preferable this might be from a privacy perspective.11

Transparency is an important privacy protection, but there are real questions about the practicality of making that information available and the validity of asking for consent in the drone environment. Relying on a privacy policy alone to establish implied consent is conceptually problematic. However, creating mechanisms to obtain consent is useful and worthwhile if the drone use is a planned activity (such as filming a sporting event) where it is obvious that information about individuals will be captured, and particularly where the individuals may be the subjects of the footage.

When images of identifiable individuals are captured incidentally or accidentally as part of the drone activity, though, consent may be impracticable, and reliance on a privacy policy to provide consent does not work at all. Notice and choice are simply insufficient to protect privacy. Much greater emphasis needs to be placed on the other safeguards covered in the guidelines.

Covered data. The document is very specific about what types of data are "covered." Covered data is any data that "identifies a particular person." If the information cannot be linked to an individual's name or any other personally identifiable information (PII), then it is not covered data. This is a significant departure from current privacy laws and policy, where protected data is limited to specifically defined types of personally identifiable information. It goes beyond what is traditionally protected. In our view, it does this in a way that appears justified and necessary in the context.

Historically, in US law (though not in jurisdictions with broader privacy laws), protecting PII is closely related to protecting against identity theft. This means that the items included in the definition of PII are typically limited to name, Social Security number, driver's license number, bank account numbers, and address. It does not include still or moving images or audio files, which are covered by the guidelines. At this time, multiple technologies exist that can identify an individual based on various attributes including voice and likeness (for example, image).12 This expanded definition of PII is necessary when discussing drones because the traditional forms of data collected by drones would not be covered by current definitions of protected data. The guidelines in this case provide more privacy protection than many laws (for example, the Telecommunications Act) today.

Table 2. Privacy best practice summary.

Privacy policy:
– Should be publicly posted
– Should contain information regarding:
  – The purpose for the data collection
  – Types of covered data collected
  – Data storage and anonymization practices
  – Procedures for submitting complaints/concerns
  – Responses to law enforcement requests

Data collection: Operators should not:
– Collect covered data where the individual has an expectation of privacy*†
– Continuously collect covered data of individuals*†

Data storage: Operators should avoid retaining data longer than necessary*†

Data security: Operators should develop an information security policy from one of these known standards:
– Federal Trade Commission
– NIST Cybersecurity Framework
– ISO 27001 standard for information security management

Data use: Data should not be used for:
– Employment eligibility, retention, or promotion†
– Credit eligibility†
– Healthcare treatment eligibility†
– Any purpose not listed in the privacy policy
– Marketing purposes†

Data sharing: Data should not be shared:
– With anyone not listed in the privacy policy
– Publicly†
– For marketing purposes†

* Unless there is a compelling reason to do otherwise. † Unless consent is obtained. Table constructed from the National Telecommunications and Information Administration Privacy Best Practices for UAS.

The guidelines cover only information that is capable of leading to an individual's identification. They state that information is not covered if it "likely will not be linked to an individual's name or any other personally identifiable information, or if the data is altered so that a specific person is not recognizable." However, it is well known that anonymization is not effective in protecting an individual's privacy: see, for instance, the studies on re-identifying individuals from combined datasets.13 Facial recognition technologies are improving significantly, including image recognition that can identify people out of photographs posted on social media. There are currently methods that can permanently obscure an image, but these require an extra step after data collection.5
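The linkage attacks behind such re-identification studies13 can be illustrated with a minimal sketch. Everything here is invented for illustration (the field names, records, and the `reidentify` helper are not from any real dataset): records stripped of names can still be matched against a public auxiliary dataset on shared quasi-identifiers.

```python
# Hypothetical sketch of re-identification by record linkage.
# All data and field names below are fabricated for illustration only.

# "Anonymized" drone footage log: names removed, quasi-identifiers remain.
footage_log = [
    {"zip": "40506", "age": 34, "gender": "F", "clip": "clip_017.mp4"},
    {"zip": "40502", "age": 61, "gender": "M", "clip": "clip_018.mp4"},
]

# Public auxiliary dataset (e.g., a voter roll) that includes names.
voter_roll = [
    {"name": "A. Smith", "zip": "40506", "age": 34, "gender": "F"},
    {"name": "B. Jones", "zip": "40503", "age": 29, "gender": "M"},
]

def reidentify(anon_records, aux_records, keys=("zip", "age", "gender")):
    """Link records whose quasi-identifiers match exactly."""
    matches = []
    for a in anon_records:
        for b in aux_records:
            if all(a[k] == b[k] for k in keys):
                matches.append((b["name"], a["clip"]))
    return matches

# A unique match on (zip, age, gender) links a name back to a
# "de-identified" clip, even though the clip log contains no names.
linked = reidentify(footage_log, voter_roll)
```

The point of the sketch is that removing names alone does not make data "not linkable to an individual" in the sense the guidelines require; a handful of innocuous attributes can be enough.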

It is foreseeable that many entities that apply the guidelines will underestimate the likelihood of individuals being identified, for instance, if they post the footage on social media (which is permitted only if necessary, as we pointed out earlier in discussing consent and necessity). It is common for agencies to look at such clauses through the lens of their own intentions rather than at what is objectively possible in the context; that is, if the agency itself does not intend to identify those individuals, it may assume that the data is not covered by the guidelines. It may therefore be beneficial to expressly alert unmanned aerial system (UAS) operators to the need to take care when publishing images in situations that may allow others to identify individuals who are featured. This includes using images in advertising or posting them online.

Data collection, storage, and sharing practices. The remainder of the privacy guidelines focuses on the collection, storage, and sharing of covered data. In general, unless the operator of the drone has received the explicit consent of the individual or has a compelling reason, he or she should refrain from collecting data. The guidelines do not define the term "compelling reason," in order to give the practices more flexibility for the different entities that would use them.

Using a strong term such as "compelling reason" is a useful signal that if consent is not possible, there must be a clearly justifiable reason for the collection. However, not defining this term hurts the individuals who could be the subject of data collection, because what they would consider a "compelling reason" will not always match the operator's definition. The decision not to define this term leaves far more room for interpretation than is necessary for this document.

Instead, the guidelines could make it plain that a “compelling reason” will only exist where:

■ the personal information is collected for a lawful purpose;

■ the data collection with drones is genuinely relevant to what the agency does;

■ the potential for an individual to suffer harm from a breach of their privacy is limited; and

■ there is no realistic and less intrusive alternative to collecting this information in order to meet the agency’s aim—that is, the collection is effective and proportionate.

Drone operators should also avoid retaining collected data for an extended period of time. When the data is retained, there should also be efforts to keep this data secure. Specifically, the guidelines state that the entity should "implement a program [with] reasonable administrative, technical, and physical safeguards appropriate to the operator's size and complexity, the nature and scope of its activities, and the sensitivity of the covered data." This language is quite vague and bureaucratic. Both the words reasonable and appropriate are open to interpretation: what seems reasonable or appropriate to one operator may seem excessive to another. Because the guidelines never define either of these terms, there is no unified expectation for privacy protections.

The guidelines recognize the problem and refer to several reputable organizations (that is, the Federal Trade Commission, the NIST Cybersecurity Framework, and the ISO) that operators can use as resources to develop a good information security policy. Given that these are privacy best practices and not regulations, referencing these standards of information security is helpful. However, smaller drone operators may struggle to unpack what these external resources mean (particularly the NIST and ISO frameworks) and how they can apply the relevant points to their own situations. If the guidelines are to be observed, it is important to make life easier for the small businesses and individuals who will use them. We suggest that the guidelines could distill the main points from the external guidance and communicate them clearly to users (perhaps in the form of a "top ten" list of things to check or implement).

The final area that the best practices discuss is how data should be shared and used. While several other areas of the best practices are vague, they are more explicit with regard to how this data may be used. Specifically, data collected by drones cannot be used (without consent) to determine "employment eligibility, promotion, or retention; credit eligibility; health care treatment eligibility." The exception is if such use is permitted by another regulatory framework. This is a logical and necessary exception, but given the patchwork nature of US laws and regulations, finding out what other laws might apply is not always easy in practice.

Apart from those specific areas, the operator should not use or share the information outside of circumstances specified in the privacy policy. Clearly, operators need to comply with what they have said they are going to do; to do otherwise would be misleading. The reference is therefore unobjectionable, and is possibly a useful reminder for the operator, but it does not provide any further privacy protection. As mentioned earlier, relying on a privacy policy for any type of consent is not in the best interest of the individual, because of the elevated language typically used and the low likelihood that anyone will read it.8,10

As for the public release of the data collected, operators are advised not to do so unless necessary. If they are required to publicly release information, they should take steps to anonymize the data. As stated earlier, this is not a good measure of privacy protection.

Interaction between FAA Regulations and NTIA Best Practices

As briefly mentioned earlier, the privacy guidelines were released by the NTIA, not the FAA. This is because the FAA does not have the authority to regulate activities related to the collection and handling of data. Whether it can issue nonbinding guidance is perhaps moot: it does not appear that it considers the issuing of privacy guidance part of its core role. Instead, this task falls to the NTIA, which released the guidelines after consulting with multiple stakeholders concerning the operation of drones.

The fact that two different agencies are regulating and advising on the topic has some implications for privacy in the use of drones. It is not necessarily obvious to operators (particularly model aircraft users, but even small business operators) where to look for the rules or advice. It may also not be obvious what the status of documents produced by each body may be. Confusion about what is a required rule and what is voluntary best practice is not helpful, and can raise risks of noncompliance or of risk aversion and excessive compliance costs. Even in the NTIA guidelines, there is some confusion. The only section of the privacy guidelines that directly refers to the FAA is section 2c, which states (emphasis ours):

Where it will not impede the purpose for which the UAS is used or conflict with FAA guidelines UAS operators should make a reasonable effort to minimize UAS operations over or within private property without consent of the property owner or without appropriate legal authority.

Yet, as we know, the FAA has imposed regulations on commercial and model aircraft operators, not guidelines. Those regulations have the force of law and must be followed, or the operator will suffer consequences.

However, more positively, several of the FAA regulations reduce some of the concerns raised earlier about protections in the privacy best practices. First, the privacy guidelines state that the operator must have in place some way for individuals to file complaints or request that their data be deleted. There is a concern that it may not be easy for an individual to find this particular information. It should be in the privacy policy, but unless the individual knows who the drone operator is, this may be of little use. However, an FAA regulation offers a slight bit of help by requiring every drone over 0.55 lbs to be registered. If this registration information is publicly accessible, and one can visually identify the registration on the drone from a distance, then it would be possible for the individual to look up the operator from the registration. However, we note that that is a lot of "ifs": the chances of everything lining up are not particularly high.

Second, the drone must always remain within visual distance of the operator (without aided vision such as binoculars or a scope). Although this does not stop operators from hiding from those observed, it provides more of an opportunity for individuals to confront them to file a complaint than if they were a few miles away from the drone. As we noted earlier, though, technological improvements make it possible to obtain clear images of individuals while the drone is a long way from the individual. The line of sight rule is important for safety, as it makes it more likely that the UAV will be flown competently, but it is less useful for privacy. The increasing number of near-misses between drones and piloted aircraft, and the difficulties that the authorities experience with tracking the drone operators responsible, make it plain that the ordinary individual faces an uphill struggle in identifying operators, finding out whether a privacy violation has occurred, and holding the operator to account if it has.

Policy Recommendations

The privacy guidelines released by the NTIA follow in the tradition of US privacy policy in that there is an expectation of industry self-regulation. However, if the web platforms are any indication of what self-regulation looks like, it typically does very little to protect individuals. Research shows that industry self-regulation at this time has no impact on preventing data breaches. In addition, there is a proven track record of noncompliance among companies registered with self-regulatory programs, yet no clear indication of how this noncompliance is addressed.14 At the very least, the guidelines need amendment to enhance the privacy protection that they offer while also recognizing the significant benefits that drone technology provides for both commercial enterprises and model aircraft users.

The effectiveness of voluntary guidelines will always be limited, however, and it is important to eliminate the confusion that currently exists between what is law and what is optional. But the biggest weakness of self-regulation is the lack of an enforcement mechanism. Individuals have no guarantee that drone operators will follow industry best practices, and no clear action is specified if it is determined that these best practices have been violated. This concern has been echoed by both potential bystanders to drone operation and drone operators.15 We therefore believe that it is better for the US to adopt and enforce privacy regulation concerning the use of drones. The regulation should cover many of the matters in the guidelines, including development of accessible privacy policies, the need for consent where practicable, limits on when personal information can be collected (so that situations where consent is not possible are governed), limitations on use and disclosure, and rules about retention. Any privacy protection mechanism should consider the privacy of bystanders as well as drone operators, because some potential methods for enforcement could impact the privacy of drone operators by subjecting them to data collection.15 Federal regulation is recommended to ensure that all individuals who reside in the US have a minimum standard of privacy protection, and for the purposes of clarity for drone operators. As with other privacy regulation today, state legislators still have the option to enact stronger privacy protections as they deem necessary.

These recommendations are more in line with how European law deals with privacy but are still workable in the US context. Even so, there are potential barriers that could significantly impact the implementation of our policy recommendations. First, the political climate of the US makes it increasingly difficult to pass legislation at the federal level. Maintaining consistent regulation outside of legislation is just as difficult because many regulatory agencies are dependent on Presidential appointments for their positions. As can be seen with the Federal Communications Commission's attempt to reverse the net neutrality policies introduced in 2014, regulation could shift depending on which party holds power within the regulatory agency. Second, harming innovation is frequently cited as a barrier to privacy protections. We recognize that our policy recommendations could potentially harm smaller drone operators. However, this is a potential harm with any regulation that is introduced. In this situation, we believe that the benefits outweigh the costs.

It is our view that appropriate and proportionate regulation of drone use to protect personal privacy is becoming critically important as drones continue to proliferate and their commercial and recreational uses become more varied. The pressure on airspace and the potential for intrusions into personal space are only going to grow greater.

UAVs represent a significant opportunity for developing new business processes and services, and it is important not to stifle innovation. However, the ability to innovate will be undermined if drones are operated in an untrustworthy manner. Considering regulation through a privacy lens as well as a safety lens enables us to gain the benefits from UAVs while avoiding the worst of their potential harms.

The current combination of the FAA regulations and NTIA guidelines does not go far enough to provide clarity and certainty for users or for the public, who may be the subjects of either deliberate or accidental drone surveillance. More needs to be done in both fields, but it does not seem likely that anything short of regulation, where there are consequences for failure to comply, will be effective enough to drive safer and more trusted behavior in this fast-moving environment.

References

1. J. Garamone, "From the U.S. Civil War to Afghanistan: A Short History of UAVs," Army Communicator, vol. 27, no. 2, 2002, pp. 63–64.

2. J. Garamone, "Unmanned Aerial Vehicles Proving Their Worth over Afghanistan," Army Communicator, vol. 27, no. 2, 2002, pp. 62–63.

3. A. Selyukh, "FAA Expects 600,000 Commercial Drones in the Air within a Year," NPR, 29 Aug. 2016; http://www.npr.org/sections/thetwo-way/2016/08/29/491818988/faa-expects-600-000-commercial-drones-in-the-air-within-a-year.

4. "Drone Dollar Sales for the Past 12 Months Were Three Times Higher than Sales from Prior Year," NPD Group, 25 May 2016; https://www.npd.com/wps/portal/npd/us/news/press-releases/2016/year-over-year-drone-revenue-soars-according-to-npd.

5. J. Cichowski and A. Czyzewski, "Reversible Video Stream Anonymization for Video Surveillance Systems Based on Pixels Relocation and Watermarking," IEEE International Conference on Computer Vision Workshops, 2011; doi:10.1109/ICCVW.2011.6130490.

6. A. Peterson and M. McFarland, "You May Be Powerless to Stop a Drone from Hovering over Your Own Yard," Washington Post, 13 Jan. 2016; https://www.washingtonpost.com/news/the-switch/wp/2016/01/13/you-may-be-powerless-to-stop-a-drone-from-hovering-over-your-own-yard/.

7. V. Chang, P. Chundury, and M. Chetty, "Spiders in the Sky: User Perceptions of Drones, Privacy, and Security," Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 6765–6776.

8. I. Pollach, "A Typology of Communicative Strategies in Online Privacy Policies: Ethics, Power and Informed Consent," Journal of Business Ethics, vol. 62, no. 3, 2005, pp. 221–235.

9. S. Winkler and S. Zeadally, "Privacy Policy Analysis of Popular Web Platforms," IEEE Technology & Society Magazine, vol. 35, no. 2, 2016, pp. 75–85.

10. R. Malaga, "Do Web Privacy Policies Still Matter?," Academy of Information & Management Sciences Journal, vol. 17, no. 1, 2014, pp. 95–99.

11. D.L. Rosen et al., "Comparing HIV Case Detection in Prison during Opt-In vs. Opt-Out Testing Policies," Journal of Acquired Immune Deficiency Syndromes, vol. 71, no. 3, 2016, pp. 85–88.

12. P.J.S. Vega et al., "Single Sample Face Recognition from Video via Stacked Supervised Auto-Encoder," Proceedings of the 29th SIBGRAPI Conference on Graphics, Patterns and Images, 2016, pp. 96–103.

13. J. Unnikrishnan and F. Naini, "De-anonymizing Private Data by Matching Statistics," 51st Annual Allerton Conference on Communication, Control, & Computing, 2013; doi:10.1109/Allerton.2013.6736722.

14. S. Listokin, "Industry Self-Regulation of Consumer Data Privacy and Security," John Marshall Journal of Information Technology & Privacy Law, vol. 32, no. 1, 2015, pp. 15–32.

15. Y. Yao et al., "Privacy Mechanisms for Drones: Perceptions of Drone Controllers and Bystanders," Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 6777–6788.

Stephanie Winkler is a PhD candidate at The Pennsylvania State University in Information Sciences and Technology. Her current research interests include privacy, security, human–computer interaction, and fake news and misinformation. She received a master's in communication from the University of Kentucky. Contact her at [email protected].

Sherali Zeadally is an associate professor in the College of Communication and Information at the University of Kentucky. His research interests include cybersecurity, the Internet of Things, privacy, and computer networks. Zeadally received a doctoral degree in computer science from the University of Buckingham, England. He's a Fellow of the British Computer Society and the Institution of Engineering and Technology, England. Contact him at [email protected].

Katrine Evans is a privacy lawyer in Wellington, New Zealand, and is currently a senior associate at Hayman Lawyers. Before going into private legal practice, Evans was Assistant Privacy Commissioner for more than a decade, including acting as the Commissioner's general counsel and managing the policy and technology team. Contact her at [email protected].

This article originally appeared in IEEE Security & Privacy, vol. 16, no. 5, 2018.

2469-7087/19 © 2019 IEEE Published by the IEEE Computer Society December 2019 27

The Robotization of Extreme Automation: The Balance Between Fear and Courage

In our last column,1 we illustrated how the FabLabs network can play an important role in shaping the future of manufacturing and automation. The focus was on people taking a collaborative role to design and build the most extraordinary things using tools and techniques that are available at FabLabs. In this direction, FabLabs is a platform for a community of makers who collaborate and learn from each other to build highly specialized products, rather than settle for legacy mass-production products. In this column, however, we are shifting our focus to the use of robots for extreme automation. Indeed, "none of humanity's creations inspires such a confusing mix of awe, admiration, and fear as those associated with robots. We want robots to make our lives easier and safer, yet we can't quite bring ourselves to trust them. We're crafting them in our own image, yet we are terrified they'll supplant us."2 Fears about the outcome of using robots have reached an apocalyptic extreme, resembling the biblical Apocalypse or the end of the world.3 These reactions describe the wind of change related to automation and manufacturing, as many prominent people are asking important "what if" questions: What happens if a new technology based on robotics causes millions to lose their jobs in a short period of time? What if most companies simply no longer need many human workers? Bill Gates, as well as many politicians, believes that governments should tax companies for using robots instead of people, as a way to compensate for the loss in employment and to fund retraining.4 Others, such as legislators and lawmakers, are trying to develop robot law.5 Indeed, the notion that workers' skills can suddenly become obsolete

Jinan Fiaidhi
Sabah Mohammed
Lakehead University

Sami Mohammed
Victoria University

Editor: Jinan Fiaidhi, Lakehead University; [email protected]

The line between industrial robot tasks and collaborative robot tasks is beginning to blur, and collaborative robots may be poised to take market share from industrial robots. With the added capabilities of working safely with humans, as well as being smart enough to achieve common goals, cobots and microbots are changing our future.

DEPARTMENT: EXTREME AUTOMATION

IT Professional, November/December 2018

Published by the IEEE Computer Society 1520-9202/18/$33.00 © 2018 IEEE

highlights a lively debate over how much and how fast technology will take over the workplace, and there's concern about whether governmental institutions and social safety nets are equipped to deal with this increasing problem. These fears and concerns are growing in pace and scale, which is likely to be highly disruptive, and so the growing anxiety cannot be easily wished away. Thus, many policies have begun to circulate, some still controversial, such as universal basic income, negative income tax, and the government job guarantee, among others. The fact that we have got a lot of jobs, including new types of jobs, is a good thing, and automation has proved to be beneficial in general.6 We cannot surrender to our fears, and we have to have the courage to make balanced decisions in supporting robotization, as well as dealing with the cost associated with this new ecosystem. As Winston Churchill said, "Fear is a reaction. Courage is a decision." We need to make decisions to move forward boldly by rebooting our thinking on automation, taking the stuff from the past that still works, and remixing it for this new world. The most important thing that we need to keep in mind is that as robots and machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values. We also need to learn how to work alongside the robots that have replaced many of our jobs. Robots have contributed to the new technological initiative called Industry 4.0, or the artificial intelligence (AI) revolution, as they provide the arm for implementing smart applications that interact with the physical world. This means that, by implication, the AI paradigm is based on two worlds (virtual and physical), as Figure 1 illustrates, where robotics is an important player in the AI physical world.

ROBOTS FOR EXTREME AUTOMATION

Robots are coming as the new workforce of manufacturing, working alongside humans. They are mainly classified as industrial robots, and typically they are articulated arms featuring six axes of motion (six degrees of freedom). This design allows for maximum flexibility. There are six main types of industrial robots: Cartesian, SCARA, Cylindrical, Delta, Polar, and Vertically Articulated. However, there are several additional types of robot configurations, each offering a different joint configuration. The joints in the arm are referred to as axes. Typical applications of industrial robots include welding, painting, assembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing, all accomplished with high endurance, speed, and precision.
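To make the notion of axes and joints concrete, an arm's end-effector position is computed from its joint angles via forward kinematics. The following is a minimal sketch for a planar two-joint arm, a deliberate simplification of the six-axis arms described above; the link lengths and angles are arbitrary illustrative values, not parameters of any real robot.

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) of a planar 2-link arm.

    l1, l2: link lengths; theta1: first joint angle from the x-axis;
    theta2: second joint angle relative to the first link (radians).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# First link straight up, second link folded back to horizontal:
x, y = forward_kinematics(0.5, 0.3, math.pi / 2, -math.pi / 2)
# The end effector sits at roughly (0.3, 0.5).
```

A six-axis arm extends the same idea to three dimensions with six chained joint transforms, which is what gives these robots their flexibility in reaching arbitrary positions and orientations.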

Figure 1. Robots as part of the physical AI world.

Robot adoption in the manufacturing industry increased by an annual global average of 12% between 2011 and 2016, concentrated heavily in the automotive and electronic/electrical manufacturing sectors. According to the International Federation of Robotics, more than 3 million industrial robots will be in use in factories around the world by 2020.7 Global sales of industrial robots reached a new record of 387,000 units in 2017, an increase of 31 percent compared to the previous year (2016: 294,300 units). China saw the largest growth in demand for industrial robots, up 58 percent; sales in the USA increased by 6%, and in Germany by 8%, compared to the previous year. These are the initial findings of the World Robotics Report 2018.8
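As a quick sanity check on the figures above, applying the rounded 31% growth rate to the 2016 baseline should land near the reported 2017 total:

```python
# Cross-checking the robot sales figures quoted from the World Robotics
# Report 2018 (this is only an arithmetic sanity check, not new data).
units_2016 = 294_300
growth = 0.31  # 31% year-over-year, as rounded in the text
units_2017_est = units_2016 * (1 + growth)
# about 385,500 units, consistent with the reported record of 387,000
# (the exact ratio 387,000 / 294,300 is closer to 31.5%, rounded to 31%).
```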

However, the customer demand for greater product variety is driving an increase in low-volume, high-mix manufacturing. Industrial robots, although they can be programmed to work for different settings,they cannot adapt and rapidly repurpose their production facilities in the same demanding speed. Theneed for smart factories, in which machines are digitally linked, is ever growing with the new notion ofIndustry 4.0.9 Smart factories create a supply chain that shortens product development time, repurposequickly, reduce product defects, and cut machine downtime. The smart factory ecosystem is used todescribe the seamless communication between all levels of extreme automation to a fully connectedand flexible system—one that can use a constant stream of data from connected operations andproduction systems to learn and adapt to new demands. Central to the issue of such ecosystem is thenotion of “smart manufacturing units (SMU),” which signifies the opportunity to drive greater valueboth within the four walls of the factory and across the supply network. Every SMU is a flexiblesystem that can self-optimize performance across a broader network, self-adapt to and learn from newconditions in real or near-real time, and autonomously run entire production processes. SMUs canoperate within the four walls of the factory, but they can also connect to a global network of similarproduction systems and even to the digital supply network more broadly. SMU enables all informationabout the manufacturing process to be available when it is needed, where it is needed, and in the formthat it is needed across entire manufacturing supply chains, complete product lifecycles, multipleindustries, and small, medium, and large enterprises. In fact, the new advances in collaborative robotsand assistive technologies such as exoskeletons expand the scope of tasks robots can perform insupport of the SMUs integration and adaptivity. 
Other significant advances are also adding to the momentum of smart factories, such as "Cloud Robotics," "Robot as a Service," and microbots. In the next section, we shed some light on some of these advances that have an important effect in increasing the uptake of robots for extreme automation. We may address some of the remaining issues in future columns as they relate to other aspects of the extreme automation initiative.

Advanced Low-Cost Robotics for Manufacturing

Current industrial robots are still pretty rudimentary, and expensive. They have typically been large, caged devices that perform repetitive, dangerous work in lieu of humans. These robots suffer from many problems, such as vision issues (robots do not have the ability to identify and navigate around objects, including people) and dexterity issues (their gripping, maneuvering, and mechanical capabilities are still limited). As digital technology advances and the development of autonomous vehicles drives down the cost of off-the-shelf hardware, smaller, more dexterous robots have come onto the factory floor. Low-cost robot designs have poured in from around the world in response to initiatives like the Kickstarter campaign (www.kickstarter.com) advocating a basic low-cost industrial arm for nontraditional markets like FabLabs. These lighter weight, lower cost robots can be outfitted with sensors that allow them to work collaboratively alongside humans in industrial settings, creating "cobots" or "FabLab robots": robots that can perform tasks like gripping small objects, seeing, and even learning to tackle "edge cases." Niryo One (https://niryo.com/), one example from the Kickstarter campaign, is a low-cost six-axis industrial robotic arm made for makers, education, and small factories. The robot is 3-D printed and powered by Arduino, Raspberry Pi, and the Robot Operating System (ROS); its STL files and code are open source. Niryo One is also the first industrial robot that can be connected to your home and accessed and controlled via the Internet. Its introductory price is less than $1000. Another example is the Franka Emika (www.franka.de), a rather remarkable cobot arm. It is designed to be easy to set up and program, can operate right next to people, assisting them with tasks without posing a risk, and can build copies of itself.
EXTREME AUTOMATION

November/December 2018 89 www.computer.org/itpro

30 ComputingEdge December 2019

Now, all the big industrial robot makers are trying to develop their own cobots, but the most innovative designs have come from startups like Rethink Robotics, who pioneered the dual-arm Baxter robot in 2012 and, later, the single-arm robot called Sawyer. Both cobots are simple to use, as their Intera software platform provides a train-by-demonstration experience unlike any other cobot: you can train these cobots by simply moving their arms and demonstrating the movements. With the introduction of Intera 5, Rethink Robotics has created the world's first smart robot that can orchestrate an entire work cell, removing areas of friction and opening up new and affordable automation possibilities for manufacturers around the world. Intera 5 drives immediate value while helping customers work toward a smart factory and, for the first time, providing a gateway to a successful industrial internet of things (IIoT).10

Collaborative robots are now covered by ISO 10218, which defines five main collaborative robot features: safety monitored stop, speed and separation monitoring, power and force limiting, hand guiding, and risk assessment.11

1. Power and force limiting: Robot force is limited through electrical or mechanical means.

2. Safety monitored stop: The robot stops when the user enters the area and continues when they exit.

3. Speed and position monitoring: The robot slows down when the user nears and stops if the user gets too close.

4. Hand guiding: The user is in direct contact with the robot while guiding and training it.

5. Risk assessment: A risk assessment of the application should be done to determine all possible risks, and proper devices and procedures to mitigate those risks should be implemented.
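The speed and position monitoring behavior in the list above can be illustrated with a toy controller. This is only a minimal sketch: the function name, distance thresholds, and the linear speed ramp are invented for illustration and are not taken from the standard.

```python
def cobot_speed_factor(distance_m, slow_zone_m=2.0, stop_zone_m=0.5):
    """Toy speed-and-separation monitor: full speed when the person is
    far away, proportionally slower inside the slow zone, and a full
    stop inside the stop zone (hypothetical thresholds)."""
    if distance_m <= stop_zone_m:
        return 0.0          # monitored stop: person is too close
    if distance_m >= slow_zone_m:
        return 1.0          # full programmed speed
    # Linear ramp between the stop zone and the slow zone.
    return (distance_m - stop_zone_m) / (slow_zone_m - stop_zone_m)

print(cobot_speed_factor(3.0))   # 1.0: person far away
print(cobot_speed_factor(1.25))  # 0.5: halfway through the slow zone
print(cobot_speed_factor(0.3))   # 0.0: stopped
```

A real implementation would, of course, derive its zones and stopping distances from the risk assessment described in item 5.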

Many cobots have successfully followed these guidelines and achieved strong sales among manufacturing companies, such as the cobots from Universal Robots (https://www.universal-robots.com/).

Robotization Trend: From Cobots to Smart Microbots

There are many other low-cost industrial cobots that you can find with a simple search, such as the KINOVA ultra lightweight robotic arm (www.kinovarobotics.com), which weighs just 4.4 kg and gives workers performance, flexibility, and ease of use in one plug-and-play package, developed with an open architecture and compatible with ROS. However, the main challenge with cobots is not just the hardware and following the ISO 10218 standard but also the software needed to make them easily accessible to nonexperts and smart enough to collaborate with human coworkers and other cobots. This means that cobots need to be equipped with sensors, smart technologies, and algorithms linked with the IoT and/or specific systems like vision systems. Vision systems allowing robots to identify and safely navigate around objects were long an afterthought. In recent years, vision hardware (such as lidar) has become much cheaper, more effective, and subsequently more widespread. Lidar works much like radar, but instead of sending out radio waves it emits pulses of infrared light (lasers invisible to the human eye) and measures how long they take to come back after hitting nearby objects. It does this millions of times a second, and then compiles the results into a so-called point cloud, which works like a real-time 3-D map of the world, a map so detailed it can be used not just to spot objects but to identify them. Once the cobot can identify an object, it can predict how the object will behave, and thus how it should move. A good example of such a cobot is Omron's ping-pong-playing robot, noted at CES 2018 (https://www.cbinsights.com/company/omron). However, such a cobot's capability to distinguish what an object is and decide how to move it is still a challenge, or bottleneck. What is needed is a cobot that recognizes people's movements and gestures as well as the objects surrounding it, including other cobots.
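The time-of-flight principle behind lidar described above can be sketched in a few lines: half the round-trip time of a pulse, multiplied by the speed of light, gives the range, and the beam angles place the return in 3-D. The function name and angle convention here are illustrative assumptions, not any particular sensor's API.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_to_point(round_trip_s, azimuth_rad, elevation_rad):
    """Convert one lidar pulse's round-trip time and beam angles into
    a 3-D point (toy model: straight-line flight, single return)."""
    r = C * round_trip_s / 2.0  # half the round trip is the range
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after ~66.7 ns hit something about 10 m straight ahead.
print(tof_to_point(66.7e-9, 0.0, 0.0))
```

Repeating this millions of times a second over a sweep of angles yields the point cloud the column describes.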
A cobot should allow real-time mapping of the workspace, adapting the movements of the robots to which it is connected. Smart cobots, according to DARPA,12 need to be small objects of moderate weight capable of instantaneously recognizing and interpreting everything that is happening in the workspace, enabling the cobot to become more flexible and to react to sudden changes in the work process. DARPA announced a new program called short-range independent microrobotic platforms (SHRIMP). The goal is "to develop and demonstrate multifunctional micro-to-milli robotic platforms for use in natural and critical disaster scenarios." To enable robots that are both tiny and useful, SHRIMP will support fundamental research into the component parts that are the most difficult to engineer, including actuators, mobility systems, and power storage. To help overcome the challenges of creating such smart microbots, the first required stage is to build microbots with optimized size, weight, and power (SWaP), which are just some of the constraints that very small robots operate under. One of the very first microbots was a "push button" robot that can push nearly any mechanical button, making dumb buttons smart by letting you control them from your smartphone or tablet at any distance over a WiFi connection. The push-button microbot can turn on any device, such as a coffee maker, at a scheduled time, or it can be controlled wirelessly.

IT PROFESSIONAL

www.computer.org/computingedge 31

The next expected stage is to provide such microbots with smart algorithms to enable them to work and collaborate toward common goals. For example, at the University of California, San Diego, a research laboratory managed to develop a tiny 3-D-printed robotic fish, smaller than the width of a human hair, that can be used to deliver drugs to specific places in our bodies and to sense and remove toxins. These microfish are self-propelled, magnetically steered, and powered by hydrogen peroxide nanoparticles.13 AI and machine learning capabilities have been quickly making their way into industrial robotics technology and microbots. In the never-ending quest to improve productivity, manufacturers are looking to improve on the rigid, inflexible capabilities of legacy industrial robots and make these robots smart, small, and effective. A good example of adding such smart capabilities to existing cobots or microbots is the FANUC Intelligent Edge Link and Drive (FIELD) IIoT platform, which creates an intelligent system of industrial collaborating or team robots.14 Kuka and Huawei15 signed a deal to develop what could be another global network, built on the IIoT, to enable the connection of robots across many factories; the companies say they plan to integrate AI and deep learning into the system. Moreover, if the power of robot teamwork is combined with a ground-breaking technology like blockchain, it will make industrial robotic operations more secure, flexible, autonomous, and even profitable.16 Figure 2 illustrates the trend in industrial robots, starting from traditional industrial robots and ending with team robots. The enabling technologies for this trend include smart algorithms, the internet of everything, mechatronics, and blockchains.

Figure 2. Robotization trends based on the notion of extreme automation.


CONCLUSION

Digitalization and the implementation of extreme automation as inspired by Industry 4.0 will bring fundamental changes to industrial production and necessitate new products, solutions, and concepts. Robotization is the new reality that is going to change the manufacturing sector as well as our lives. Robots are becoming smarter and more cost-effective with time. Today a business may own a few classical robots, but the day is not far off when significant parts of an industry will be managed by the new generation of robots. However, the question everyone asks reflects a tension: we want robots to make our lives easier and safer, yet we do not want them to control our lives or cause a catastrophic loss of jobs. The challenge is to strike a balance between our fears and the benefits of this innovative technology, which can enhance our quality of life. This column has only touched the surface of this exciting topic, and we encourage you to contribute by writing to the editor of this column at her email ([email protected]).

REFERENCES

1. J. Fiaidhi, S. Mohammed, and S. Mohammed, "FabLabs as enabler for extreme automation," IEEE IT Prof., vol. 20, no. 6, pp. 83–90, Nov./Dec. 2018.
2. M. Simon, "The wired guide to robots," Wired, May 17, 2018. [Online]. Available: https://www.wired.com/story/wired-guide-to-robots/.
3. B. Tarnoff, "Robots won't just take our jobs—They'll make the rich even richer," The Guardian, Mar. 2, 2017. [Online]. Available: https://www.theguardian.com/technology/2017/mar/02/robot-tax-job-elimination-livable-wage.
4. K. J. Delaney, "The robot that takes your job should pay taxes, says Bill Gates," Quartz, Feb. 17, 2017. [Online]. Available: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/.
5. R. Calo, A. M. Froomkin, and I. Kerr, Robot Law. Cheltenham, U.K.: Edward Elgar, 2016, ISBN: 9781783476725.
6. "United States employment rate 1950-2018," Trading Economics. [Online]. Available: https://tradingeconomics.com/united-states/employment-rate.
7. "Robots and the workplace of the future," Int. Fed. Robotics, IFR 2018 Positioning Paper, Mar. 2018. [Online]. Available: https://ifr.org/downloads/papers/IFR_Robots_and_the_Workplace_of_the_Future_Positioning_Paper.pdf.
8. S. Crowe, "Industrial robot sales increase 29% worldwide," IFR Rep., Jun. 20, 2018. [Online]. Available: https://www.therobotreport.com/industrial-robot-sales-29-worldwide/.
9. Jinan IT-Pro 1.
10. K. McSweeney, "Rethink Robotics rethinks its software," ZDNet, Feb. 8, 2017. [Online]. Available: https://www.zdnet.com/article/rethink-robotics-rethinks-its-software/#ftag=CAD-00-10aag7e.
11. "Collaborative manufacturing: All together now," The Economist Special Report, Apr. 21, 2012. [Online]. Available: https://www.economist.com/special-report/2012/04/21/all-together-now.
12. "DARPA wants your insect-scale robots for a micro-olympics," IEEE Spectrum, Jul. 18, 2018. [Online]. Available: https://spectrum.ieee.org/automaton/robotics/robotics-hardware/darpa-wants-your-insect-scale-robots-for-a-micro-olympics.
13. R. Moss, "3D-printed microscopic fish could be forerunners to smart 'microbots,'" New Atlas, Aug. 26, 2015. [Online]. Available: https://newatlas.com/3d-printed-microscopic-fish-smart-microbots/39106/.
14. "Field systems," FANUC, 2018. [Online]. Available: https://www.fanuc.co.jp/en/product/catalog/pdf/field/FIELDsystem(E)-01.pdf.
15. S. Francis, "Kuka to build global deep learning AI network for industrial robots," Robotics Automat. News, Mar. 19, 2016. [Online]. Available: https://roboticsandautomationnews.com/2016/03/19/kuka-to-build-global-iot-network-and-deep-learning-ai-system-for-industrial-robots/3570/.
16. M. Aggarwal, "Blockchain in robotics—A sneak peek into the future," Medium.com, Mar. 18, 2018. [Online]. Available: https://medium.com/@manuj.aggarwal/blockchain-in-robotics-a-sneak-peek-into-the-future-4e115ccf4931.


ABOUT THE AUTHORS

Jinan Fiaidhi is a Full Professor and Professional Engineer with the Department of Computer Science at Lakehead University. She is the Founder of the Smart Health FabLab and an Adjunct Research Professor with the University of Western Ontario. She is also the Editor-in-Chief of the new IGI Global International Journal of Extreme Automation in Healthcare. Moreover, she is a Department Chair with IT-Pro and the Chair of Big Data for eHealth with IEEE ComSoc. Contact her at [email protected].

Sabah Mohammed is a Full Professor and a Professional Engineer with the Department of Computer Science and the Co-Founder of the Smart Health FabLab at Lakehead University. He is also an Adjunct Professor with the University of Western Ontario. He is the Chair of Smart and Connected Health with IEEE ComSoc. Contact him at [email protected].

Sami Mohammed is a Graduate Student with the Department of Computer Science, Victoria University. Contact him at [email protected].


This article originally appeared in IT Professional, vol. 20, no. 6, 2018.


34 December 2019 Published by the IEEE Computer Society 2469-7087/19 © 2019 IEEE

66 COMPUTER PUBLISHED BY THE IEEE COMPUTER SOCIETY 0018-9162/19 © 2019 IEEE

OUT OF BAND

Vehicle Telematics: The Good, Bad, and Ugly
Hal Berghel, University of Nevada, Las Vegas

EDITOR HAL BERGHEL, University of Nevada, Las Vegas; [email protected]

Digital Object Identifier 10.1109/MC.2019.2891334
Date of publication: 11 March 2019

Vehicle telematics may be thought of as an Internet of Things (IoT) on wheels. And just as with the IoT, the technology is a mixed blessing, with serious privacy and security implications.

Just as the shifts changed at Jaguars, a noted Las Vegas "gentleman's club," one afternoon in May 2003, FBI agents burst into the manager's office, guns in hand. So, according to the L.A. Times,1 began Operation G-Sting, the federal sting that convicted four Clark County (Nevada) Commission members and two San Diego, California, city councilmen on bribery charges.

Flash forward to March 2010. A disgruntled former employee of the Texas Auto Center (TAC) in Austin remotely triggers the installed aftermarket GPS devices to set off car alarms, activate headlights, and shut off the engine starting systems for roughly 100 of TAC's customers' personal vehicles, leaving them stranded with disabled or unusable automobiles.2

What does the conviction of four county commissioners from Las Vegas have to do with the TAC hack? The answer is vehicle telematics, one of the emerging digital threat vectors in use by hackers, criminals, terrorists, governments, and invasive businesses.

FAITH-BASED SECURITY

The TAC (www.texasautocenter.com/) is apparently one of the car dealers of last resort for those who are credit challenged. These car centers have been a staple in poor communities for many years, specializing in high-interest and no income, no job, no assets (NINJA) loans. The trick to making such loans profitable is the ability to recover the car if the payments are in arrears. In years past, this was the purview of the repo man. Now, we throw new wave repo technology at the problem in the form of a digital, remotely operated "real-world asset protection" system like Payteck (http://www.payteck.cc/).

TAC used Payteck's GPS and starter-interrupt systems for asset management (i.e., theft reduction)—apparently a winning combo for finance companies that specialize in NINJA car loans. Unlike the GPS trackers used formerly, Payteck's system enables finance and used car companies to both locate and disable the car if payments become delinquent. This sounds fine in practice, but what if a disgruntled former employee of TAC used a coworker's login credentials to go rogue on the unsuspecting used car buyers—which is exactly what happened when that former employee disabled roughly 100 recently sold vehicles as an act of revenge against TAC. This reads like a bad NSA surveillance exposé: one has to ask, "Where were the checks and balances?" Apparently, the Payteck and TAC folks assumed that theft or unauthorized elevation of authentication privilege could never happen to them and that it would be impossible for an employee or hacker to behave improperly. This is an example of what I have called faith-based security (FBS),3 a cousin of security through obscurity (STO). If such strategies are effective, it's by accident rather than design.

Even if you are financially well heeled, your cars aren't immune to FBS and STO measures. Don't get lulled into complacency because you avoid NINJA financing. One may accomplish the same objective with any car through the internal computer system—even by hacking the audio system.4 I will return to this theme.

Operation G-Sting was a hack of a different color; this time, it was the Feds who took advantage of the vehicle telematics system. The convicted bagman, the head of the Clark County Commission and a former cop, decided that to best avoid government eavesdropping, he'd conduct all sensitive discussions regarding the bribing of elected officials in his car, which just happened to have OnStar enabled. Much to his chagrin, the General Motors OnStar folks were all too willing to activate the preinstalled microphone for the FBI, thereby allowing the latter to listen in on conversations in the vehicle (all without benefit of warrant). These recorded conversations provided the key evidence for the convictions. In one of life's little ironies, after the county commissioners had been released from prison, the Ninth Circuit Court (which includes both California and Nevada) ruled that such OnStar spying was illegal because it required tampering with or disabling the OnStar vehicle recovery mode, which violates the customer's terms of service.5 That is, the Ninth Circuit Court ruled that OnStar wiretapping and surveillance represented an egregious violation of a corporate term of service under current law (http://www.law.cornell.edu/uscode/text/18/2518)—but not that it in any way violated the customer/citizen's expectation of privacy!

In a bizarre twist, the 2011 OnStar revised terms of service extended OnStar's promised focus on continuous vehicle recovery mode and specifically allowed OnStar to collect driving and location data from car owners even if they had cancelled their OnStar subscriptions. This produced a public relations nightmare for OnStar, which, temporarily at least, stopped this practice6 at the behest of former Senator Al Franken (D, Minnesota) and Senator Chris Coons (D, Delaware). OnStar has since resumed the practice of collecting any information, for any purpose, at any time (https://www.onstar.com/content/tcps/us/20180227/privacy_statement.html).

These prosecutions were interesting from several perspectives. One of the two San Diego city councilmen was exonerated in 2010 (http://www.sddt.com/News/article.cfm?SourceCode=20101014tza). The convicted former politicians who made up the Las Vegas contingent of bribe recipients have apparently set aside their political aspirations for the moment and directed their attention to less visible vocations in public relations, marketing, and the law.7

BIG BROTHER TELEMATICS

Vehicular telematics is but one of the later instantiations of Orwellian digital dystopia, but with its own distinctive twists, including increased exposure to malicious hacking and the potential for abuse of individual privacy.

As with other innovative technologies, modern vehicular telematics is a mixed blessing. There is no doubt that some telematics associated with convenience, safety, mechanical reliability, and entertainment are welcomed by many consumers, to varying degrees. With my latest vehicle, I most appreciate features like forward collision alert, 360° surround vision, distance indication, front pedestrian braking, cross traffic alerts, active cruise control, lane-keeping assistance, parking sensors, blind-spot monitoring, navigation systems with traffic alert, adaptive lighting, and a host of other warnings and driver assistance features. I'm confident the roads would be safer if such features were available on all modern vehicles, and I'm pleased to see that some car manufacturers like Subaru and Toyota now include most of these in their base models.

No matter how useful, these telematics features are the least interesting from the point of view of security and privacy. The more intriguing features are those that entail security and privacy vulnerabilities. I'll begin with a convenience feature that largely goes unnoticed these days: the vehicle remote, also called the wireless fob, which is used in lieu of a key to control access to a vehicle or remotely initiate some action on the vehicle (e.g., remote start). Originally designed about 40 years ago for remote keyless entry, fobs are functionally similar to more feature-rich mobile devices.

Used as a substitute for the keypad on the driver's door, the fob is a short-range radio-frequency (RF) transceiver. In my case, the fob passively exchanges proximity information with the car so that, when it is within a few meters of the car, a logo is projected on the ground where a sensor detects motion, opens the rear hatch, turns on various lights, and activates the opening buttons on the door handles. Additionally, push buttons on the fob enable it to communicate instructions to the car to remotely start the engine, lock or unlock the doors, open the rear deck hatch, and set off the car alarm. Many of these features are further configurable. Thus, the modern fob has taken on the role of the modern remote controller associated with multimedia devices.

What's the problem then? To begin with, RF appliances are never optimal for security-sensitive applications; RF neither respects individual privacy nor obeys property lines. So any communications between car and fob should be viewed as broadcasts throughout the immediate neighborhood. This makes them susceptible to a gamut of hacks, ranging from denial-of-service (e.g., to deny vehicle access) to replay attacks, to name but two.
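The replay threat against the earliest fixed-code fobs can be reduced to a toy model. Everything here is hypothetical (the code value, the function names); the point is only that a receiver which accepts a static code cannot distinguish a legitimate press from a recorded copy.

```python
# Toy model of a fixed-code fob: anyone who records the broadcast can
# replay it later, because the car cannot tell copies apart.
FOB_CODE = "8f3a9c"  # hypothetical fixed unlock code, broadcast in the clear

def car_unlock(received_code):
    """The car accepts any transmission matching its stored code."""
    return received_code == FOB_CODE

# A legitimate press, overheard by a nearby receiver:
sniffed = FOB_CODE
print(car_unlock(sniffed))   # True: the replayed capture works every time
print(car_unlock("000000"))  # False: only the capture is needed, not the fob
```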

And this is nothing new. Computer scientist Avi Rubin has been lecturing about such vehicle insecurities for many years.8 The Center for Automotive Embedded Systems Security at the University of Washington, Seattle, and the University of California, San Diego (www.autosec.org), has been conducting research on vehicle telematics vulnerabilities for even longer. One of the center's classic papers from 20109 references articles on vehicle vulnerabilities as far back as the early 2000s. In a subsequent paper,10 these same researchers evaluate a cornucopia of attack vectors that affect modern automobiles. One of the center's projects, CarShark, provides working demonstrations of these vulnerabilities. Researchers there have since extended their work to include using vehicle telematics for driver profiling and fingerprinting. The irony that this research has been covered so extensively over the past 10 years that it has been featured in Popular Science11 should not be overlooked.

Confirmation of these problems isn't hard to obtain. Samy Kamkar (https://samy.pl) recently developed a suite of such attacks12 and reported the same in a 2015 DEFCON talk.13 His tool, OwnStar, runs a replay attack against OnStar fobs by inserting itself between a GM vehicle's transceiver and either OnStar apps on mobile devices or the fobs themselves. His video explains all of this in detail.

While OwnStar targets older RF-based keyless entry systems, modern vehicles use rolling code systems that prevent OwnStar replay attacks. Rolling codes use algorithms to generate code sequences based on pseudorandom numbers. As long as the vehicle transceiver and the fob/mobile app transceiver use the same seeds and rolling code algorithm, the sequences can be validated even if the codes are nonconsecutive. This means that, while continuously changing, rolling code generators suffer from the serious defect of predictability: once the algorithm is known, an endless sequence may be generated, each element of which can be determined to be legitimate. Based on this observation, Kamkar developed RollJam,14 which offers a replay attack for modern RF-based keyless entry systems that use rolling codes. This reaffirms our observation that RF is really not effective when security is important (i.e., if you want to prevent car, boat, or airplane theft; garages from being opened by home invaders; proximity card lock compromises; and so on).
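A rolling-code scheme with an acceptance window can be sketched as follows. This is a deliberately simplified illustration, not any vendor's actual algorithm: the seed, the SHA-256-based generator, and the 16-code window are all invented for the example. It shows both properties discussed above: a replayed (already-used) code is rejected, yet anyone who learns the seed and generator can compute every future code.

```python
import hashlib

def rolling_code(seed, counter):
    """Derive the counter-th code from a shared seed (toy generator)."""
    return hashlib.sha256(f"{seed}:{counter}".encode()).hexdigest()[:8]

class CarReceiver:
    def __init__(self, seed, window=16):
        self.seed, self.counter, self.window = seed, 0, window

    def try_unlock(self, code):
        # Accept any of the next `window` codes (the fob may have been
        # pressed out of range), then resync the counter past the match.
        for i in range(self.counter, self.counter + self.window):
            if rolling_code(self.seed, i) == code:
                self.counter = i + 1  # each code is valid only once
                return True
        return False

car = CarReceiver(seed="shared-secret")
press = rolling_code("shared-secret", 2)  # fob counter ran ahead of the car
print(car.try_unlock(press))  # True: within the acceptance window
print(car.try_unlock(press))  # False: a naive replay of a used code fails
```

The predictability defect is visible in `rolling_code` itself: it is a pure function of seed and counter, so knowledge of the algorithm yields the endless legitimate sequence. RollJam sidesteps even the one-time-use property by jamming and capturing a code the car never saw, then replaying that still-unused code later.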

A question naturally arises: Why do car companies use digital technologies that are so easily compromised? In this case, challenge-response authentication based on a reasonable key derivation function would go a long way toward avoiding replay attacks. Such technology has been well understood and successfully deployed for decades, so why isn't it used for keyless access systems? The answer is that manufacturers' cost-benefit analyses suggest that their legal exposure to the resulting safety deficiencies and security vulnerabilities from nonuse will not cost them much. In the absence of regulations with teeth or large-scale public blowback, there is little incentive to protect the customer. Since the beginning of the industrial revolution, the absence of risk has always been a strong disincentive to serious process improvement where product safety is at issue. (Note, by the way, that these same vulnerabilities may apply to other remote access systems, including remote garage door openers, proximity card access systems, and so on.)
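For contrast, here is a minimal challenge-response sketch of the kind alluded to above. All names and parameters are hypothetical, not any vendor's protocol: the vehicle issues a fresh random challenge per unlock attempt, the fob answers with an HMAC under a key stretched from the pairing secret by a standard key derivation function (PBKDF2 here), and a recorded response is useless against the next challenge.

```python
import hmac, hashlib, os

def derive_key(master_secret: bytes, salt: bytes) -> bytes:
    # A standard KDF (PBKDF2) stretches the shared pairing secret
    # into a per-vehicle authentication key.
    return hashlib.pbkdf2_hmac("sha256", master_secret, salt, 100_000)

def fob_response(key: bytes, challenge: bytes) -> bytes:
    # The fob proves knowledge of the key without transmitting it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

key = derive_key(b"factory-paired secret", b"vehicle-serial-0001")

# Vehicle side: fresh random challenge every time.
challenge = os.urandom(16)
response = fob_response(key, challenge)

# The vehicle recomputes and compares in constant time.
assert hmac.compare_digest(response, fob_response(key, challenge))

# A captured response does not validate against the next (fresh) challenge,
# so simple record-and-replay fails.
assert not hmac.compare_digest(response, fob_response(key, os.urandom(16)))
```

Nothing here is exotic; every primitive used is in the standard library of any modern language, which is the column's point about nonuse.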

But keyless access is not the greatest security and privacy vulnerability; far greater is the new cell phone synchronization environment. A decade or so ago, Bluetooth synchronization between a cell phone and the vehicle's communication systems was focused on hands-free use of the phone. The vehicle's voice recognition system sent the appropriate codes to the phone (for dialing, searching contact lists, and so forth) but otherwise played a passive, facilitative role in the communication. On modern GM systems I'm familiar with, and presumably other systems as well, the Bluetooth synchronization actually uploads data from the cell phone and stores the data in the vehicle's computer database, without the user's permission and possibly without the user's knowledge. This compounds the privacy problem of a lost cell phone.

Even if cell phone data are encrypted and the phone is locked, it is not that difficult to retrieve PINs, passwords, and encrypted data in plain text. Companies such as Cellebrite (www.cellebrite.com) have, for decades, offered mobile forensics devices that serve this purpose. But the lowest hanging fruit in this attack vector is the automobile. The only data access protection that my car offers is a four-digit PIN valet lock. This bad idea is both polished and refined: it offers limited data protection against a determined adversary while at the same time making vehicular telematics inconvenient for the owner. Such bad ideas don't just happen naturally; they require serious effort from incompetent designers. This isn't innovation: it's enervation.
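The weakness of a four-digit PIN is a matter of simple arithmetic: 10^4 candidates. A sketch, in which the digest comparison is a hypothetical stand-in for the valet-lock check (a real in-dash lock may or may not rate-limit attempts, which only changes the timescale):

```python
import hashlib

def unlock(pin: str, stored_digest: str) -> bool:
    # Hypothetical stand-in for the valet-lock check: compare a
    # digest of the entered PIN against the stored one.
    return hashlib.sha256(pin.encode()).hexdigest() == stored_digest

target = hashlib.sha256(b"7291").hexdigest()  # the "secret" four-digit PIN

# Exhausting the entire 10,000-PIN keyspace takes milliseconds offline.
recovered = next(p for p in (f"{n:04d}" for n in range(10_000))
                 if unlock(p, target))
print(recovered)  # -> 7291
```

Four decimal digits is roughly 13 bits of entropy; it deters a curious valet, not a determined adversary with physical access.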

A similar situation applies to the access of data through the onboard diagnostics (OBD) ports under the dash. While it seems reasonable to make diagnostic data available to manage engine performance, optimize safety systems, and so on, when OBD ports became the preferred option for smog tests a decade or so ago, that opened an entirely new vulnerability for the car owner. While automobile manufacturers could have restricted OBD information sharing to just those data of use to smog inspectors, they instead opened the OBD ports to a much wider variety of data, including historical accelerometer data, speed data, GPS data, and trip timings and usage data.

Originally, these "black box" OBD devices were used by insurance companies (e.g., Progressive's Snapshot program) to award user premium rate discounts; i.e., drivers were even given discounts to have them installed in their vehicles. As any good personal injury attorney will attest, one of the first questions attorneys ask of accident investigators is whether the "other" vehicle had one installed so that it may be subpoenaed as evidence in court. Sans black box, an attempt will be made to recover vehicle data directly from the automobile. This information can be used by an insurance company to confirm good driving behavior, but it can also be used by personal injury attorneys and prosecutors to justify liability claims for allegedly bad driving behavior. Somehow this equivalence just never seemed to register with the public. Incidentally, OBD black box devices are now popular general aftermarket automobile appliances for GPS tracking, monitoring driving behavior, and so on (https://www.blackboxgps.com/products/blackbox-gps-3s-locator-obd-ii).

MARKETING VERSUS PRIVACY PROTECTION

I don't mean to impart any special blame to General Motors or OnStar for breaches in personal security and privacy. All car manufacturers offer similar services. Ford SYNC, based on Microsoft's Auto OS, offers the same range of services as GM/OnStar. The same may be said for LexusLink, BMW Assist, Mercedes Mbrace, and so forth. As near as I can tell, all manufacturers approached telematics exclusively from a marketing point of view with little or no consideration for consumer privacy protection. This is not to deny the potential advantage of collision detection and reporting capabilities. Nor is it to criticize the use of motor vehicle event data recorders (MVEDRs) per se. However, for detecting and reporting accidents, MVEDRs don't require more than a few minutes of precrash recorded data collection to serve the passenger's public safety interests. So, even if we assume that vehicle speed, engine revolutions per minute, service brake status, lateral acceleration, roll angles, antilock braking system status, seatbelt status, steering wheel position, and airbag-related data would be useful to first responders, a simple first-in, first-out data collection strategy that would retain only the most recent data would serve perfectly well. In other words, the claim that event data recorder information has to be retained for longer periods or shared with the manufacturer via telephone or satellite links doesn't pass my smell test. That was essentially the issue that Sens. Franken and Coons raised with OnStar.
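The first-in, first-out strategy described above is a one-liner in most languages: a fixed-length ring buffer. A sketch with an assumed sample rate and retention window (both numbers are illustrative, not drawn from any MVEDR standard):

```python
from collections import deque

SAMPLE_HZ = 10          # telemetry samples per second (assumed)
RETAIN_SECONDS = 120    # keep only the last two minutes before a crash

buffer = deque(maxlen=SAMPLE_HZ * RETAIN_SECONDS)

def record(sample: dict) -> None:
    # Appending to a full bounded deque silently drops the oldest
    # sample, so the recorder can never accumulate a long-term
    # driving history, by construction.
    buffer.append(sample)

for t in range(100_000):  # hours of driving...
    record({"t": t, "speed_kmh": 50, "brake": False})

print(len(buffer))    # -> 1200: only the most recent window survives
print(buffer[0]["t"]) # -> 98800: everything older was discarded
```

The design choice is the point: a bounded buffer makes long-term retention physically impossible, rather than merely promised away in a privacy policy.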

At the heart of privacy vulnerability is the manufacturer's insistence on a simple, integrated vehicle data retention policy that will serve all demands, e.g., crash reporting, smog inspection, the manufacturer's revenue potential from the sale of telematics options, manufacturer and third-party marketing and advertising revenue, and so forth. It is this oversimplistic integration that leads to the problem. Of course, the rationale is obvious: automobile manufacturers have discovered that using and selling access to these data can be enormously profitable.15 Car companies and dealers are finding that the sale of customer data is another lucrative source of profit along with the interest and fees associated with car loans. However, unlike with car loans, the customer has no right of refusal regarding the sale of his or her personal data.

It is quite telling that automobile manufacturers have not packaged these data collection technologies in the form of optional modules that the customer may or may not purchase and that may be removed if the service is no longer desired. Manufacturers do not want to give customers that choice because 1) many would choose not to purchase these options and 2) the manufacturer would lose the opportunity to repurpose the data for profit. In the case of my new car, the OnStar equipment is built into the car. Any attempt to disable or remove it not only disables other nonprivacy-invasive systems like navigation, the entertainment system, and Bluetooth connectivity, but it also voids the manufacturer's warranty. And there's no way to avoid Internet connectivity on modern premium cars.16 My car will attempt to authenticate with all insecure proximate Wi-Fi networks whenever the car is started. If there's a way to disable this feature, I haven't found it. One need only read up on Operation G-Sting to confirm the dangers of all of this insecurity. The FBI aren't the only ones listening!

OUT OF BAND

to remotely start the engine, lock or unlock the doors, open the rear deck hatch, and set off the car alarm. Many of these features are further configurable. Thus, the modern fob has taken on the role of the modern remote controller associated with multimedia devices.

What's the problem then? To begin with, RF appliances are never optimal for security-sensitive applications; RF neither respects individual privacy nor obeys property lines. So any communications between car and fob should be viewed as broadcasts throughout the immediate neighborhood. This makes them susceptible to a gamut of hacks, ranging from denial of service (e.g., to deny vehicle access) to replay attacks, to name but two.

With the proliferation of unwanted and unneeded passive interfaces like Bluetooth; Internet and Wi-Fi connectivity; the ability to invasively record passenger compartment audio and video; the integration of myriad sensors, cameras, and microphones; and the megaintegration of all of these components into an insecure multimedia and networked infrastructure, the potential for privacy abuse in modern automobiles is enormous. Add to that the profit motive for the manufacturers to use or sell these data, and we have a new frontier for privacy abuse, fraud, and theft. The question isn't whether these new automobile systems will be exploited to our cost, but when and to what degree.

This is not to deny that there are other manufacturers capturing these data. Mobile device manufacturers do the same thing. Literally hundreds of smartphone apps are known to share such data as real-time GPS location with third-party vendors.17 It is not easy (and may not be possible) to shut such features off because the manufacturer/provider ultimately has control over enabling/disabling services. However, at least in the case of mobile devices, you have the ability to shut the device off. That's not an option with modern automobiles.

There are also more mundane privacy exposures with such "large scale and covert collection of personal data" through Microsoft Office's ProPlus subscription,18 which shares motivations with vehicle telematics and mobile apps, but under the office productivity suite rubric. I'll expand on this in a future column.

REFERENCES

1. J. Johnson, "The talk of Las Vegas is Operation G-String," Los Angeles Times, Dec. 21, 2003. [Online]. Available: http://articles.latimes.com/2003/dec/21/nation/na-strippers21

2. K. Poulsen, "Hacker disables more than 100 cars remotely," Wired, Mar. 17, 2010. [Online]. Available: https://www.wired.com/2010/03/hacker-bricks-cars/

3. H. Berghel, “Faith-based security,” Commun. ACM, vol. 51, no. 4, pp. 13–17, Apr. 2008.

4. R. McMillan, "With hacking, music can take control of your car," IT World, Mar. 14, 2011. [Online]. Available: http://www.itworld.com/security/139794/with-hacking-music-can-take-control-your-car

5. D. McCullagh, "Court to FBI: No spying on in-car computers," c|net, Nov. 19, 2003. [Online]. Available: http://news.cnet.com/Court-to-FBI-No-spying-on-in-car-computers/2100-1029_3-5109435.html

6. K. Hill, "OnStar kills its terrible plan to monitor non-customers' driving," Forbes, Sept. 27, 2011. [Online]. Available: http://www.forbes.com/sites/kashmirhill/2011/09/27/onstar-kills-its-terrible-plan-to-track-non-customers-driving-data/

7. J. A. Morrison, "Politicians come and go after serving corruption sentences," The Las Vegas Rev.-J., Mar. 21, 2010. [Online]. Available: https://www.reviewjournal.com/news/news-columns/jane-ann-morrison/politicians-come-and-go-after-serving-corruption-sentences/

8. TED, “All your devices can be hacked,” 2011. [Online]. Available: https://www.ted.com/talks/avi_rubin_all_your_devices_can_be_hacked

9. K. Koscher et al., “Experimental se-curity analysis of a modern automo-bile,” in Proc. IEEE Symp. Security and Privacy, 2010, pp. 1–16.

10. S. Checkoway et al., “Comprehensive experimental analyses of automotive attack surfaces,” in Proc. USENIX Security Symp., 2011, pp. 77–92.

11. R. Boyle, "Proof-of-concept CarShark software hacks car computers, shutting down brakes, engines, and more," Popular Science, May 14, 2010. [Online]. Available: https://www.popsci.com/cars/article/2010-05/researchers-hack-car-computers-shutting-down-brakes-engine-and-more

12. A. Greenberg, "The greatest hits of Samy Kamkar, YouTube's favorite hacker," Wired, Dec. 17, 2015. [Online]. Available: https://www.wired.com/2015/12/the-greatest-hits-of-samy-kamkar-youtubes-favorite-hacker/

13. YouTube, "Drive it like you hacked it: New attacks and tools to wirelessly steal cars," 2015. [Online]. Available: https://www.youtube.com/watch?v=UNgvShN4USU

14. A. Greenberg, "This hacker's tiny device unlocks cars and opens garages," Wired, Aug. 16, 2015. [Online]. Available: https://www.wired.com/2015/08/hackers-tiny-device-unlocks-cars-opens-garages/

15. P. W. Howard, "Data could be what Ford sells next as it looks for new revenue," Detroit Free Press, Nov. 13, 2018. [Online]. Available: https://www.freep.com/story/money/cars/2018/11/13/ford-motor-credit-data-new-revenue/1967077002/

16. B. Schneier, Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World. New York: Norton, 2018.

17. J. Valentino-DeVries, N. Singer, M. H. Keller, and A. Krolik, "Your apps know where you were last night, and they're not keeping it secret," New York Times, Dec. 10, 2018. [Online]. Available: https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html

18. C. Cimpanu, "Dutch government report says Microsoft Office telemetry collection breaks GDPR," ZDNet, Nov. 14, 2018. [Online]. Available: https://www.zdnet.com/article/dutch-government-report-says-microsoft-office-telemetry-collection-breaks-gdpr/

HAL BERGHEL is an IEEE and ACM Fellow and a professor of computer science at the University of Nevada, Las Vegas. Contact him at [email protected].

This article originally appeared in Computer, vol. 52, no. 1, 2019.


2469-7087/19 © 2019 IEEE. Published by the IEEE Computer Society. ComputingEdge, December 2019.

Over the last decades, we have witnessed tremendous progress in many fields of technology, including optics, microelectronics, computer vision, and sensor networks. These technological advances have leveraged the development of smart cameras, which perform sophisticated analytics of the captured imagery onboard the cameras.1,2 They transform the captured images into useful data, such as object classifications, person identifiers, and recognized actions, and may, therefore, not even deliver raw images anymore.

New applications beyond the traditional consumer electronics and commercial markets of mobile phones and surveillance have been developed with smart cameras. They come in various forms and configurations and have pervaded many aspects of our everyday life. We rely on cameras for monitoring the interior and exterior of modern cars, we enjoy enhanced photos and videos captured by selfie drones, and we even swallow cameras for medical examinations. Thus, smart cameras have become key devices for the Internet of Things (IoT) and cyber-physical systems and often operate in networks to assess situations in their environment. Progress in technology and applications over the last decades has changed our conception of cameras as boxes that capture images into a more general notion of cameras as smart and networked devices that generate data and make decisions.

A new camera paradigm has emerged in which smart cameras have become key system components with an advanced level of autonomy and complexity where the camera's output is not intended for the eye of the beholder anymore but serves as essential input for prompt control and complex decision-making algorithms. As in many applications in which humans are becoming

Can We Trust Smart Cameras?
Bernhard Rinner, Universität Klagenfurt

Technological advances have changed our conception of cameras as boxes that capture images into the notion of networked, smart devices that analyze their environment and trigger decisions. Since smart cameras have become key enabling components for cyberphysical systems, trust in these devices is gaining importance.

Digital Object Identifier 10.1109/MC.2019.2905171
Date of publication: 14 May 2019

CYBER-PHYSICAL SYSTEMS
Editor: Dimitrios Serpanos, ISI/ATHENA and University of Patras; [email protected]


increasingly removed from decision-making processes, trust in the overall system and its individual components is gaining importance. A key question, therefore, is whether current smart cameras already fulfill our trust expectations. I will discuss some challenging issues for smart cameras in the following sections.

CHALLENGES

Complexity and Reliability

The functional complexity and reliability of smart cameras are lagging behind for many real-world deployments. Image processing was one of the driving applications for the recent progress in artificial intelligence (AI) and machine learning, in particular deep neural networks. One prominent example is face recognition, where machine-learning algorithms have outperformed humans on some standard benchmarks.3 Although there are still important open research questions for face recognition (such as scalability, pose, aging effects, and training data), it is now embedded as a popular service on several mobile platforms. Cameras with image processing based on deep learning typically offload complex and data-intensive computation onto high-performance computers or cloud infrastructures.

Such a hybrid approach, with a clear separation between onboard inference and offboard training, is hardly feasible for systems that require a high degree of autonomy and where the advanced analysis of captured imagery is highly integrated with control and decision making. Robotics and self-driving cars are prototypical domains for autonomous systems that operate in complex, dynamic, and interactive environments. Here, embedded camera systems are used together with lidar and radar systems to detect, classify, and track relevant road users and predict their future trajectories. Yet real traffic scenarios include complex interactions among various road users, and handling complex clutter and modeling the interactions with other road users are necessary.4

Smart cameras face a complexity challenge in autonomous systems where visual information serves as the prime input modality, and understanding the state of the environment is a fundamental precondition for low-level control and planning high-level actions. The performance of environment perception is currently sufficient in dedicated settings and controlled environments. However, smart cameras still need to reach human-level performance in real-world (outdoor) settings, and current segmentation, detection, and tracking accuracies do not yet suffice in difficult conditions.

Reliability issues are closely related to functional complexity. Reliable environment perception is essential not only for high-safety applications, such as robotics and autonomous driving, but also for smart environments. These applications have very diverse settings and scales (from rooms to cities) and typically rely on events detected by cameras. Smart environments are built on a wide variety of algorithms, including motion, occupancy, and person detection, as well as more complex activity recognition. Despite the huge amount of captured data and the sensor fusion performed, robustness and reliability are still lagging behind expectations, limiting the deployment of smart environments.

Security

Smart cameras are profitable targets for security attacks, due to their omnipresence, multiple assets, and restricted defense mechanisms. The Mirai attack represented a prominent example of exploiting smart camera platforms for a distributed denial-of-service (DDoS) attack.5 The malware enslaved vast numbers of insufficiently secured smart cameras into a botnet, which was then used to launch the DDoS attack. Although the Mirai malware followed the very simple strategy of scanning for IoT devices still using their default passwords, the consequences were dramatic, with hundreds of sites and services being rendered inaccessible to millions of people worldwide for several hours.

In the Mirai attack, smart cameras were used to launch a DDoS attack because of the lack of basic security practices. Cameras are also the target of attacks. Various security flaws have been recently reported with IoT-based surveillance cameras.6 These vulnerabilities could essentially let an attacker view footage from every smart camera connected online, completely disable the camera, and use it to get inside your computer's network.

Smartphones are particular camera devices that serve as source, destination, and manipulator of a huge amount of data and thus provide many assets that need protection. Examples of such assets include personal user data (keys, fingerprints, etc.), content providers' data (protection from illegal copying on the user's platform), and smartphone system protection (trusted boot and real-time kernel protection). Recent smartphone platforms rely on separate, isolated coprocessors, such as ARM's TrustZone and Apple's Secure Enclave, which are built into the platform's main system on chip, or Google's Titan M, which is realized as a dedicated security chip.

Hardware-enhanced protection significantly raises the security bar on these platforms. It is much harder to break than pure software defenses but still requires careful implementation of security functionality and remains vulnerable, particularly to side-channel attacks. The challenge of security remains a complex, multidimensional problem that has to be addressed in a holistic way. Security needs to become a primary design consideration, leverage proactive defense techniques, and involve all stakeholders to protect data and devices throughout their entire lifetime.

Privacy

Smart cameras pose threats to our privacy. Privacy concerns have been raised by the rapidly increasing number of visual-data-capturing devices. Not only surveillance cameras but also other video-capable multimedia devices threaten privacy.7 Due to the confluence of the availability of huge amounts of visual data, AI-based data analytics, and big data networks, identities can be detected, and behaviors of individuals can be analyzed at a large scale. Camera networks at airports, train stations, and shopping malls monitor millions of persons every day and allow the analysis of their movement patterns and activities. Although real-time analytics is limited for most of the currently deployed camera networks, huge amounts of video data are (at least temporarily) stored and can be analyzed offline and merged with other data sources.

Encryption is a widely used privacy protection approach that provides access to the complete raw data to privileged users, that is, those possessing the decryption keys. For all other users, the visual data remain protected and have no utility. Anonymization is a more gradual protection approach in which sensitive parts of visual data, such as faces or text, are deteriorated such that identification becomes more difficult for both humans and algorithms. Typical anonymization techniques include pixelation or blurring, and more advanced methods can adapt the deterioration strength or replace sensitive parts with a deidentified representation.7 The applied anonymization technique influences the achieved privacy protection level and the remaining utility of the deteriorated visual data. Anonymization enables customizing the protection level and utility according to various factors, such as application requirements and user preferences.
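Pixelation, the simplest of the anonymization techniques mentioned above, can be sketched as block averaging, where the block size is exactly the knob that trades protection level against remaining utility. The following is a pure-Python toy on a synthetic grayscale frame, not a production anonymization stage:

```python
def pixelate(image, block=4):
    """Replace each block x block region with its mean intensity.

    `image` is a list of rows of grayscale values (0-255), a toy
    stand-in for one frame in a camera's anonymization pipeline.
    Larger `block` -> stronger deterioration, lower utility.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            mean = sum(image[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

# An 8x8 "face region" with fine detail collapses to coarse blocks:
frame = [[(x * 31 + y * 17) % 256 for x in range(8)] for y in range(8)]
coarse = pixelate(frame, block=4)
print(len({coarse[y][x] for y in range(8) for x in range(8)}))  # -> 4 distinct values remain
```

Note that pixelation alone is weak against determined adversaries (parrot and reconstruction attacks on pixelated faces are well documented), which is why the adaptive and deidentified-replacement methods cited above exist.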

Another risk to privacy comes from data linkage. Even if visual data captured from personal devices (e.g., smartphones or body cameras) and private spaces (e.g., home security, assisted living, or monitoring applications) are encrypted or anonymized, their linkage can reveal sensitive personal information about individuals. More advanced techniques, such as privacy-aware reporting or secure multiparty computation, help to mitigate data-linkage risks. However, they are rarely deployed in current camera applications.

The General Data Protection Regulation (GDPR) affects camera manufacturers and system operators and helps raise the protection level and privacy awareness. GDPR, which has been in effect throughout the European Union (EU) since May 2018, regulates data protection and privacy for all individuals within the EU and the European Economic Area (EEA). It further addresses the export of personal data outside the EU and EEA areas.8 GDPR requires various organizational and technological procedures to implement its key principles, such as "protection by design and by default," "right of access," and "right to erasure."

Resource Limitations
Finally, smart cameras face a resource efficiency challenge. The steadily increasing functionality of smart cameras imposes strong resource requirements for the platform along multiple dimensions, including processing, storage, data transfer, and energy.9,10 Consequently, the overall design space for the camera platform is complex, and the tradeoff among resources, cost, size, and various other factors needs to be explored. Integration as a system-on-chip is a widely used approach for developing resource-efficient platforms. Recently, nanotechnology has been explored as a fundamentally new platform for smart cameras.11

As smart cameras will continue to pervade our daily life, their functionality, reliability, security, and efficiency need to be continuously advanced. This requires research and development efforts covering different fields, including hardware, software, and system elements. These technological advances can serve only as a basis of trust. They must be complemented by various nontechnical measures to establish sustainable trust in smart cameras.

REFERENCES
1. B. Rinner and W. Wolf, “Introduction to distributed smart cameras,” Proc. IEEE, vol. 96, no. 10, pp. 1565–1575, 2008.
2. M. Reisslein, B. Rinner, and A. Roy-Chowdhury, “Smart camera networks,” Computer, vol. 47, no. 5, pp. 23–25, May 2014.
3. Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in Proc. 2014 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Columbus, OH, 2014, pp. 1701–1708.
4. W. Schwarting, J. Alonso-Mora, and D. Rus, “Planning and


68 COMPUTER WWW.COMPUTER.ORG/COMPUTER

CYBER-PHYSICAL SYSTEMS

more eliminated from decision-making processes, trust in the overall system and its individual components is gaining importance. A key question, therefore, is whether current smart cameras already fulfill our trust expectations. I will discuss some challenging issues for smart cameras in the following sections.

CHALLENGES

Complexity and Reliability
The functional complexity and reliability of smart cameras are lagging behind for many real-world deployments. Image processing was one of the driving applications for the recent progress in artificial intelligence (AI) and machine learning, in particular deep neural networks. One prominent example is face recognition, where machine-learning algorithms have outperformed humans on some standard benchmarks.3 Although there are still important open research questions for face recognition (such as scalability, pose, aging effects, and training data), it is now embedded as a popular service on several mobile platforms. Cameras with image processing based on deep learning typically offload complex and data-intensive computation onto high-performance computers or cloud infrastructures.

Such a hybrid approach, with a clear separation between onboard inference and offboard training, is hardly feasible for systems that require a high degree of autonomy and where the advanced analysis of captured imagery is highly integrated with control and decision making. Robotics and self-driving cars are prototypical domains for autonomous systems that operate in complex, dynamic, and interactive environments. Here, embedded camera systems are used together with lidar and radar systems to detect, classify, and track relevant road users and predict their future trajectories. Yet real traffic scenarios include complex interactions among various road users, and handling complex clutter and modeling the interactions with other road users are necessary.4

Smart cameras face a complexity challenge in autonomous systems where visual information serves as the prime input modality, and understanding the state of the environment is a fundamental precondition for low-level control and planning high-level actions. The performance of environment perception is currently sufficient in dedicated settings and controlled environments. However, smart cameras still need to reach human-level performance in real-world (outdoor) settings, and current segmentation, detection, and tracking accuracies do not yet suffice in difficult conditions.

Reliability issues are closely related to functional complexity. Reliable environment perception is essential not only for high-safety applications, such as robotics and autonomous driving, but also for smart environments. These applications have very diverse settings and scales (from rooms to cities) and typically rely on events detected by cameras. Smart environments are built on a wide variety of algorithms, including motion, occupancy, and person detection, as well as more complex activity recognition. Despite the huge amount of captured data and the sensor fusion performed, robustness and reliability are still lagging behind expectations, limiting the deployment of smart environments.

Security
Smart cameras are profitable targets for security attacks, due to their omnipresence, multiple assets, and restricted defense mechanisms. The Mirai attack represented a prominent example of exploiting smart camera platforms for a distributed denial-of-service (DDoS) attack.5 The malware enslaved vast numbers of insufficiently secured smart cameras into a botnet, which was then used to launch the DDoS attack. Although the Mirai malware followed the very simple strategy of scanning for IoT devices still using their default passwords, the consequences were dramatic, with hundreds of sites and services being rendered inaccessible to millions of people worldwide for several hours.

In the Mirai attack, smart cameras were used to launch a DDoS attack because of the lack of basic security practices. Cameras are also the target of attacks. Various security flaws have been recently reported with IoT-based surveillance cameras.6 These vulnerabilities could essentially let an attacker view footage from every smart camera connected online, completely disable the camera, and use it to get inside your computer’s network.

Smartphones are particular camera devices that serve as source, destination, and manipulator of a huge amount of data and thus provide many assets that need protection. Examples of such assets include personal user data (keys, fingerprints, etc.), content providers’ data (protection from illegal copying on the user’s platform), and smartphone system protection (trusted boot and real-time kernel protection). Recent smartphone platforms rely on separate, isolated coprocessors, such as ARM’s TrustZone and Apple’s Secure Enclave, which are built into the platform’s main system on chip, or Google’s Titan M, which is realized as a dedicated security chip.

Hardware-enhanced protection significantly raises the security bar on these platforms. It is much harder to break than pure software defenses but still requires careful implementation



42 ComputingEdge December 2019


decision-making for autonomous vehicles,” Annu. Rev. Control, Robotics, Autonomous Syst., vol. 1, pp. 187–210, 2018. doi: 10.1146/annurev-control-060117-105157.

5. S. Khandelwal, “Chinese electronics firm to recall its smart cameras recently used to take down Internet,” Hacker News, Oct. 24, 2016. Accessed on: Dec. 25, 2018. [Online]. Available: https://thehackernews.com/2016/10/iot-camera-mirai-ddos.html
6. V. Dashchenko and A. Muravitsky, “Somebody’s watching! When cameras are more than just ‘smart,’” Kaspersky Lab, Mar. 12, 2018. Accessed on: Dec. 25, 2018. [Online]. Available: https://securelist.com/somebodys-watching-when-cameras-are-more-than-just-smart/84309/
7. T. Winkler and B. Rinner, “Security and privacy protection in visual sensor networks: A survey,” ACM Comput. Surv., vol. 47, no. 1, pp. 38, 2014. doi: 10.1145/2545883.
8. Wikipedia, “General Data Protection Regulation.” Accessed on: Dec. 31, 2018. [Online]. Available: https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
9. J. C. SanMiguel and A. Cavallaro, “Energy consumption models for smart camera networks,” IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 12, pp. 2661–2674, 2017.
10. C. Bobda and S. Velipasalar, Eds., Distributed Embedded Smart Cameras. New York: Springer, 2014.
11. J. Simonjan, J. M. Jornet, I. Akyildiz, and B. Rinner, “Nano-cameras: A key enabling technology for the Internet of multimedia nano-things,” in Proc. 5th ACM/IEEE Int. Conf. Nanoscale Computing and Communication, Reykjavik, Iceland, Sept. 2018. doi: 10.1145/3233188.3233209.

BERNHARD RINNER is a professor of pervasive computing at Univer-sität Klagenfurt and deputy head of the Institute of Networked and Embedded Systems. Contact him at [email protected].


Digital Object Identifier 10.1109/MC.2019.2911837

This article originally appeared in Computer, vol. 52, no. 5, 2019.


Page 45: > Visualization > Robotics > Autonomous Vehicles > Edge ... · Edge Computing 43 A Disaggregated Cloud Architecture for Edge Computing -RAFAEL MORENOVOZMEDIANO, EDUARDO HUEDO, RUBEN

2469-7087/19 © 2019 IEEE Published by the IEEE Computer Society December 2019 43


A Disaggregated Cloud Architecture for Edge Computing

Rafael Moreno-Vozmediano, Universidad Complutense de Madrid
Eduardo Huedo, Universidad Complutense de Madrid
Rubén S. Montero, Universidad Complutense de Madrid
Ignacio M. Llorente, Universidad Complutense de Madrid and Harvard University

Abstract—The use of a disaggregated cloud architecture would allow companies and institutions to deploy their own edge computing infrastructure and services by combining their on-premises cloud infrastructure with multiple highly dispersed edge nodes leased from third-party providers.

THERE IS GROWING interest, both from researchers and industry, in edge and fog computing paradigms. These have emerged as a solution for overcoming the limitations of traditional cloud computing platforms to support low-latency environments. According to recent estimates, over 20 billion devices will be connected to the network by 2020,1 which will result in exponential growth in the amount of data with low-latency processing needs.

There have been multiple reports of use-cases in which services, or at least parts of them, were run much more efficiently next to where the data were created and/or consumed. A good example is that of a web platform offering low-latency services to users distributed in various geographic locations (e.g., e-healthcare services, remote driving assistance, online immersive games, etc.). This platform can be implemented as a central web server and a variable number of application servers distributed over several edge locations. Another example is the emerging Internet of Things (IoT), which requires data processing near the devices in question.

Digital Object Identifier 10.1109/MIC.2019.2918079

Date of current version 15 July 2019.

Department: View from the Cloud. Editor: George Pallis, [email protected]

May/June 2019 Published by the IEEE Computer Society 1089-7801 © 2019 IEEE 31

Page 46: > Visualization > Robotics > Autonomous Vehicles > Edge ... · Edge Computing 43 A Disaggregated Cloud Architecture for Edge Computing -RAFAEL MORENOVOZMEDIANO, EDUARDO HUEDO, RUBEN



These use-cases usually require decoupling the application into two parts: one residing in a centralized cloud and another running on the network edge. For example, cloud providers including AWS and Microsoft Azure are starting to offer software solutions to extend their cloud services to the edge. They are not offering edge infrastructure, but rather, service software that extends cloud data processing functionality to IoT devices. AWS IoT Greengrass extends AWS to edge devices so they can act locally upon the data they generate. This service requires the installation of the AWS IoT Greengrass Core software on a Linux system to allow the local execution of AWS Lambda code, messaging, data caching, and security for a group of nearby IoT devices. Greengrass works as a cache and a hub for local AWS-enabled devices. Azure follows a similar approach with Azure IoT Edge.

Many companies need a way to elastically serve the dynamic demand of the edge portion of these emerging applications, for example, in application servers or IoT hubs. Moreover, in most cases, the end users and IoT devices are distributed over various geographic locations. Therefore, these companies require a framework capable of efficiently deploying and managing such distributed edge computing infrastructure, which can dynamically and opportunistically support the execution of these edge hub services. Thus, here we define a disaggregated cloud model that can manage and combine both the local physical resources of on-premises data centers and the resources of multiple highly dispersed edge nodes to build dynamic, agile, and decentralized edge computing environments.

One current question is: if the main cloud providers are not offering infrastructure at the edge, then who is? To meet the needs of edge computing, the main telecoms companies are building new 5G infrastructures to support faster mobile data speeds, thereby adding server power to their cell towers and central offices. These companies plan to offer these thousands of computing distribution points as a service to the providers of applications who require data to be transmitted and received with very low latency. Meanwhile, smaller hosting providers including Packet and others in the Kinetic Edge Alliance are introducing edge infrastructures with scattered microdata centers that offer latencies of less than 10 ms in the main international locations.

The disaggregated cloud architecture we propose here can manage small to medium edge computing infrastructures comprising several tens to hundreds of geographically distributed edge resources. These could be leased from the aforementioned third-party providers who offer on-demand resources located at the edge of the network in the form of bare-metal hosts or virtual servers. Furthermore, our model could easily be extended to larger platforms by using a peer-to-peer decentralized federation of clouds.

DISAGGREGATED CLOUD ARCHITECTURE
Most edge computing environments (e.g., OpenStack Trio2o, or ENORM2) are based on a distributed management model, where each edge node is independent and is managed by its own management platform, also known as a Cloud operating system (OS). These distributed models include an upper-level edge gateway responsible for interacting with clients, which routes client requests to the appropriate bottom edge nodes. This solution is highly scalable; however, it delegates most complex management tasks to the edge nodes, which can consume a lot of their resources. Furthermore, installing, configuring, and maintaining many Cloud OS instances can be an extremely complex and time-consuming task.

Here we propose a disaggregated management model based on a centralized Cloud OS that manages the local resources of an on-premises data center as well as those of different edge nodes, as shown in Figure 1. This model delegates most of the previously mentioned complexity to the Cloud OS, thus releasing the edge nodes from multifarious management tasks.

The main stakeholders in the disaggregated cloud architecture are as follows:

• Edge Application Providers: These are the companies offering edge applications to their customers. They run their own Cloud OSs to build a disaggregated private cloud that dynamically allocates physical resources at the edge offered by edge infrastructure providers and may also use resources from an on-premises cloud. They can deploy different types of services, such as web services using edge application servers to provide location-aware services and reduce latency,3 e-Health applications using edge e-Health gateways for rapid emergency responses,4 or IoT applications using edge IoT hubs for edge analytics.5
• Edge Infrastructure Providers: These are third-party providers that lease out physical resources at the edge in the form of bare-metal servers.
• Public Cloud Providers: The Cloud OS can also interact with one or more external cloud providers, including AWS or Azure, to offer cloud bursting capabilities.

The main functions of the Cloud OS are as follows:

• Management: As a cloud management platform,6 it manages physical resources located both in the on-premises cloud and the various dispersed edge locations and provides users with uniform access to this disaggregated resource pool. It is also responsible for the deployment and life-cycle management of virtual resources overlaid onto this physical infrastructure. It must be able to provide different virtualization technologies, such as virtual machines (VMs) or system containers, according to the service needs.
• Auto-scaling: The Cloud OS must also provide auto-scaling capabilities to accommodate the anticipated number of virtual resources demanded by the service. In this paper, we only consider reactive threshold-based mechanisms7 that allow service providers to define scaling rules based on different performance metrics. For each service component and location (edge node or cloud site), these rules are defined in terms of specific upper and lower thresholds for the selected metric. Thus, if a metric rises above or falls below these cut-offs at a given moment and specific location, a scaling action is triggered which adds or removes virtual resources at this position.
• Monitoring: In terms of scalability, the monitoring system is one of the most critical components of the Cloud OS, and one of its key parameters is the monitoring interval, i.e., the time that elapses between two consecutive readings of the monitoring metrics. This interval depends on several factors,8 including the number of resources to be monitored, their load, and the network latency between them and the monitoring system.
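The reactive threshold-based scaling rule can be sketched in a few lines. This is our own illustrative Python, not OpenNebula code; the 60/25-ms thresholds on the response-time metric and the three-reading sustained period mirror the values used later in the proof of concept.

```python
def autoscale_step(tr_ms, replicas, up_ms=60, down_ms=25, min_replicas=1):
    """One reactive auto-scaling decision for a single location
    (edge node or cloud site), driven by the response time TR in ms.
    Returns the new replica count: +1 above the upper threshold,
    -1 below the lower one, unchanged inside the band."""
    if tr_ms > up_ms:
        return replicas + 1                       # scale up
    if tr_ms < down_ms and replicas > min_replicas:
        return replicas - 1                       # scale down
    return replicas                               # within band: no action

def autoscale_sustained(history_ms, replicas, sustained=3):
    """Sustained-period guard (TSP): only act when the condition held
    for `sustained` consecutive monitoring readings, to avoid
    unnecessary fluctuations in the number of resources."""
    recent = history_ms[-sustained:]
    if len(recent) == sustained and all(t > 60 for t in recent):
        return replicas + 1
    if len(recent) == sustained and all(t < 25 for t in recent) and replicas > 1:
        return replicas - 1
    return replicas
```

In a deployment, one such rule would be evaluated per service component and per location each time the monitor delivers a fresh reading.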

The key challenges of this disaggregated model are scalability, orchestration, availability, and security. A disaggregated architecture would not scale properly for very large edge computing environments, but it is eminently suitable for small to medium private edge infrastructures consisting of tens of thousands of resources. The model can be extended to larger, worldwide platforms by using a peer-to-peer decentralized federation of cloud instances.

Figure 1. Disaggregated cloud architecture.

Regarding the orchestration and availability of geo-distributed virtual resources over multiple edge locations, the Cloud OS should allow the specification of placement constraints through the definition of VM groups and affinity rules over multiple availability zones, clouds, and edge locations.9 The application must be aware of the location of its users (managing also their mobility), and it is responsible for defining where the service components must be deployed.

The use of a single Cloud OS instance greatly simplifies security management, as it provides centralized tools for user authentication, access control, definition of security groups, and secure communication with edge nodes. Moreover, service level agreements with the edge infrastructure providers guarantee the integrity, availability, and authenticity of the edge nodes. In addition, secure VPN tunnels can be implemented to provide integrity and authentication at the application level.10

PROOF OF CONCEPT AND RESULTS
In this section, we show the viability of the disaggregated cloud architecture. Our experimental setup comprises a Cloud OS based on OpenNebula, an on-premises cloud, and a single edge node, located one hop from the clients/devices. OpenNebula is an ideal candidate for managing a disaggregated cloud because it is functional, simple, flexible, and light.

For the workload, we consider a generic IoT scenario such as the one shown in Figure 1. We assume that the application exhibits a typical performance behavior, with each device, on average, generating a request every ten seconds and each request consuming ten ms of CPU and eight ms of I/O. Figure 2(a) shows the response time (TR) of a single application server as a function of the server load, represented by the number of devices accessing the server. Furthermore, we also assume that the number of devices increases linearly from 100 to 2500, at a rate of 20 new devices per minute. We used VMs as the virtualization technology; however, the same mechanisms can also be used for system containers.
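To make the workload concrete: with N devices each issuing one request every ten seconds, the offered load on a server is N/10 requests per second. The sketch below turns that into a rough response-time curve using a textbook M/M/1 approximation with the 10-ms CPU demand as the service time; this is our own back-of-the-envelope illustration, not the measured profile of Figure 2(a).

```python
def arrival_rate(devices, think_time_s=10.0):
    """Open-loop arrival rate: each device sends one request per think time."""
    return devices / think_time_s              # requests per second

def mm1_response_ms(devices, service_ms=10.0, think_time_s=10.0):
    """Rough M/M/1 response-time estimate for one application server,
    treating the 10-ms CPU demand as the bottleneck service time.
    Illustrative only; the article uses a measured profile instead."""
    lam = arrival_rate(devices, think_time_s)  # offered load, req/s
    mu = 1000.0 / service_ms                   # service rate, req/s
    rho = lam / mu                             # server utilization
    if rho >= 1.0:
        return float("inf")                    # saturated server
    return service_ms / (1.0 - rho)            # mean response time, ms

# Example: 100 devices -> 10 req/s -> utilization 0.1 on a 100 req/s server.
```

Under this approximation the response time stays near the 10-ms service time at light load and grows sharply as utilization approaches one, which is the qualitative shape that makes the 60/25-ms thresholds meaningful.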

We analyzed and compared the following scenarios:

• Classic cloud: No edge resources were considered and all the service components were deployed within the on-premises cloud infrastructure.
• Distributed edge: Each edge node was managed by its own Cloud OS, and service components were deployed within the edge node closest to the devices.
• Disaggregated cloud: The edge nodes were managed by a central Cloud OS, and the service components were deployed in the edge node or cloud site closest to the devices.

The following parameters were used in these scenarios:

• Network latency (TNL): We assumed that there is a 50-ms network latency between the devices and the computing resources for the on-premises cloud (classic cloud) and 10 ms for the edge node (distributed edge and disaggregated cloud). These values were experimentally obtained from the different cloud and edge providers (AWS, Azure, and Packet).
• Auto-scaling metric and thresholds: To implement the threshold-based auto-scaling mechanism, we used TR as the service-performance metric. The scale-up and scale-down thresholds were set to 60 and 25 ms, respectively, as shown in Figure 2(a).
• Monitoring interval (TMI): The monitoring interval is higher in the disaggregated cloud, where the Cloud OS must monitor both the local resources and a variable number of remote edge resources. We assumed a one-minute monitoring interval for the classic cloud and distributed edge setups, and a two- to five-minute interval for the disaggregated cloud setup, depending on its size.
• Auto-scaling sustained period (TSP): To avoid unnecessary fluctuations in the number of resources assigned to the service, the condition that triggers an auto-scaling action (i.e., when the monitoring metric exceeds the established threshold) must be met for a sustained period; in this case, we set this period to three minutes.
• VM start-up time (TVS): This parameter depends on multiple factors.11 In most popular cloud platforms, it takes between one and six minutes. However, in real tests on Packet edge servers, we measured start-up times of four to 24 s; here, we considered a start-up time of one minute.
• Total auto-scaling delay (TAD): This represents the total time elapsed between the occurrence of a condition that triggers an auto-scaling action and the provision (or retraction) of a VM. It can be computed as TAD = TMI + TSP + TVS.

Figure 2. (a) Performance profile of a single application server. (b) Total response time.

Thus, using the aforementioned values, the total auto-scaling delay was calculated as five minutes for the classic cloud and the distributed edge scenarios and between six and nine minutes for the disaggregated cloud scenario.
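These delay figures follow directly from TAD = TMI + TSP + TVS; a quick check, with all values in minutes as given in the parameter list:

```python
def total_autoscaling_delay(t_mi, t_sp=3, t_vs=1):
    """TAD = TMI + TSP + TVS, all in minutes (defaults from the article:
    three-minute sustained period, one-minute VM start-up time)."""
    return t_mi + t_sp + t_vs

# Classic cloud and distributed edge: one-minute monitoring interval.
assert total_autoscaling_delay(t_mi=1) == 5
# Disaggregated cloud: two- to five-minute monitoring interval.
assert total_autoscaling_delay(t_mi=2) == 6
assert total_autoscaling_delay(t_mi=5) == 9
```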

Considering all these parameters, we ran our experiment for two hours; Figure 2(b) shows the total response time (Ttotal), taking into account both the application response time (TR) and the network latency (TNL). The peaks in these graphs represent the instants before the provision of a new VM, and so, Ttotal dropped after these were made available. Because it supports the lowest network latency and auto-scaling delay, the distributed edge had the lowest average Ttotal (63 ms), whereas the disaggregated cloud had a slightly higher average (64.1 ms to 68.5 ms) resulting from its higher auto-scaling delay. Finally, the classic cloud had the highest average Ttotal (143 ms) because of its higher network latency.

CONCLUSION
The disaggregated cloud model proposed in this paper represents a good trade-off between performance and design complexity. Compared to the more complex distributed edge architecture based on multiple Cloud OS instances (one per edge node), the disaggregated cloud can manage both on-premises and edge resources with a single Cloud OS instance and with minimal application performance degradation.

ACKNOWLEDGMENTS
This work was supported in part by Ministerio de Ciencia, Innovación y Universidades under Grant RTI2018-096465-B-I00, and in part by Comunidad de Madrid through the Research Program EDGEDATA-CM (P2018/TCS4499).

REFERENCES
1. M. Hung, “Leading the IoT: Gartner insights on how to lead in a connected world,” Gartner Inc., Stamford, CT, USA, 2017.
2. N. Wang, B. Verghese, M. Matthaiou, and D. S. Nikolopoulos, “ENORM: A framework for edge NOde resource management,” IEEE Trans. Serv. Comput., to be published, doi: 10.1109/TSC.2017.2753775.
3. J. Zhu et al., “Improving web sites performance using edge servers in fog computing architecture,” in Proc. IEEE 7th Int. Symp. Service-Oriented Syst. Eng., 2013, pp. 320–323.
4. A. M. Rahmani et al., “Exploiting smart e-health gateways at the edge of healthcare internet-of-things,” Future Gener. Comput. Syst., vol. 78, pp. 641–658, 2018.
5. P. Patel, M. Intizar Ali, and A. Sheth, “On using the intelligent edge for IoT analytics,” IEEE Intell. Syst., vol. 32, no. 5, pp. 64–69, Sep./Oct. 2017.
6. R. Moreno-Vozmediano, R. S. Montero, and I. M. Llorente, “IaaS cloud architecture: From virtualized datacenters to federated cloud infrastructures,” Computer, vol. 45, no. 12, pp. 65–72, 2012.
7. T. Lorido-Botran, J. Miguel-Alonso, and J. A. Lozano, “A review of auto-scaling techniques for elastic applications in cloud environments,” J. Grid Comput., vol. 12, no. 4, pp. 559–592, 2014.
8. J. S. Ward and A. Barker, “Cloud cover: Monitoring large-scale clouds with varanus,” J. Cloud Comput., vol. 4, no. 1, pp. 1–28, 2015.
9. R. Moreno-Vozmediano, R. S. Montero, E. Huedo, and I. M. Llorente, “Orchestrating the deployment of high availability services on multi-zone and multi-cloud scenarios,” J. Grid Comput., vol. 16, no. 1, pp. 39–53, 2018.
10. R. Moreno-Vozmediano, R. S. Montero, E. Huedo, and I. M. Llorente, “Cross-site virtual network in cloud and fog computing,” IEEE Cloud Comput., vol. 4, no. 2, pp. 46–53, Mar./Apr. 2017.
11. M. Mao and M. Humphrey, “A performance study on the VM startup time in the cloud,” in Proc. IEEE 5th Int. Conf. Cloud Comput., 2012, pp. 423–430.

Rafael Moreno-Vozmediano is an Associate Professor with Universidad Complutense de Madrid, Madrid, Spain. His research interests include distributed computing, edge/cloud computing, and networking. Contact him at [email protected].

Eduardo Huedo is an Associate Professor with Universidad Complutense de Madrid, Madrid, Spain. His research interests include distributed computing, cloud computing, and performance management. Contact him at [email protected].

Rubén S. Montero is Cofounder and Chief Architect with OpenNebula, and an Associate Professor with Universidad Complutense de Madrid, Madrid, Spain. His research interests include resource provisioning models for distributed systems and cloud computing. Contact him at [email protected].

Ignacio M. Llorente is Cofounder and Director of OpenNebula, Full Professor with Universidad Complutense de Madrid, Madrid, Spain, and Visiting Professor with Harvard University, Cambridge, MA, USA. His research interests include distributed, parallel, and data-intensive computing technologies, and innovative applications of those technologies to business and scientific problems. He is a Senior Member of IEEE. Contact him at [email protected].


This article originally appeared in IEEE Internet Computing, vol. 23, no. 3, 2019.



COLUMN: Backspace

Oscillation

Even today, computing technology continues to oscillate between localized and centralized data storage and processing.

When computers were first developed in the 1930s and 1940s (not counting Babbage’s Difference Engine and other similar machines of the 19th century), they tended to be large and single-program-at-a-time systems. We referred to “batch programming,” in which only one computing job at a time consumed the entire machine. The machines got faster, and one saw illustrations of large buildings theoretically filled with computer equipment that would run big computer programs in sequence. But then two strange things happened. The idea of making a machine share its computing power among multiple users (“time sharing”) was invented in the early 1960s, and machines began to get smaller and less expensive while delivering the same or even more computing capacity. Eventually, the centralized computing model was replaced by departmental computers and then by personal workstations, desktop and laptop computers, and now, by smartphones and tablets. Concurrent with this evolution, computer networking evolved from the 1960s eventually to become the global Internet, linking many of these computing devices together. The personal computers were reconfigured and repurposed to fill warehouses full of racks of computers that could be shared by millions of users, returning us to the age of shared, centralized computing systems but with a major distributed component of memory and computing capacity.

Now a new trend is emerging: so-called edge computing, with powerful computing capability positioned at the edges of the network, where personal devices also reside. The centralized facilities (warehouses) are still there, but now we have distributed computing power once again at multiple levels in organizations and even residential settings. The Internet of Things will reinforce this trend. Billions of small, programmable devices will populate our offices, homes, automobiles, and our persons. They will interact with, and sometimes be protected by, local computing capacity that shields the smaller devices from various kinds of attack or serves as a kind of backup in case the Internet becomes unreachable. We see a related trend as content distribution services such as Akamai download and store content (e.g., music, videos, and commonly referenced databases) to put it closer to the user’s access to the Internet.

The result of all this is a persistent oscillation between localized computing and data storage and centralized storage and processing. While these oscillations take place, we also end up with concurrent centralized and distributed models operating across the Internet. Architecturally speaking, these phenomena introduce a strong demand for synchronization of content in a highly distributed environment. This is a nontrivial problem that calls for the deep thinking that led to Leslie Lamport’s PAXOS family of protocols. The implications of this line of reasoning suggest that the oscillation between centralized and distributed computing will continue and that computing and storage will become normal adjuncts of almost all environments. In the extreme, we may find computing technologies such as the memristor becoming a normal part of many computing elements, so that the classical Von Neumann architecture that separates computing from memory blurs into a blended architecture that produces phenomena such as “instant on” that eliminate the need for “booting up” the operating system. Computation can continue where it was when power was turned off.

Vinton G. Cerf, Google
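The synchronization demand mentioned above is exactly what consensus protocols in the Paxos family address: getting a majority of replicas to agree on one value despite failures and competing proposers. The following is only an illustrative, heavily compressed sketch of a single-decree, Paxos-style round, not Lamport’s full protocol; the names (`Acceptor`, `propose`) are hypothetical, and real deployments must also handle message loss, persistence, and dueling proposers.

```python
class Acceptor:
    """One replica's vote state for a single decision."""

    def __init__(self):
        self.promised = -1      # highest ballot number promised so far
        self.accepted = None    # (ballot, value) last accepted, if any

    def prepare(self, ballot):
        """Phase 1: promise to ignore ballots lower than `ballot`."""
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        """Phase 2: accept the value if the promise still holds."""
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False


def propose(acceptors, ballot, value):
    """Run one round; return the chosen value, or None without a majority."""
    majority = len(acceptors) // 2 + 1
    replies = [a.prepare(ballot) for a in acceptors]
    granted = [prior for ok, prior in replies if ok]
    if len(granted) < majority:
        return None
    # Safety rule: if any acceptor already accepted a value,
    # adopt the one with the highest ballot instead of our own.
    prior_accepts = [p for p in granted if p is not None]
    if prior_accepts:
        value = max(prior_accepts)[1]
    votes = sum(a.accept(ballot, value) for a in acceptors)
    return value if votes >= majority else None


acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, ballot=1, value="song.mp3"))  # -> song.mp3
```

Note how a later proposer with a stale (lower) ballot is rejected, while a later proposer with a higher ballot is forced to re-propose the already-chosen value; that is the mechanism that keeps widely replicated content consistent.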

As newer computing technology emerges, perhaps in the form of bio-computing and storage systems, the ubiquity of computing and communication will become even more pronounced, leading the way towards the vision of Mark Weiser, who coined the term “ubiquitous computing” 30 years ago to describe his concept that computing and communications would blend into the background in such a way that we would not make much of a distinction between computers and everything else. Everything would have computational capacity, and communication would be embedded into an omnipresent infrastructure. As we experience new modalities of interaction with computing via voice and gesture, this vision may rapidly become reality.

BIO
Vinton G. Cerf is vice president and chief Internet evangelist at Google, and past president of ACM. He’s widely known as one of the “fathers of the Internet.” He’s a Fellow of IEEE and ACM. Contact him at [email protected].


This article originally appeared in IEEE Internet Computing, vol. 22, no. 4, 2018.


CAREER OPPORTUNITIES


www.aspenconsultinggroup.com

SOFTWARE VULNERABILITY RESEARCH POSITION

Aspen Consulting Group is seeking a highly motivated individual with academic or industry experience to lead research activities in the discovery of software vulnerabilities. The candidate will perform research and analysis of security vulnerabilities associated with a wide variety of architectures, including x86, MIPS, and ARM, amongst others, with emphasis on embedded software applications. Demonstrated experience with reverse-engineering tools, such as IDA Pro, the use of fuzzing techniques and tools, and an understanding of dynamic, symbolic, and concolic software analysis are essential. The candidate must possess an expert command of C/C++ programming; knowledge of assembly language for multiple architectures is a definite plus.

Degree and Required Skills:
• BS/MS degree in Computer Engineering, Computer Science, Electrical Engineering, or a related field. Candidates currently enrolled in an accredited MS degree program relevant to this position will be considered. Possessing a PhD degree or being a PhD candidate is preferred but not required.

• Experience with debuggers, code compilers and linkers, source code extractors, and disassemblers for Windows, Linux, Android, Apple OS X, and iOS platforms.

• Experience with software reverse engineering tools and methods.

• Experience with emulation tools such as QEMU and with code intermediate representations (IR).

• Ability to prepare technical reports and papers and to present findings to senior management and at technical conferences.

Primary Tasks:
The successful candidate will assume the lead role in discovering vulnerabilities within identified applications in support of a research team. He/she will collaborate in identifying software patches and in the analysis of potential exploits that could have resulted from the discovered vulnerabilities.

In addition, the candidate will be integral to our current research efforts in automated vulnerability discovery with the application of machine learning (ML) techniques.

Travel Requirements: 10%-25% travel.
U.S. Citizenship required.

About the Company
Aspen Consulting Group Inc. (ACG) provides a wide range of services encompassing multiple technical disciplines. Our staff is recognized for expert knowledge in Hardware Design and Fabrication, Software Development, Computer Network Operations, Wireless Systems Engineering, and System Development, Integration and Testing. Aspen's corporate headquarters is located in Manasquan, NJ, one mile from the Atlantic Ocean. This position is located in Northeast Maryland, close to Baltimore and Washington, D.C.

All benefits are paid by the company with no employee contributions!
• Employer-paid medical, dental, and vision plans
• Paid Leave (four weeks of vacation/sick time)
• Employer-paid life insurance ($100K per employee)
• Paid prescription drug plan
• Company-funded Health Reimbursement Account
• 10 paid holidays per year
• 401k Retirement Plan (immediate vesting; employer matches up to 4% of employee's contribution)
• Performance Bonus plan
• Employee Referral Bonus
• Employee tuition reimbursement

Please email resumes to Amy Russell at [email protected].




CALIFORNIA INSTITUTE OF TECHNOLOGY
The Computing and Mathematical Sciences (CMS) Department at the California Institute of Technology (Caltech) invites applications for tenure-track faculty positions in all areas of applied mathematics, computer science, and related disciplines.

Areas of interest include (but are not limited to) algorithms, data assimilation and inverse problems, dynamical systems and control, geometry, machine learning, mathematics of data science, networks and graphs, numerical linear algebra, optimization, partial differential equations, probability, scientific computing, statistics, stochastic modeling, and uncertainty quantification. Application foci include computational physical and life sciences, distributed systems, economics, graphics, quantum computing, and robotics and autonomous systems. The CMS Department is part of the Division of Engineering and Applied Science (EAS), comprising researchers working in and between the fields of aerospace, civil, electrical, environmental, mechanical, and medical engineering, as well as materials science and applied physics. The Institute as a whole represents the full range of research in biology, chemistry, engineering, physics, and the social sciences.

A commitment to world-class research, as well as high-quality teaching and mentoring, is expected, and appointment as an assistant professor is contingent upon the completion of a PhD degree in applied mathematics, computer science, engineering, or the sciences. The initial appointment at the assistant professor level is four years and is contingent upon completion of a PhD degree. Reappointment beyond the initial term is contingent upon a successful review conducted prior to the commencement of the fourth year.

Applications will be reviewed beginning 15 November 2019, but all applications received before 15 January 2020 will receive full consideration. For a list of required documents and full instructions on how to apply online, please visit https://applications.caltech.edu/jobs/cms. Questions about the application process may be directed to [email protected].

Caltech is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

The University at Albany is an EO/AA/IRCA/ADA Employer.

Faculty Chair Position Computer Science Department

The College of Engineering and Applied Sciences of the University at Albany SUNY is seeking applicants for a full professor faculty position beginning September 2020 for Chair of the Department of Computer Science. The successful candidate will have an established record of scholarship with demonstrated potential to influence the growth and development of computer science programs at the University at Albany.

Applicants must have a PhD in Computer Science, Computer Engineering, Electrical Engineering, or a closely related discipline.

For a complete job description and application procedures, visit:

https://albany.interviewexchange.com/jobofferdetails.jsp?JOBID=117256

Questions regarding the position may be addressed to [email protected]

For additional information on the College and its departments, please visit: http://www.albany.edu/ceas/

IEEE Computer Society Has You Covered!

WORLD-CLASS CONFERENCES — 200+ globally recognized conferences.

DIGITAL LIBRARY — Over 700k articles covering world-class peer-reviewed content.

CALLS FOR PAPERS — Write and present your ground-breaking accomplishments.

EDUCATION — Strengthen your resume with the IEEE Computer Society Course Catalog.

ADVANCE YOUR CAREER — Search new positions in the IEEE Computer Society Jobs Board.

NETWORK — Make connections in local Region, Section, and Chapter activities.

Explore all of the member benefits at www.computer.org today!


CALL FOR PAPERS
IEEE Intelligent Systems Special Issue on Learning Complex Couplings and Interactions

Deadlines:
Submissions due: 30 March 2020
First revisions due: 30 July 2020
Final revisions due: 30 October 2020
Publication: January/February 2021

The world is becoming increasingly complex, with a major drive to incorporate increasingly intricate coupling relationships and interactions between entities, behaviors, subsystems, and systems and their dynamics and evolution of any kind and in any domain. Effective modeling of complex couplings and interactions in complex data, behaviors, and systems directly addresses the core nature and challenges of complex problems and is critical for building next-generation intelligent theories and systems to improve machine intelligence and understand real-life complex systems. This special issue on learning complex couplings and interactions will collect and report the latest advancements in artificial intelligence and data science theories, models, and applications of modeling complex couplings and interactions in big data, complex behaviors, and complex systems. This special issue will solicit the recent theoretical and practical advancements in learning complex couplings and interactions in areas relevant to, but not limited to, explainable learning, representation learning, hybrid intelligent systems and problems, large-scale simulation, visual analytics and visualization, big data and complex behaviors, human-understandable explanations, deep learning, and modeling applications and tools.

Topics of interest include:
- Coupling learning of complex relations, interactions, and networks and their dynamics
- Interaction learning of activities, behaviors, events, processes, and their sequences, networks, and dynamics
- Learning natural, online, social, economic, cultural, and political couplings and interactions
- Learning real-time, dynamic, high-dimensional, heterogeneous, large-scale, and sparse couplings, dependencies, and interactions
- Probabilistic and stochastic modeling of coupling and interaction uncertainty and dependency

Guest editors:
Dr. Can Wang (Griffith University, Australia)
Prof. Fosca Giannotti (University of Pisa, Italy)
Prof. Longbing Cao (University of Technology Sydney, Australia)

Submission guidelines: computer.org/ex/author-information
Contact the guest editors at [email protected]


Keep up with the latest IEEE Computer Society publications and activities wherever you are. Stay connected.

Follow us:

| @ComputerSociety

| facebook.com/IEEEComputerSociety

| IEEE Computer Society

| youtube.com/ieeecomputersociety

| instagram.com/ieee_computer_society


CALL FOR PAPERS
IEEE Internet Computing Special Issue on the Internet of Bodies (IoB)

Deadlines:
Submissions due: 13 January 2020
Revisions due: 13 April 2020
Final version due: 1 June 2020
Publication: July/August 2020

As healthcare solutions and augmented monitoring of human mobility overlap with the Internet of Things (IoT), an emerging area of interest leverages sensor networks that monitor personal health data and human activity. The Internet of Bodies (IoB) effectively connects the human body to the Internet. The Internet of Sports (IoS) further defines IoB for devices and software that monitor human activity in the fitness and competitive sports domain. The advanced capabilities and performance of these high-tech biological devices open a full array of opportunities and challenges as healthcare professionals develop new medical solutions and assistive technologies. There is an increasing number of IoB applications for personal mobile devices; there are also new and exciting devices, such as smart contact lenses for adapted vision correction, cochlear implants for enhanced hearing, and electronic pills that monitor internal organs. Internet computing, software engineering, human-computer interaction, and data and networking technologies are essential to creating next-generation IoB technologies. This special issue aims to collect the most recent practical and theoretical advances in IoB, including cutting-edge techniques for system development and implementation, networking approaches, privacy and security considerations, and human-sensor interactions.

Topics of interest include:
- IoB or sensor networks supporting human-oriented sensing/functions
- Sensor networks for biological monitoring and enhancement
- Security and privacy for IoB, IoT, and IoS
- IoB/IoS-enabled innovation and entrepreneurship
- IoB/IoS applications and services
- Experimental results and deployment scenarios
- Electronics and signal processing for IoB/IoS/IoT
- Connectivity and networking

Guest editors:
M. Brian Blake, Drexel University
Schahram Dustdar, Vienna University of Technology
Nagarajan Kandasamy, Drexel University
Xuanzhe Liu, Peking University

Submission guidelines: computer.org/ic/author-information
Contact the guest editors at [email protected]



Conference Calendar
Questions? Contact [email protected]

IEEE Computer Society conferences are valuable forums for learning on broad and dynamically shifting topics from within the computing profession. With over 200 conferences featuring leading experts and thought leaders, we have an event that is right for you.

Find a region: Africa ■, Asia ▲, Australia ◆, Europe ●, North America ◗, South America ★

January
13 January
• ICCPS (Int’l Conf. on Cyber-Physical Systems) ●

February
3 February
• ICSC (IEEE 14th Int’l Conf. on Semantic Computing) ◗
18 February
• SANER (IEEE 27th Int’l Conf. on Software Analysis, Evolution and Reengineering) ◗
19 February
• BigComp (IEEE Int’l Conf. on Big Data and Smart Computing) ▲
22 February
• CGO (IEEE/ACM Int’l Symposium on Code Generation and Optimization) ◗

March
2 March
• WACV (IEEE Winter Conf. on Applications of Computer Vision) ◗
9 March
• DATE (Design, Automation & Test in Europe Conf. & Exhibition) ●
• IRC (4th IEEE Int’l Conf. on Robotic Computing) ▲
16 March
• ICSA (IEEE Int’l Conf. on Software Architecture) ★
22 March
• VR (IEEE Conf. on Virtual Reality and 3D User Interfaces) ◗
23 March
• ICST (13th IEEE Conf. on Software Testing, Validation and Verification) ●
• PerCom (IEEE Int’l Conf. on Pervasive Computing and Communications) ◗

April
5 April
• ISPASS (Int’l Symposium on Performance Analysis of Systems and Software) ◗
14 April
• PacificVis (IEEE Pacific Visualization Symposium) ▲
20 April
• ICDE (IEEE 36th Int’l Conf. on Data Eng.) ◗

May
3 May
• FCCM (IEEE 28th Annual Int’l Symposium on Field-Programmable Custom Computing Machines) ◗


4 May
• HOST (IEEE Int’l Symposium on Hardware Oriented Security and Trust) ◗
18 May
• SP (IEEE Symposium on Security and Privacy) ◗
• FG (IEEE Int’l Conf. on Automatic Face and Gesture Recognition) ★
• IPDPS (IEEE Int’l Parallel and Distributed Processing Symposium) ◗
23 May
• ICSE (IEEE/ACM 42nd Int’l Conf. on Software Eng.) ▲
30 May
• ISCA (ACM/IEEE 47th Annual Int’l Symposium on Computer Architecture) ●

June
14 June
• CVPR (IEEE Conf. on Computer Vision and Pattern Recognition) ◗
16 June
• EuroS&P (IEEE European Symposium on Security & Privacy) ●
19 June
• JCDL (ACM/IEEE Joint Conf. on Digital Libraries) ▲
29 June
• DSN (50th Annual IEEE/IFIP Int’l Conf. on Dependable Systems and Networks) ●
30 June
• MDM (21st IEEE Int’l Conf. on Mobile Data Management) ●

July
6 July
• ICME (IEEE Int’l Conf. on Multimedia and Expo) ●
8 July
• ICDCS (IEEE 40th Int’l Conf. on Distributed Computing Systems) ▲
13 July
• COMPSAC (IEEE Annual Computer Software and Applications Conference) ●

August
31 August
• RE (IEEE 28th Int’l Requirements Eng. Conf.) ●

September
21 September
• ASE (35th IEEE/ACM Int’l Conf. on Automated Software Eng.) ◆
28 September
• ICSME (IEEE Int’l Conf. on Software Maintenance and Evolution) ◆
• SecDev (IEEE Secure Development) ◗

October
18 October
• MODELS (ACM/IEEE 23rd Int’l Conf. on Model Driven Eng. Languages and Systems) ◗
21 October
• FIE (IEEE Frontiers in Education Conf.) ●
25 October
• VIS (IEEE Visualization Conf.) ◗

November
9 November
• FOCS (IEEE 61st Annual Symposium on Foundations of Computer Science) ◗
15 November
• SC ◗
16 November
• LCN (2020 IEEE 45th Conf. on Local Computer Networks) ◆

Learn more about IEEE Computer Society Conferences
www.computer.org/conferences



Publish your work in the IEEE Computer Society’s flagship journal, IEEE Transactions on Computers (TC). TC is a monthly publication with a wide distribution to researchers, industry professionals, and educators in the computing field.

TC seeks original research contributions on areas of current computing interest, including the following topics:

• Computer architecture
• Software systems
• Mobile and embedded systems
• Security and reliability
• Machine learning
• Quantum computing

All accepted manuscripts are automatically considered for the monthly featured paper and annual Best Paper Award.

Learn about calls for papers and submission details at www.computer.org/tc.

Call for Papers: IEEE Transactions on Computers