
An Internet of Things approach to domestic noise
Guillermo García

HCI MSc Final Project Report [2017]
UCL Interaction Centre, University College London

Supervisor: Enrico Costanza

Figure 1. User journey of our IoT system for domestic noise. Step 1: Connecting the base station to the internet. Step 2: Synchronising and configuring the sensor node. Step 3: Waiting 24 hours for the data to show on the tablet.

ABSTRACT
Urban areas are the nerve centres of today's world; as population density increases, we live closer to each other and, sometimes involuntarily, become more noticeable to others. Cohabitants under the same roof have to cope with sound pressure from flatmates and neighbours, often leading to tension and disagreement. Starting from an original idea of the researcher, we studied how domestic noise could be approached by designing a novel data visualisation to which users can refer in order to make sense of the noise they force onto other members of the household. We tackled the challenges of designing an Internet of Things (IoT) system that could be installed at home, worked out of the box with the minimum configuration possible, and supported our visualisations with data. The IoT system was deployed in two flats situated in suburban areas of London. The data visualisation showing the amount of sound pressure generated by participants during the field study, and the usability of the system, were the focuses of evaluation of our work.

MSC HCI-E FINAL PROJECT REPORT
Project report submitted in part fulfilment of the requirements for the degree of Master of Science (Human-Computer Interaction) in the Faculty of Brain Sciences, University College London, [2017].
NOTE BY THE UNIVERSITY
This project report is submitted as an examination paper. No responsibility can be held by London University for the accuracy or completeness of the material therein.

Qualitative results extracted from exit interviews and the researcher's observations backed our initial assumptions: our five participants found the noise visualisation both informative and helpful in better understanding how their daily activities impact domestic noise.

ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous

Author Keywords
Internet of Things (IoT); domestic noise; sound pressure; in the wild; data visualisation.

MSc Contribution Type
Design

INTRODUCTION
Sound nuisance control and arbitration in London is managed independently by each borough council. As an example, Westminster borough noise complaints tally an average of 17,000 per year, with other boroughs, such as Hackney or Tower Hamlets, following closely. In residential areas, where people live in closer proximity due to the high density of flats, around half of all the investigations carried out by Westminster council relate to noise generated by neighbours shouting, barking dogs, loud music or televisions, or Do-It-Yourself (DIY) activities; we refer to this throughout the document as DIY noise.

A CityWest Homes survey found that noisy neighbours rank third in the list of problems reported by their clients. European Union authorities require member states to develop a standardised action plan to address the issue in urban areas. However, the EU does not set limits for domestic sound levels, instead allowing local authorities to establish these maxima. Barcelona sets ceilings of 60 decibels (dB) during day hours and 50dB at night, the latter being the equivalent of a normal conversation at home. The sharing economy is also substantially affecting Barcelona, with neighbours complaining about anti-social behaviour in general and noise in particular, putting peaceful coexistence in buildings at risk; often, however, it is simply a matter of tourists not being aware that rules exist [34].

Against this backdrop, new design opportunities emerge for researchers to better understand the problem of household noise through the HCI prism. Our motivation is to explore whether it is possible to reflect on both the sound pressure we are exposed to and that which we force onto others in their own homes.

Sound is often perceived as noise when it is unwanted or has no perceived value. Noise perception is subjective and is determined by the acoustic environment surrounding the listener, plus their personal associations with the sound [42]. Unwanted sound of higher intensity is more likely to be perceived as noise; this characteristic of sound is measured in decibels (dB), a logarithmic unit that compares a value against a reference value. A sound like a baby crying can be as loud as 110dB, while the sound intensity a TV set normally produces sits around 70dB.

While most prior work looked at noise from the point of view of outdoor and city-level noise and "noise made by others", this project looks at domestic noise and aims to help people reflect upon and make sense of the noise they make. Based upon the aforementioned information, it seemed logical to develop a data visualisation which displayed quantified sound in dB. Data visualisations help in identifying trends, patterns and unusual occurrences in datasets [51]. Although the focus was not on the identification of noise sources, we tried to take into account DIY noise and the identification of noise generation patterns, which impact the overall noise within the household. Previous "in the wild" studies have shown that users find it difficult to make sense of data if they do not perceive it to be correct [4, 23], so our efforts aimed at having users assess their personal noise data as correct. The phases of user requirements gathering, data analysis, and prototyping and evaluation of the visualisation design are reported in Part I of this document.

As we were planning the deployment of an IoT system, we realised one big challenge was setting up a system that could cover all the rooms of a given household to sense the environment, which can be a daunting task. Wireless communication between senders and receivers is an important barrier to overcome, and battery expenditure optimisation a considerable challenge, during the design of an IoT system with these characteristics. Research has given evidence that modern IoT devices left to their own users hindered sustained engagement when users had technical difficulties configuring them [4]. We envisioned our system so it could be installed unsupervised and work "out of the box", minimising the setting up of the system for the final user. This is one of the hurdles addressed in this project with the help of interaction design techniques. The evaluation of the interaction yielded promising results and paved the way to keep refining the project's final prototype in the right direction.

Figure 2. Final artifacts of our IoT system for domestic noise

The system and infrastructure element of this project is built on hardware and software developed in previous projects, like the Temperature Calendar [12], and similar parallel ongoing research. The Temperature Calendar project also had to deal with wireless communications between off-the-shelf sensors and a central station; its outcomes were useful, but we also adopted important changes to adapt to the particular technicalities of our study. Temperature sensors were replaced by digital microphones, and an upgraded version of the microcontrollers was used; we also reused the back-end code for data persistence and hardware communication. We believe the work we did in this area is a technical contribution towards designing a functional audio-sensing IoT system at home.

There are two major contributions of this project to the field of HCI. From a strictly technical point of view, we created a prototype for an IoT system deployed in the wild around audio sensing, and we report on how we identified and tried to solve the challenges of engineering and designing it for easy configuration by the final user. Also worth mentioning is the knowledge acquired during the design and evaluation of a data visualisation to reflect on DIY noise in an effective way; we can only hope researchers and practitioners will be able to benefit from reading about our successes and failures in this field.

IoT systems need to be evaluated in the context they were designed for, as their success strongly depends upon how well they support users with tasks in situ. The evaluation of the interactive prototype was carried out with participants during a week-long in-the-wild study. The system was deployed in two private homes and evaluated at the end of the study, resulting in the gathering of qualitative data which, after analysis, gave us positive insights about the effect of our interventions on the process of reflecting on DIY noise and its direct implications for the HCI community.

We adopted a user-centred design (UCD) approach to support the stages of this project's life cycle. The phases of data gathering, analysis of requirements, prototyping, refinement, and evaluation are documented in this report. We opted to separate this document into two parts due to the nature of our study: our motivation in learning about human factors related to noise, and the search for a convenient deployment of an IoT system in the wild. In Part I the reader will find a description of the design process followed to understand the problem of domestic noise, analyse it, and design a visualisation which supports users in reflecting on noise. Part II is more technical and is devoted to illustrating the design process of the system supporting the visualisation designed in the previous part. Expert reviews delimited the boundaries between iterations in both parts.

RELATED WORK
Sensing audio has been used previously in HCI with diverse motivations: sometimes as a way of inferring information regarding bathroom activities, where recordings proved easy to analyse for classifying the sources of sound while also raising unanswered privacy concerns [10]; on other occasions, researchers employed audio sensing to measure water consumption and evaluate the persuasiveness of ambient displays versus more descriptive, "hard on data" displays [32]. Audio sensors are inexpensive and easy to install in clever ways to cover the areas of the house we are interested in studying [19].

The literature illustrates several participatory sensing solutions for measuring noise pollution. Stevens et al. designed a mobile application so users could build a noise map of cities, but users found using their phones impractical, as the activity of sensing interfered with normal use [46]. A more successful attempt, despite being difficult for the less tech-savvy to set up, emerged in the work by Balestrini et al. [4] reporting on Smart Citizen: unsupervised sensing devices, placed in strategic places, allow for a better crowdsourced sensing experience with results just as accurate. Smart Citizen has also been used in later studies to research the potential of unconventional ambient displays at home for making sense of air quality, humidity, noise and temperature data [23].

Several studies on noise measurement and the public sharing of spaces have been conducted before. Wozniak et al. [49], partly sharing our motivation, studied the effects on overall levels of quietness of an ambient display installed in a study room and visible to everyone. Their results showed that almost all noise registered came from people entering and leaving the room, a predictable outcome for a location where rules of quietness are implicit. Our research set its eyes on noise at home, which is also a shared space and a place where potentially noisier activities will be held, but where the level of noise tolerated by others is not always clear. Concerning design to mediate between neighbours in noise-related disputes, there have been some attempts, but they provide little empirical evidence [35].

There is a large body of previous work on sound visualisation, sometimes motivated by improving accessibility for the hearing impaired [3] or by empowering musicians [18] and singers [48] with tools to fix deficiencies in the sounds they produce. An interesting approach to discovering patterns in sound data visualisations is reported by Carlis et al. [8]; in their work, they illustrate how it proved useful for researchers to see sound data extracted from recordings plotted in a spiral form, so that recordings could be shortened or lengthened without loss of quality where the data lines up due to no sound occurring or sound cancelling out. Our work reveals the big picture of domestic noise by lining up sound data where relevant.

In general, researchers have shown ambient displays to be more useful than informative ones when the goal is to promote sustainability [31, 32]. However, Kuznetsov et al. [32] suggested as an implication for design that informative displays allow users to define acceptable ranges and inspire analysis of their everyday activities. In line with this, we believe that a more descriptive visualisation of noise occurrences will contribute better to reflection, as has been shown in the past in similar technology deployments [12].

Costanza et al. suggest a mix of off-the-shelf sensors and current web technologies in home settings [13] and in work environments where policies are agreed collectively [12]. Both IoT solutions expect the user to reflect, through direct cause-consequence relationships, on energy consumption in the former study and on levels of temperature comfort in the latter. These studies set the technological foundations of this project's trial in the wild. We can learn from [13] that it is possible to design visualisations that "engage users with their data beyond mere representation". The take-home point of [12] is the relationship between policies and engagement, rather than a focus on motivating the individual.

Orchestrating devices to work under the same network and connect to remote ones normally involves a hidden complexity of wires [29] that could and should be simplified by giving the devices wireless capabilities. This, however, comes at the price of overcomplicating the configuration with authentication steps to ensure data confidentiality and integrity [45]. In an attempt to simplify wireless connection between IoT devices, Jewel et al. [29] explored four different strategies to easily pair devices in a Wi-Fi infrastructure. We can take from the latest literature that it is important to design IoT devices with minimal user interfaces that easily connect with each other, in an attempt to bring the motto of plug-and-play closer to the IoT.

PART I: VISUALISATION

User Research
Work on the user research phase commenced with background research, semi-structured interviews and a small auto-ethnographic study by the researcher. To commence this stage of discovery we thought it important to focus on the dimension of the problem at a personal level, the public figures, and how the authorities mediate between parties, so we reviewed several official reports available online.

To gain deeper insight, the researcher interviewed a representative of a London borough, head of a team in charge of dealing with noise complaints. In this borough, domestic complaints do not get investigated unless the source of the noise is identified as some piece of faulty equipment. In addition, a set of informal interviews was conducted with people who live in shared accommodation, about the ways in which noise nuisances are perceived at home. Important insights from this initial set of interviews were:

• Users sometimes don't bother to complain and simply tolerate the noise.

• They are mainly concerned about noise when they are at home, as they see it as somewhere to be tranquil most of the time.

• They see noise as a problem, but not everyone perceives it to have the same level of importance.

• Interviewees felt embarrassed when they were asked to keep the noise down.

• Mostly they don’t know whether their lifestyle is a nuisancefor others or not.

Finally, the researcher conducted a small-scale auto-ethnographic study reflecting on past experiences with domestic noise; this is reported in the literature as a method to understand the cultural experience by analysing the personal one [17].

All the data gathered was translated into user requirements for our data visualisation, which can be summarised in the following three:

• To allow people to identify the times of the day when they make noise.

• To identify recurring patterns that define their lifestyle and lead to making noise.

• To raise awareness of when they are undergoing noisy activities.

Prototype
The nature of the project calls for a high-fidelity prototype, as the evaluation will extend over time, in the wild, and has to be based on real data that heavily depends on the implementation. It would not be wise to evaluate a novel data visualisation intended for reflection on one's DIY noise using mock-up data that does not accurately represent real events. Genov et al. [20] suggest that using real data to test prototypes reduces the artificial nature of the lab, reduces users' cognitive load, and makes users more emotionally engaged with the situation.

1st ITERATION - Sound data and initial sketches
Understanding sound and sound data
After analysing the findings and establishing the user requirements, we could reformulate the design problem, specify the scope of our solution, and foresee how to shape the eventual evaluation.

Figure 3. Amplitude of sound waves and how they map to sound volume widgets

Before putting our ideas on paper, we had to study what sound is and which quantitative features of sound were interesting for our study. We accept that nowadays most users are familiar with what the volume of a sound is, because we see it represented all the time when we listen to an audio file on an electronic device. What determines the loudness or volume of a sound wave is its amplitude, in a directly proportional relationship: a bigger amplitude means more loudness. We usually see sound volume as an abstraction of the lowest and highest amplitude of a given sound wave (Figure 3); volume control widgets are mapped to the lowest and highest amplitude the device at hand can transmit to the air, thus regulating the volume. Amplitude is also directly related to the intensity of a sound, which is the energy carried by a sound wave; from here, the Sound Pressure Level (SPL) is a logarithmic measure that represents the deviation of the measured pressure of a given sound from that of a reference sound (Equation 1). SPL is measured in decibels (dB).

$L_p = 20 \log_{10}\left(\frac{p}{p_0}\right)\ \mathrm{dB}$   (1)

where $p$ is the measured sound pressure and $p_0$ the reference pressure.
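As a minimal illustration of Equation 1 in Python, assuming the standard reference pressure of 20 µPa in air (the report does not state the reference actually used on the sensor nodes):

    import math

    # Standard reference sound pressure in air (20 micropascals),
    # roughly the threshold of human hearing; used as p0 in Equation 1.
    P_REF = 20e-6

    def sound_pressure_level(pressure, p_ref=P_REF):
        # Sound Pressure Level in dB for a measured pressure in pascals.
        return 20 * math.log10(pressure / p_ref)

    # A pressure of 0.02 Pa is 1000 times the reference, i.e. 60 dB:
    # about the level of a normal conversation.
    print(round(sound_pressure_level(0.02)))  # -> 60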

Visualizing sound data
Usually audio is displayed in the form of spectrograms, highlighting the frequencies of a sound, which we judged as not having much meaning for us, or audio signal plots, displaying sound amplitude against time. It was decided to plot the data against time in the form of a time series graph, because sound data is continuous, as temperature is for example, and constantly changes over time. Cartesian graphs do a good job of facilitating the elementary perceptual task of judging the position of a data point along a common scale [11], and we wanted users to easily and quickly grasp changes in sound readings. The most common thing a reader will look at in a time series graph is trends in the data (i.e. whether it is increasing or decreasing), so it is important to highlight the interesting parts so users know where to look [50]. This tells us we need to discern when a sound reading is relevant for analysis; in other words, when a certain sound is high enough to be classified as noise.

In Figure 4 we can see a first sketch of how we thought the visualisation could look. The sound data is plotted against a timeline representing a whole day of the week; this is what we called the Yesterday View. At first, to represent whether a sound data point is considered noise or not, we thought of encoding this information by the intensity of the sound, splitting the surface of the graph into three bands representing low, medium and high sound. Colour codes (green for low, blue for medium, and red for high) in the plotted data would indicate the band each data point belongs to. We also considered modifying the curvature of the line linking all the data points to convey sound level, an idea developed further in a later iteration.

Figure 4. Initial sketch of Yesterday View

Yau et al. stress the importance of storytelling in data visualisation as a way to inform, entertain and/or persuade the reader; they also believe in storytelling as a starting point towards finding patterns and relationships in the data [50]. Once sound data started flowing into the system, and after a second day of recording, comparisons between consecutive days would become an option and patterns could arise in the data. With the sketch in Figure 4 in mind, and after a session of ideation, a novel visualisation was designed. It consists of stretching out the line plotted in the graph in Figure 4 for each day of the week, then plotting all seven lines as concentric rings, in the hope it would be easy to find patterns in the data occurring at certain points of the day (Figure 5). This visualisation would be contained in the Summary View. No interaction with the visualisation was designed at this time.

An expert review with the supervisory team evaluated the novelty of the visualisations and their appropriateness in addressing the user requirements we previously set as a target. On paper, the solutions and concepts proposed were agreed sufficient to progress with into a second iteration. It looked necessary to see real data plotted in this manner to further evaluate whether the visual encodings worked as we envisioned.

2nd ITERATION - First plots
Working with real sound data
At the end of the previous iteration, it was decided to gather real sound data and validate the design based on how it looked. Sound data for 12 hours was gathered and, during the process, the researcher systematically annotated the different activities that occurred while the hardware was recording at home (i.e. listening to music, watching television, a dinner with friends, a work-out session, ...). The minimum amount of code was written to load and plot the data with D3. D3 is a popular library built in Javascript (JS) and used to produce powerful visualisations based on data using HTML standards. Like many other JS libraries, D3 adopts its own paradigm to manipulate elements in the DOM of a typical HTML document and to allow developers to design the interaction, the dynamics, and the look and feel of data visualisations.

Figure 5. Initial sketch of Summary View

Figure 6. 12 hours of raw sound data in dB plotted with D3

We were surprised by the variability of the data given its granularity (one reading per minute tallies to 1,440 data points on a single day), making our early sketches nowhere near accurate (Figure 6 vs Figure 4). The possibility of smoothing the signal with a moving average function was suggested and implemented, but the resulting graph was also not close to our sketches, and high-noise data points appeared shifted in time, which invalidated the data. In Figure 7 a peak function is applied to the initial graph; peak detection is used in data visualisations to identify points of interest by searching for a local maximum over a certain threshold [14]. This was also part of our exploration in the search for a more visually appealing result that did not lose the meaning of the data. The peak function allows for better observation of abrupt changes in the data (when noise starts and stops) and looks closer to what we had in mind, but it also conceals almost all the data in the no-noise area.
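Both explorations are easy to reproduce. The Python sketch below shows a trailing moving average and a simple threshold-based peak detector over one-per-minute dB readings; the window size and the 70dB threshold are illustrative assumptions, not the values used in the study:

    def moving_average(readings, window=5):
        # Trailing mean over the last `window` readings. Note how the
        # smoothed peaks lag behind the raw ones: this time shift is why
        # we discarded the approach.
        out = []
        for i in range(len(readings)):
            lo = max(0, i - window + 1)
            out.append(sum(readings[lo:i + 1]) / (i + 1 - lo))
        return out

    def peaks(readings, threshold=70.0):
        # Local maxima above a threshold: the points of interest that a
        # peak function keeps while flattening everything else.
        return [(i, v) for i, v in enumerate(readings)
                if v > threshold
                and (i == 0 or readings[i - 1] <= v)
                and (i == len(readings) - 1 or v > readings[i + 1])]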

Figure 7. Peak function applied to the data in Figure 6

Plenty of online resources can be found classifying sources of noise by their intensity [1, 2]. To our eyes, and derived from the initial set of findings, this made it easier to classify what one person considers noise: certain dB readings could map users' mental model of noise onto the colour classification of low or no noise for green, medium or normal noise for blue, and high or too much noise for red. A poor match between a user's mental model and the visual encoding imposes extra cognitive load on the user [39], so we decided to display dB on the y-axis of our time series graph.
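In code, this encoding reduces to a simple mapping; the cut-off values below are assumptions for illustration, since the report does not fix the band boundaries:

    def noise_band(db):
        # Map a dB reading to the three-band colour encoding.
        if db < 50:
            return 'green'  # low or no noise
        if db < 70:
            return 'blue'   # medium / normal noise
        return 'red'        # high / too much noise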

Two HCI students were asked their opinions of our prototypes of the Yesterday View; they agreed with the HCI experts that the visualisation of the raw data is the most faithful to reality, so we decided to avoid any post-processing of the data and progress to the next iteration with the visualisation in Figure 6.

Another key decision in this iteration was to reject the initial idea of changing the sharpness of the line in the graph depending on the noise threshold: the implementation of this feature exceeded the amount of work justifiable for such an early prototype, and the use of different colours already had the same informative value. At the same time, we built upon the ideation of the Summary View. As an honest attempt to raise user awareness of how much time during the day they spent on noisy activities, we ideated a pie chart to represent visually the percentage of time during which they generate noise. We also suggested splitting this noise breakdown into three categories: time sleeping, time at work, and time at home. By analysing this overview one could reflect on a bigger picture of DIY noise (Figure 8).
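A sketch of this breakdown in Python, assuming one reading per minute and illustrative schedule boundaries (in the real system the schedule is entered by the user, as described in Part II):

    def noisy_share(readings, threshold=65.0):
        # Fraction of one-minute dB readings above a noise threshold.
        noisy = sum(1 for db in readings if db > threshold)
        return noisy / len(readings) if readings else 0.0

    def breakdown(day_readings, schedule, threshold=65.0):
        # `schedule` maps a category name to a (start_minute, end_minute)
        # slice of the 1,440-minute day.
        return {name: noisy_share(day_readings[start:end], threshold)
                for name, (start, end) in schedule.items()}

    # e.g. sleeping 00:00-08:00, at work 09:00-17:00, at home afterwards
    schedule = {'sleeping': (0, 480), 'at work': (540, 1020),
                'at home': (1020, 1440)}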

3rd ITERATION - Final design
Adding annotations to the visualization
Annotations have been used in other studies to highlight episodes in a data set [13]. By examining several examples of our sound datasets and how they looked once plotted with our visualisation, we were able to pinpoint what defines an "event" to be highlighted. We have talked before about the continuous character of sound: blunt spikes in the data can be caused by sudden noises which do not persist and might be difficult to remember because they are not prolonged in time. On the other hand, when we see a concentration of loud sound data recorded over a period of time, it may be easier to remember that a particular noisy activity happened and generated noise for a while. This was in line with our initial aim to allow users to reflect on the times of day when they make noise and also to identify patterns in their noisy activities. In this iteration we put the focus on adding annotations to our prototype.

Figure 8. Sketch of the Summary View (2nd iteration)

Figure 9. High fidelity prototype of the Yesterday View. (Displaying real data)

To quickly prototype the feature, we used the library d3-annotation, which is built using D3. The algorithm supporting this feature iteratively searches the data for high-noise occurrences (in the red area) separated by less than 20 minutes, an arbitrary value (Figure 9). The annotation surrounds the area we want to highlight and labels it with the times of the first and last loud noise occurrences and an average of the highest readings within that period. Annotations sometimes collide with each other and may be difficult to read; we agreed that collision detection was a refinement beyond the scope of this project, so we opted for allowing users to manually rearrange annotations by touching them. Finally, we added labels next to the axes and a legend to ensure users understand the colour encoding and the reflection task is not affected [50].
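The grouping logic can be sketched in Python as follows; the red-band threshold is an illustrative assumption, while the 20-minute gap is the arbitrary value mentioned above:

    def annotation_spans(readings, threshold=75.0, gap=20):
        # Group loud readings (above `threshold`, i.e. in the red band)
        # that occur less than `gap` minutes apart; each resulting span
        # becomes one annotation on the Yesterday View. `readings` holds
        # one dB value per minute of the day.
        spans, start, last = [], None, None
        for minute, db in enumerate(readings):
            if db <= threshold:
                continue
            if last is not None and minute - last < gap:
                last = minute  # extend the current span
            else:
                if start is not None:
                    spans.append((start, last))
                start = last = minute  # open a new span
        if start is not None:
            spans.append((start, last))
        return spans  # [(first_minute, last_minute), ...]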

A high fidelity prototype of the Summary View was also produced. The work we did in the previous iteration to smooth the data of the Yesterday View came in handy to plot the seven-day breakdown using the concentric rings. The data of previous days was simplified using a peak function; each of the rings is a pie chart with a cut-out in the middle whose radius is adjusted automatically depending on how far back in time the referenced day is. The right-hand side of this view is taken up by a normal pie chart, whose implementation is straightforward with the D3 library. We decided to present this summarised data in the form of an infographic, a popular way of presenting information visually and making data sets coherent (Figure 20).

PART II - IOT AND IN THE WILD
Some of the design decisions in this section relate to the design of the visualisation. However, since sensors are needed for the realisation of this project, the IoT system is a key topic and deserves its own part. Here we document the technicalities of our IoT system's construction and the interaction design we ideated to support its installation, configuration, use, and evaluation.

Hardware
The hardware to support an IoT system for audio sensing has to be reliable. The minimum viable hardware is a sensor node, composed of a microphone attached to a microcontroller to process the data, and a base station that receives the data sent from the node (Figure 10).

Figure 10. Left: Computing units of our system. Right: Sketches for sensor nodes (Microphone + Microcontroller)

Microcontrollers and communications
Microcontrollers are essential within the IoT ecosystem; they are more or less affordable depending on the model, small enough to fit into a multitude of projects, and their performance and energy efficiency improve with each new version released. We were familiar with the Adafruit Feather platform family from other ongoing IoT in-the-wild studies. These development boards come in many flavours, and certain models include communication chips (WiFi, Bluetooth or radio) to allow for wireless networking. An external LiPo battery can also be attached to the purpose-built connector on the board. The two models we experimented with were the Adafruit Feather 32u4 [24] and the Adafruit Feather M0 [25]; both come fitted with an RFM69HCW 433MHz chip [16]. The 32u4 Feather offers an ATmega32u4 chip clocked at 8MHz, which seems adequate for most portable projects. At its core, the M0 Feather hosts an ATSAMD21 Cortex M0 chip, the same as included in the Arduino Zero [28]; it is a considerable departure from the ATmega32u4 series, running a much faster clock and including more RAM and Flash memory. Although a priori used for similar purposes, the 32u4 and M0 versions run on completely different architectures, so Adafruit makes a considerable effort to keep the most popular libraries compatible with both versions of the board.

Microphones
The task of audio sensing is not trivial; each commercially available microphone model offers features, or requires extra hardware, that determine its suitability for certain goals. The most appropriate audio hardware was chosen on the criterion of best suiting the objective of sensing environmental audio with the least additional work possible. During the background research stage, we contemplated wearable devices versus fixed-to-objects devices to fulfil our needs. Energy expenditure was an important factor to take into account, as was the communication between devices.

As mentioned previously, audio has a continuous magnitude which can go up and down very quickly in a short period of time, and registering these changes is very valuable to us. Dragging furniture or a sudden slam of a door, for example, are DIY noise events that can last only a few seconds or fractions of a second and would certainly disturb some people. From the energy efficiency point of view, a device continuously reading input from the environment is not battery friendly. There is an important trade-off in this scenario and we had to consider our options carefully.

Attaching a microphone to a wearable device was discussed at one point, but we reasoned that this could be a burden for users who may already wear another device or simply not like the idea of this kind of technology. In addition, the source of DIY noise might not always be close to whoever triggered it (e.g. a music player turned on in a distant room). This, and the fact that the size of the battery could not be estimated, forced us to abandon the idea at an early stage.

There is a myriad of microphones available commercially. Our IoT system had to sense audio in each individual room of a given household; to choose the most suitable microphone, it was important to learn about the performance of individual hardware by putting it to the test. We were advised by an audio engineer about two types of microphone manufactured by Adafruit: an analog microphone [27] and its digital counterpart [26]. For the first prototype, we ordered both versions and tested each of them paired with a microcontroller on a breadboard; both models need to be calibrated against an external Sound Pressure Level meter before they give accurate results in dB.

The MEMS analog microphone works by detecting sound and converting it to voltage; wiring it to a microcontroller is easy because it only needs connecting to an analog input, GND, and voltage pins, with the audio waveform obtained later in software. This kind of microphone did not perform well in our initial tests when paired with the ATmega32u4 microcontroller: out of the box, the source of sound needed to be very close to the microphone, and it seemed necessary either to attach an external amplifier to the microphone or to build one with a combination of resistors and capacitors, a cheaper option but one which would need fine-tuning depending on the placement of the microphone and the physical layout of the room where the sensor would rest.

Our experiments with the MEMS digital microphone led to more fruitful results. These microphones nullify the noise that can seep in with the analog version and convert sound readings to digital values directly. The wiring requirements add three more connections to the microcontroller, for driving the data through the I2S bus, which is exclusive to certain kinds of modern microcontroller. While the analog microphone is compatible with both versions of the Adafruit Feather we used for prototyping, the digital version is only compatible with the Cortex series, as the 32u4 lacks I2S circuitry. This microphone proved more sensitive and was able to detect small variations in sound from sources at the opposite side of the lab. Therefore, the combination of digital microphone and Feather M0 was chosen for the final prototype.

Communications
A possible network layout of our system is detailed in Figure 11. The sensor nodes (microcontroller plus microphone) are placed in the rooms of the household which are not shared; a base station connected to the internet sits somewhere in a shared room (e.g. living room, kitchen, lobby, ...). The communication between the sensor nodes and the base station happens wirelessly through radio signals. Finally, each member with a sensor node installed in their room would use a tablet device to load the data from the base station.

Figure 11. An imaginary floorplan of a flat, domestic noise IoT system layout and hardware requirements

Both versions of the development boards mentioned before boast the same radio chip for wireless communications. These chips offer a working range of approximately 350 metres, depending on obstruction, working frequency, antenna power, and power output. Walls considerably hinder the range over which these chips can reliably operate, so obstacles need to be taken into account. The pay-off is in the energy efficiency of the radio modules: they can send information using very little power. The data carried over the air is packetized, and these packets can contain only up to 65 bytes of information, so audio streaming is not an option; any necessary audio processing has to be done on the sensor node's microcontroller. On the receiver side we find a Raspberry Pi 3 Model B with a stand-alone RFM69HCW chip of the same model, wired as described at https://github.com/etrombly/RFM69.

Orchestrating traffic in a radio network was not easy. First, receiving packets is the bottleneck when many sensors try to send packets to the same receiver. Second, sending a packet each time the microphone reads a value is neither efficient nor wise, because sending and acknowledging a packet takes almost a second and there is a risk of flooding the network. Decibels had been chosen as the audio magnitude to represent in our visualisation, so we added the calculation of dB (Equation 1, described in "Understanding sound and sound data") to the code running on the sensor nodes. We reduced network traffic considerably by communicating to the base station only the highest dB reading in each one-minute period, then listening again. We think this was a good solution with a minimal trade-off for the purpose of the project: it gave us great control over the communications, and sudden noises would still be picked up, with the caveat that we cannot specify the duration of a sound below the one-minute mark.
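The per-minute reporting logic is simple; the sketch below expresses it in Python for clarity, although the actual firmware runs on the Feather M0 (read_db and send_packet are hypothetical stand-ins for the microphone sampling and radio code):

    import time

    def sensor_node_loop(read_db, send_packet, period=60):
        # One small radio packet per `period` seconds, carrying only the
        # loudest dB reading observed, instead of one packet per read.
        while True:
            deadline = time.monotonic() + period
            loudest = float('-inf')
            while time.monotonic() < deadline:
                loudest = max(loudest, read_db())  # continuous listening
            send_packet(loudest)  # a single reading, well under 65 bytes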

Powering
In IoT projects, energy is always a scarce resource to be handled with care; minimising the energy spent on processes like communication and computing is a primary constraint, and the need to occasionally change the batteries of sensor nodes is a big obstacle to the widespread development of IoT [21]. This is why we wanted to excel at minimising the energy consumption of the sensor nodes without sacrificing the computing resources whose loss could mean losing audio data important for our purpose.

We did not achieve good results in this area. To be able to capture sudden noises, the microcontroller must work continuously. Experience told us that microcontrollers can be put to sleep for small fractions of a second, woken up, and set listening through the microphone. Adafruit's library for sleeping and waking these Feathers (https://github.com/adafruit/Adafruit_SleepyDog) worked fine for the 32u4 boards but, in our case, we found the M0 Feathers stopped waking up after some random time operating on sleep/wake cycles. A lot of effort was put into making this feature work and fine-tuning it to maximise battery life, but we were not able to ascertain why this was happening; we tried other libraries, to no avail. The best result we got, using a standard 2000mAh LiPo, was a little more than one full day for a single sensor node; clearly not ideal. In the end, we opted for plugging the sensor nodes directly into the mains for the length of the study, as running the project on batteries was not viable for now.

Hardware - Prototype
Enclosure
We designed a box as the enclosure to accommodate all the hardware for the sensor nodes: the Feather board, the digital MEMS microphone, a pull-down button, a 2000mAh LiPo battery, and two LEDs. The first sketches for the enclosure can be seen in Figure X. For the fabrication of the enclosure, we laser-cut a 3mm acrylic sheet; the box comprises ten layers of acrylic conveniently cut in the centre to make room for all the electronics inside; no particular reason drove us to choose the orange colour for the acrylic. To ensure none of the layers moved, we joined them using 4 screws with 4 nuts built into the last layer (Figure 12). The Feather is screwed to two layers of the box so it stays still while connected to the mains. On one side of the box there is a hole for the 16cm radio antenna and for connecting the miniUSB charging port to the exterior; four holes at the front of the box are cut to measure, to allow the microphone to sense environmental audio and to make room for the LED lights and button actuators. For the in-the-wild evaluation of our prototype we fabricated a total of 3 enclosures.

Figure 12. Result of the digital fabrication of the sensor nodes.

Software
Interaction Design
There are some practical issues in the application of any interaction design project based on UCD principles [43]; some of these are:

• Who are the users? Our users are those who live in shared accommodation, whether owners or tenants of the property. As stakeholders we can also include those who own a property but don't necessarily live in it; they could benefit to some extent from the installation of our system in their properties.

• What are their requirements? In Part I we specified some of the user requirements in relation to the data visualisation we designed. The user requirements for our IoT system, which supports the visualisation, depart from these user needs and point towards usability and user experience goals and functional requirements. Some were found during the data gathering phase while others appeared during brainstorming sessions. To summarise this new set of requirements:

– Design a system as usable as possible to support our data visualisation.

– To raise user awareness of when they are undergoing noisy activities.

– Design a system that can be installed at home and works out of the box.

The prototype of our IoT system is heavy on interaction design: we envisioned a system the user would be able to manage and configure so that it works in as unsupervised a way as possible. Both the IoT nature of our technological intervention and the subsequent in-the-wild evaluation of its prototype were taken into account during the design of the interaction.

First, we defined the dimensions of our problem, derived from the elicitation of user requirements:

• subjective perception of what noise is: How can we support users in defining the amount of sound intensity they individually consider to be noise?

• reflection on personal noise generated: How do we design our software to integrate the data visualisations from Part I?

• environmental obstacles of the setting hosting the technical intervention: Is it possible to guide users through a hardware installation at home that is heavily dependent on wireless communications?

• privacy concerns: How do we empower users to decide who they want to share their data with?

• hardware to support our data visualisation: Do we need extra hardware to support the system?

We designed a web application for tablet devices to address all of the above.

Since the first stages of the project's ideation, tablet devices were regarded as the most suitable way to visualise the data sensed: tablets were available from other studies, they are fit for purpose, and nowadays almost everyone is familiar with touch interfaces. The number of tablets needed for the study was later set at one per participant, both to address privacy concerns about sharing the data and to simplify access to personal data. The suggestion that this could be a system users borrow from some authority or friends and install for an undetermined period of time to learn about DIY noise, together with the fact that our IoT system works in any modern browser, motivated us to design for one tablet per user.

Web Implementation
The evaluation of ubiquitous technology can be aided with web app technologies to display the data sensed in the IoT network [13, 12]. This removes the need to build displays specifically designed for each study and increases the chances of successful user adoption [44]. Reuse of code is generally accepted and encouraged in every formal approach to software engineering; thus, it was logical to reuse and adapt the back-end code previously used in the Temperature Calendar project to support the data persistence layer for this project. This layer is built on Django, a popular Python web framework, which allows developers to operate on a typical SQL database with a considerable level of abstraction and facilitates interoperability through REST APIs.
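As an illustration, the persistence layer boils down to Django models along the lines of the sketch below; the names are assumptions, not the actual schema reused from the Temperature Calendar code:

    from django.db import models

    class SoundReading(models.Model):
        # One per-minute maximum dB value reported by a sensor node.
        sensor_name = models.CharField(max_length=64)  # set during setup
        decibels = models.FloatField()
        recorded_at = models.DateTimeField(auto_now_add=True)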

Our development server runs on a Raspberry Pi, which also takes care of receiving the data from the sensor nodes via radio by running a Python process in the background. Said process directs sensor data to the development server through HTTP requests. Our web application connects to the development server through the local network, and the data retrieved is displayed to the user as a normal website. Setting up local environments is the most popular way of developing web technologies: a local environment is an enclosed and controlled space which mimics how the application will behave when deployed on the internet, once it is ready for production.
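That background process is essentially a radio-to-HTTP bridge. A minimal Python sketch follows, in which the endpoint URL and payload fields are hypothetical and the radio decoding is abstracted behind a receive_packet function:

    import requests

    API_URL = 'http://localhost:8000/api/readings/'  # hypothetical endpoint

    def forward(packet):
        # Relay one decoded radio packet to the Django development server.
        payload = {'sensor_name': packet['node_id'],
                   'decibels': packet['max_db']}
        response = requests.post(API_URL, json=payload, timeout=5)
        response.raise_for_status()  # surface server-side errors

    # Main loop: block on the RFM69 radio, then forward each packet.
    # while True:
    #     forward(receive_packet())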

Although software development always finds a way to fulfil a goal, being well versed in how software supports the underlying technology of this project helped greatly in shaping the interaction design of our prototype. We think it is important to remark on this as a way to encourage HCI practitioners and UX designers to bear in mind software principles and technological barriers when prototyping and informing their designs.

To better understand the dimensions of the problem, we sketched some scenarios depicting situations based on users' past experiences of complaining (or receiving complaints) about DIY noise. The user journey in Figure 1 describes the process of installing and setting up our system as it stood when it was deemed ready for the field study; it is the result of several refinements of the original idea and of the use cases described during the design of the user interaction.

The next logical step was to give some thought to the software which would guide the interaction with the system. In the next section we document the iterative design process of prototyping and refining the software artifacts.

Prototyping
Designs emerge iteratively from design-evaluation-redesign cycles [43]. We started by sketching some wireframes for the navigation of the app on paper. At first we considered things like a login screen with a keypad for entering a personal PIN, so that having only one tablet per household would be an option; requiring participants to remember PIN codes was eliminated from the design later on. Other design features present in our early prototypes did not make the cut either: setting up a user's noise tolerance threshold using a semantic form, or being able to see other household members' data by default, were abandoned during our expert review sessions.

After a couple of expert reviews of prototypes sketched on paper, we proceeded to refine our prototype using Adobe XD, a professional tool for creating wireframes; this prototyping tool allows easy sharing of designs across stakeholders for review. Some design requirements were introduced later in the prototyping process as the functionalities were further specified.

Some key design decisions and features of the final prototype took form during the prototyping stage; in the end we designed a set of steps that guide the installation of the system. Below, we summarise how the user is guided through setting up the IoT system:

Figure 13. Synchronise sensor screens.

• Synchronise sensor: When a new user is detected in the system, the application instructs them to plug the sensor into the mains and touch the button on the sensor node. This starts the synchronisation process, which associates a user with a sensor (Figure 13).

• Name the sensor: The user names the sensor so it can be identified in the network (Figure 14).

Figure 14. Give a name to the sensor screen.

• Place the sensor: To ensure adequate communication between the sensor nodes and the base station, the user is asked to find a resting place for the sensor node within their room while watching an indicator that shows whether the signal is strong enough. This is possible because RFM69HCW chips can determine the signal strength of each packet they receive from a node (Figure 15).

Figure 15. Place sensor screen.

• Establishing the noise threshold/Make some noise: As said before, the perception of noise is subjective, and this step was designed to capture the personal noise threshold. The user is invited to make some noise in the room: the kind of noise they think it would not be acceptable for any member of the household to make in a room. While the sensor measures the sound, a noise meter fills up and the decibel reading is displayed. The system then knows that when a sound above that dB level is detected in any of the other rooms, it could potentially be perceived as noise by this user (Figure 16).

Figure 16. Set the noise threshold.

• Set time schedule: This step requires the user to fill in a form with an approximation of the times when they are at home, go to sleep, and go to work, as well as their working days. As the user fills in the form, a timeline matching their input appears at the top of the screen (Figure 17). This information is used to calculate the overall percentage of noise registered that can be seen in the pie chart in the Summary View.

Figure 17. Set time schedule screen.

• Main Screen: This is the welcome screen each time the user visits the web application; from here users can visit both the Summary and Yesterday Views, as well as toggle their privacy settings. Personal noise data starts being displayed after a full day of data has been registered. In the event there is no data available (e.g. right after setting up a new sensor), the screen tells the user when the first noise report will be available to visualise (Figure 18).

• Yesterday View Screen: This is the screen which displays the detailed noise data and the annotations from yesterday (Figure 19).

Figure 18. Main screen.

Figure 19. Yesterday View screen.

• Summary View Screen: Here users can visualise noise data from previous days, simplified in the novel visualisation. On the right of this screen, the percentage of noise can be plotted in the pie chart against filters of time at home, time sleeping, and time at work (Figure 20).

Figure 20. Summary View screen.

• Other sensors: Provided other members of the household have chosen to share their personal data, users can access those members' Summary and Yesterday Views (Figure 21).

Figure 21. Other Sensors screen.

The last aspect of the interaction design we want to discuss relates to the red LED built into the sensor node enclosure, a requirement extracted during one of the brainstorming sessions with the experts and supported by the initial finding that people feel embarrassed when asked to make less noise. As sensor nodes continuously measure sound pressure and send each reading to the base station, it was easy to query the database to check whether the value received was well above the minimum noise tolerance threshold set by any of the members of the house, and to report back to the node. We therefore added a red indicator that flashes for a period of 5 seconds to raise the perpetrator's awareness of the noise.
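On the base station, the check amounts to comparing each incoming reading against the lowest threshold registered by any household member; a Python sketch follows, where the 5dB margin standing in for "well above" is an assumption:

    def should_flash(reading_db, thresholds, margin=5.0):
        # True when a reading is well above the lowest noise tolerance
        # threshold of any household member; the base station then
        # reports back so the node flashes its red LED for 5 seconds.
        return bool(thresholds) and reading_db > min(thresholds) + margin

    # e.g. members set thresholds of 62, 70 and 75 dB; 80 dB triggers it
    print(should_flash(80.0, [62.0, 70.0, 75.0]))  # -> True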

Usability
Usability is core theory in HCI, and previous research on usability and IoT systems has provided evidence that sustained engagement can be seriously affected when users have to deal with devices with usability problems [4]. To ensure our design was usable before progressing the prototype to the high-fidelity stage, we ran a series of usability inspections on our medium-fidelity prototypes, taking advantage of our expert knowledge of users and technology.

Our medium-fidelity prototype was checked against Jakob Nielsen's 10 heuristic rules for evaluation [38]. We asked two colleagues to inspect our interface individually; after comparing notes and aggregating the findings, we made some adjustments and refinements to the product:

• Visibility of system status: For some infrequent but major actions, such as the configuration of the sensor nodes, we made the system's response depend on indicators like the dB meter or the signal strength widget. Another decision was to add an icon indicating the sensor node is currently listening, but time constraints impeded its implementation.

• Match between system and real world: We reviewed all the text displayed to make sure the language used is clear and familiar to users.

• User control and freedom: It was decided the configuration process could be interrupted at any time; if not completed, the app would alert the user that the configuration was unfinished.

• Consistency and standards: We moved buttons with similar behaviour to the same position across screens.

• Error prevention: We added a time bar representing the schedule set by the user, so they could see visually if any overlap between time periods was made by mistake (e.g. time at work and time at home overlapping by one hour).

• Recognition rather than recall: We made sure instructions were always visible at each step of the configuration, so no recall from previous steps was needed. In the Summary View we also make yesterday's simplified data available, and we added labels to each of the rings on the visualisation to aid identification of the day each ring refers to.

Figure 22. Refinements in the web application after running usability tests.

• Flexibility and efficiency of use: Quick links between the Yesterday and Summary Views were added to avoid going through the home screen; we assessed that expert users of our application would greatly benefit from these shortcuts.

• Aesthetic and minimalist design: We avoided excessive clutter across the web app and removed buttons for infrequent actions from the home screen.

• Help users recognize, diagnose, and recover from errors: On this front we did not have time to implement some of the recommendations raised, such as designing custom 404 Not Found error screens, or custom screens apologising for being a prototype and encouraging users to contact the researchers.

• Help and documentation: We reshaped the synchronising process into a wizard widget and added information bubbles in places such as next to the button for sharing personal data.
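Since the error-prevention item above hinges on detecting overlapping periods, here is a minimal sketch of the check behind the time bar (illustrative JavaScript, not the project's actual code); it treats each period as a same-day interval and ignores periods that wrap past midnight, which would need splitting first.

function toMinutes(hhmm) {
  // '09:30' -> 570 minutes since midnight
  const [h, m] = hhmm.split(':').map(Number);
  return h * 60 + m;
}

function periodsOverlap(a, b) {
  // Two half-open intervals [start, end) overlap when each one
  // starts before the other one ends.
  return toMinutes(a.start) < toMinutes(b.end) &&
         toMinutes(b.start) < toMinutes(a.end);
}

// e.g. time at work and time at home overlapping by one hour:
const work = { start: '09:00', end: '18:00' };
const home = { start: '17:00', end: '23:00' };
console.log(periodsOverlap(work, home)); // true -> highlight on the time bar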

Finally, we broke the configuration process into tasks and looked for usability problems, to ensure the visible actions were sufficient, correct, and easily associated with the expected outcomes, thus evaluating our design for ease of learning [43]. Figure 22 shows how some wireframes changed after our heuristic evaluation.

High-Fidelity Prototype
Some HCI studies, given their nature, cannot be evaluated with users during the early stages of design prototyping. Some involve the invention of a new technology; in other cases the technology needs to be used for an extended period of time, and it needs to work, before being evaluated. IoT studies fall in the latter category: these systems call for evaluation of prototypes in the wild, where they will actually be used and integrated into people's lives [9].

We progressed from the medium-fidelity wireframes built using Adobe XD towards a high-fidelity prototype. This software is still in its infancy and does not offer the possibility of automatically generating HTML and CSS code equivalent to what has been designed in the application; although we see potential in this tool, as it is very easy to use and its results are visually impressive, the lack of this feature will surely slow

down its adoption rate. This hindered the process of high-fidelity prototyping: instead of moving our designs into more capable software like Axure, we opted to load the raw SVG files exported from the Adobe software into HTML documents, carefully manipulate the DOM elements, and add the relevant JavaScript code to support the execution flow of the interaction design. We decided the minimum viable high-fidelity prototype would have to support the whole configuration process and the display of the visualisations. We used popular prototyping techniques such as image maps to mimic buttons and other clickable areas; in an eventual final product this would not be acceptable, as it would result in a system very difficult to maintain. Time constraints also led us to leave aside features like dynamic animations while the system performs background processing, features users would surely miss as they are considered standard nowadays.
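To make the image-map technique concrete, the sketch below shows how a prototype of this kind can be wired together (the ids, file names, and coordinates are hypothetical, not taken from our actual prototype). Each exported SVG screen is shown as an image with invisible clickable areas laid over it, and a small script swaps which screen is visible:

// Assumed markup, one block per screen:
//   <div class="screen" id="home">
//     <img src="home.svg" usemap="#home-map">
//     <map name="home-map">
//       <area shape="rect" coords="20,400,180,460" data-target="summary">
//     </map>
//   </div>
function showScreen(id) {
  // Show the requested "screen" and hide every other one.
  document.querySelectorAll('.screen').forEach(function (screen) {
    screen.style.display = (screen.id === id) ? 'block' : 'none';
  });
}

document.querySelectorAll('area[data-target]').forEach(function (area) {
  area.addEventListener('click', function (event) {
    event.preventDefault();
    showScreen(area.dataset.target); // navigate to the mapped screen
  });
});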

PART III - EVALUATION

Study Design
The evaluation of IoT systems has to be planned carefully; some of these systems try to persuade users to look at real data collected over a period of time, and researchers will want to observe their intervention in a natural context of use [47]. Important evaluation results, such as user engagement with the system, the effectiveness of the visualisation, or the usability problems arising from the particular setting where the IoT system is deployed, cannot be obtained in the lab. This is why IoT systems call for field studies, through which researchers can obtain a more detailed picture of the performance of their technological intervention.

During in-situ studies, the researcher has little or no control over the environment in which the technology operates, and it is difficult to ascertain before the study which effects of using the device will be of interest [43]. There is a trade-off between preserving the natural environment and the quality of the data collected [22]. For example, to obtain insight during the evaluation of a display we might want users to check it daily; but we should be mindful of the impact that daily reminders have on the experience, as we might otherwise miss why the display is not engaging on its own.

As noted by Rogers et al. [43], prototyping in the wild, where different kinds of technologies are assembled in the lab and then tried out in real settings, is on the rise; but there are problems translating HCI theories into practice, so they argue for letting interaction design in the wild continue as an applied practice.

Before the start of the in-situ study we conducted a pilot study at the researcher's house to ensure the system worked as planned once taken into the wild; in this kind of study it is frequently difficult to anticipate how events will unfold [43], so we wanted to be as prepared as possible for unforeseen eventualities. The pilot helped greatly in designing the semi-structured exit interviews for the end of the study, a script to follow when the researcher visited the real sites, and a bullet-point list of interaction stages against which the researcher could note particular

usability issues. Also, from a strictly technical perspective, a few steps are required to enable the system in its prototype state outside the lab, and the pilot study helped predict possible technical issues.

The interview questions touched on participants' thoughts about the configuration process (Figure 23), their cognitive processing of the visualisations, and the information they learned from the noise data.

Participants
Participants were recruited through personal connections; the eligibility criteria required adults living in rented or owned shared accommodation with a wireless internet connection, and with a room of their own not shared with anyone else. A total of 5 participants were selected (professionals aged between 30 and 41), 3 of them living in a 3-bedroom flat and the other 2 in a 2-bedroom flat, both flats located in East London. All were familiar with tablet devices and described themselves as comfortable with modern technologies; none works in the HCI field. Participants were not paid for their time and took part in the study as a gesture of goodwill; they all signed consent forms outlining the study details.

Procedure
The study was scheduled to run for two consecutive weeks: in the first week the system was in use at the 2-bedroom flat, and we then moved it into the 3-bedroom flat for another week. The study could not run in parallel because we did not have time to fabricate enough sensor nodes; only 3 were ready to install. Time constraints also prevented 5 other eligible users from participating in the study. Each participant was given a sensor node and an Android 4.4.1 tablet running the Google Chrome browser, plus all the wires necessary to power the devices. A shortcut icon to the webapp was added to the icon area, and only this icon was visible. The base station was installed in a common area, which in both cases was the living room, and the sensor nodes were placed in the participants' rooms.

Before the study, participants were sent an online form with questions about their past experiences with noise. All participants claimed to have been bothered by someone else's noise during the past year, and 4 out of 5 had complained directly to the offender; all participants also reported having received complaints about their own noise at some point in their lives, and having fixed the problem immediately after receiving the complaint.

The researcher agreed with the participants in each house a suitable starting time for the study, when all of them would be present. Once at the house, the researcher explained the study and the activities participants would be asked to perform; all the hardware was shown and described, including instructions for its use (Figure 1). One participant was given the base station and instructed to find a place for it in a common area, plug it into a wall socket, and connect it to the internet. Because this is a prototype, with the web server running locally, the researcher had to note down the local IP address the WiFi router had assigned to the Raspberry Pi to ensure the

Figure 23. Interaction flow diagram for the user interface. Main application and Configuration Process

tablets could reach the web application once connected to the network. In individual turns, the researcher gave each participant a tablet and instructed them to tap the icon, which started the configuration process of the system. From here, the researcher let the participant follow the on-screen instructions, only intervening when the participant asked for help. Notes on participant remarks, and others based on the researcher's observations, were taken systematically. Participants were asked to check the application and their personal noise data once daily from when the first report became available.

After all participants had gone through the process, the researcher left the house for a week, coming back only once during each study because of technical difficulties. During the first study, two of the tablets stopped working and lost all communication with the router when connected to the same network at the same time; we opted not to debug the problem and instead replaced the tablets with newer models so the study could progress. Three days into the second study, the base station was disconnected from the mains by the landlord by mistake; the participants alerted the researcher that the app had stopped working, and the researcher visited the house to fix the issue. As a result, we lost noise data for half a day.

Interview Questions
Lam et al. summarised the possible scenarios for evaluation in information visualisation into seven categories [33]. This helped us greatly in formulating our evaluation goals and designing a set of questions for our evaluation method. The scenario that best describes our study is Evaluating Visual Data Analysis and Reasoning (VDAR). Lam et al. defined VDAR evaluation as studying "how a visualization tool supports the generation of actionable and relevant knowledge in a domain"; it relates to how a visualisation as a whole supports users' analytic processes. VDAR is often associated with in-the-wild studies, as the data and the actionable knowledge derived from the visualisation are highly context-sensitive. They propose a set of four questions the evaluation of the data visualisation needs to answer:

• Data exploration: Are our participants able to seek, extract, read, and search information easily?

• Knowledge discovery: How effective are our visualisations in supporting participants in acquiring knowledge from the data?

• Hypothesis generation: Are our visualisations capable of giving participants the tools to build knowledge?

• Decision making: How well do they support the communication of analysis results?

Results
Interview Results
Around 100 minutes of recordings were systematically transcribed and analysed by the researcher, who carried out a top-down approach to thematic analysis so as to keep the evaluation focused on the research question [6]. The analysis concentrated on identifying finer patterns within the common themes relevant to the study's purpose. We structured our qualitative results into five main themes: three corresponding to participants' use and understanding of our visualisations of noise data (Yesterday View, Summary View, and Overall Chart), and two overarching themes covering their opinions on the personal noise threshold and their overall engagement with the system.

Yesterday View
Data Exploration: All participants easily found information with this visualisation; when asked about the current snapshot of the data, they were all able to navigate through it and report accurately on what they were looking at. Participants were able to identify noise occurrences at first glance.

When the peaks go red between 6 and 9, and then 8 and 9... I don't remember what was happening at that point, initially waking up and chatting, and this bit, yes! exercise in the morning (P1)

One participant remembered reading, at some point during the study, contradictory noise information that lasted for a while:

There was some noise when I was not in the flat. There were high peaks in red when no one was in the flat (...) It was for a few minutes I think, it was not very long. But it could be something from outside. (P2)

Another participant reported how she extracted information from the red area of the graph:

It's more interesting to see yesterday because you can assimilate the red parts with an activity you have in mind. (P5)

Participant 3 suggested a new feature so the data could be explored at a finer level of granularity than the default:

I'd like to have more breakdown in the hours of the graph, if I have to be picky. Of course I know I was sleeping from 3am to 5am. (P3)

Knowledge discovery: The graph does well at flagging noise levels, which participants acknowledged quickly.

I looked at it, and see wow there's a lot of noise at that time and I realised the time and I could remember exactly what was happening at the time. (P1)

The detailed day view presents noise information in a way that allows users to explain it. This participant realised that the noise reported on the graph must not have been his own:

See! Most of the noise was made in the morning and 8:45 and, through the day, it was basically noise from the street because it's all blue. (P4)

This participant discovered the consistency of the noise report with his daily activities:

Well, there's a peak at the beginning. I am surprised but I'm usually very quiet but it's true, it is the alarm, so every morning it is around the same time. (P3)

In this case, although the participant did not link the noise to himself, he got reassurance that something noticeably loud had happened:

But it had to be very loud to make it into the red decibel area of the graph. I reckon it must have been something quite loud. (P2)

The last participant understood that checking the visualisation with data from a non-workday would be more informative about her personal noise:

I know what I'm doing in the morning because everyday you do the same thing, but it would be more interesting to see what happens on the weekend because you have more erratic schedule. (P5)

Hypothesis generation: This participant referred to the annotations encapsulating periods of noise as necessary for a full understanding of the data:

Yeah I guess so, well it shows you at the peak of the noise and it shows you how long that sound lasted for (...) without the times it would not make sense whatsoever. (P1)

These two participants also used the annotations to infer that high noise was the result of them getting ready in the morning:

I don't have only one alarm clock, I have several, so maybe that's why it's showing the 2 minutes label, because it takes a while to wake me up. (P3)

I made some noise around 8:40 for 3 minutes, and I think this has been recurrent every single day. When I was drying my hair. (P4)

This participant, however, could not easily point to what defines something as noise in the system, an important concept for fully articulating our technological intervention:

I find it informative but it was hard to know what was a certain length of decibels. What was the certain amount of decibels? So, for example, if it's red how loud that actually is? Does it make sense? It could be red when you drop something on the floor and it makes a noise, or what is the mark? the setup when you scream? (P2)

Decision making: All participants mentioned that it was easy to remember events when looking at this visualisation, and that the fact it displays recent data (24 hours) makes the information readily accessible and reportable. For all participants, colour was the visual feature with the most information value. Two participants mentioned that the dB figure in the annotations added no information value for them compared to the other visual features.

The colors, it makes it very clear; I don't even know if you need the decibels, well maybe but for me that you have both the colors and the times. The threshold does not need to show because, after setting the threshold in the configuration, it's kind of almost unnecessary extra information, that's my main critic you don't need to fill all the information, you need to simplify a bit (P1)

yes, but more than a number, it's the graphic itself, numbers are fine but comparing each peak and also the colour helped me more rather than reading a number to be honest. (P3)

Figure 24. Screenshot - Summary View displaying real noise data

Summary View
Three participants were confused by this visualisation. They did not engage with it at all and reported not understanding it well enough to extract meaning from it. The concept of data for the past 7 days displayed as

a 24-hour clock with labels representing midnight and midday was not completely understood. The main problem they reported was that the visualisation did not look like a clock, as it lacks labels for hours other than 00:00 and 12:00 (see Figure 24).

No, because I don't fully understand that one if I'm honest, it looks like a really funky graphic but I'm not entirely sure how I would interpret the data there. I understand that the colors are the same of the colours of the graph, so green periods match when there was not noise, and blue is when is only regular, that's all (P1)

I find that view quite confusing, because I could see it was the days but, to me, how do you know what it was at what time? To me it was just kind of a messy pie chart, it was just a circle with lots of blue, green, and red bits. The only information I'd get from that... Say if it was all red it would tell me I was very noisy, but it would not allow me to pinpoint you anything, because that's more with the day view (P2)

I feel that all my sound is consistent in time, that all my noise aligns here which is morning I think, no it's the morning, I'm not sure, it's this night? (...) because this could be midnight or midday or the other way, it says 12 and 00, but I forgot this is like a clock (P5)

Data Exploration: Two participants were able to look at the data and navigate it to try to make sense of it.

Yes, I can see where the red peaks are concentrated, all of them are like in the same area, and also you can compare between rings. (P2)

Most of the noise is made in the evening, at night 10 o'clock, no noise at night because here is almost empty. (P4)

Knowledge Discovery: The same two participants built a picture of the times of day when their noise data appeared in red most frequently during the past week:

So anytime from 9 to midnight it is when I'm most annoying, around 9am I go to the toilet I guess. (P4)

It was interesting to know the noise I made which is considered loud during the day and I know that I'm mostly loud in the evening and in the morning (P2)

Hypothesis Generation: The same participants were able to see patterns in the data that led them towards a bigger picture of their personal noise:

I’d say my room is quite noisy during the day. (P3)

...some (noise) in the morning, and the rest of the day it is basically noise from the street because my bed is next to the window and the sensor too. (P4)

Decision making: Participant 4 made an interesting observation about the Summary View. As the rings representing past days become smaller and smaller, the data becomes more cluttered and a noticeable occlusion effect prevents the data from being correctly evaluated. This could

increase the time users need to make sense of the visualisation, as it impacts readability.

To be honest, this is confusing, it's a little bit confusing I would say. I cannot see well the inner circles and it was quite long ago so I cannot remember anything there. (P4)

Overall Chart
The pie chart representing each participant's overall percentage of noise was very well received. It allowed participants to assess, with a fair amount of conviction, whether or not they were noisy individuals.

Data Exploration: Participants found the graph intuitive, as the pie chart is a visual representation they were already familiar with. None of them had problems navigating the different filters: time sleeping, time at work, and time at home. Also, maintaining the same colour scheme across visualisations helped greatly in discriminating between low noise, medium noise, and noise.

I can see the 3 times put together, the percentage in a graph that probably interests more, those three buttons, I can select which times I want to see, if I'm at work, if I'm at home or if I'm sleeping. (P3)

Very easy to understand, pie charts are very simple to read. 7% of day when there was too much noise, then the blue is when you are acceptable levels, and green when you are not in the flat at all. (P1)

I thought that was really informative, the pie chart was really interesting. Because you can see how noisy you are overall. If you have 2% red, you know, you'd think OK I'm a little bit too noisy but it's only a 2%, so it's not too bad, you know. Most of it was green. (P2)

Knowledge Discovery: Simple as a pie chart may be, this one became full of meaning for participants. They were surprised when checking the overall accumulated figures at the end of the study:

I'm a very well behaved flatmate, because for me 3% of noise is totally cool. (P1)

It shows that I'm very quiet, I don't disturb my neighbours. (P4)

That's only at home right? most of the time I'm ok because it's blue, but 5% of the time it is red so that's bad. and that's it (P3)

Hypothesis Generation: The pie chart reinforced participants' preconceived ideas about themselves when it comes to personal noise:

It's tiny, I'm a very quiet person. I've been told that before, it does not come new. I'm very self contained unless I'm drunk (P4)

I know for sure I'm very quiet but I thought somehow I would be noisier, meaning I don't know sometimes you are watching something (TV)... but no no, it's good if it says I'm quiet it's because I am (P3)

I would not say that because when you check at home it seems there's a lot of noise but I cannot be noisy when I'm asleep. It's because I sleep with the window open (P5)

This last participant tried to build a theory about their noise results, as they saw almost no green on the chart for the time they were at home:

I think that affects, because when I close the window you cannot hear anything and I always leave the window open. Also, I like to use my hairdryer to keep me warm, or maybe there's something blowing into the room so that impacts. (P5)

The sense-making process was enhanced when insights from the Summary View and the pie chart were combined:

It's only 5% of the time, OK I'm not green but I'm a human being, it's acceptable so 95% of the time my noise is acceptable so I would not define myself as a noisy person I'd say I'm noisy at certain times like when I prepare myself, also in the mornings when you are in a rush and stuff like that. (P5)

Decision Making: This participant, who engaged the most with the system, used our IoT system in a way we did not anticipate when designing the prototype. Sleep quality is important to them, and they tried to make sense of their snoring at night by comparing the percentage of noise in the blue zone with the Summary View:

But specially, from the two charts, I was really interested about the pie chart. Because it's true that I don't necessarily need to know at what time I was snoring. So that's fine. But it really helps me to understand how much noise I do, or the noise in the room during the night. For example if I'm sleeping, let say... I believe it was from 10 to 6, something like that. So the pie chart tells me that I'm noisy 25% or 20% of the night that means that, well, I've been snoring or there's been noise for two hours or almost two hours. So, that's something that I did learn and I was also interested and looking forward to know this percentage, honestly. (P3)

Another participant pictured themselves in a past situation with the system installed, and how this visualisation could have changed the course of events:

I used to live in a flat in South London and I had a neighbour upstairs who was a council tenant with an alcohol addiction, and he used to make very very intense levels of noise at random times, sometimes 2am or 6am, (...) Sometimes, we had to call the police from Lewisham Council and they had an after-hours service, and by the time they came it was the next day, so there was no point in investigating it because there was no evidence of it. So, in my opinion, this device would be perfect because I would have been able to prove that level of noise was taking place above me, that would have really helped in that situation because (...) we did not have enough evidence that she was being a very antisocial neighbour, so

something like this would have been very useful in that situation. (P1)

Overall Engagement
Engagement with the system was a mixed bag. Three participants reported checking the application at least once a day, with one of them looking at the data several times a day and sometimes right after midnight, when they knew the next batch of data was already available. The general consensus was to check the application in the evening. The other two participants admitted they did not check the application every day, giving reasons such as being very busy with work or social life. One participant forgot the system was installed in the house until the researcher visited the property a second time, so our IoT system sometimes went unnoticed even though this participant had placed the sensor next to their bed; all participants said the sensor node was not intrusive, so we can argue that overlooking the system hinders engagement. The other participant who admitted not engaging much mentioned they would have liked the system to notify them when new data was available, or to receive the results directly on their phone.

I just forgot, or I think like it was the weekend, or you are not here at all. I think the fact that you have to go and check yourself and it's not self-reporting is an issue we can improve, because it could be more efficient like having a summary of the day popping up like a notification saying "you can download your summary", instead of you going to the app. (P5)

Noise Threshold
Only two participants, both living in the same flat, reported noticing the red LED on the sensor node blinking at some point; the feature might not have worked during the second field study. One participant reported not knowing the meaning of the flashing red light, while the other did not stop making noise, as they were home alone at the time. When asked their opinions about the noise threshold their flatmates had set for the study, three participants judged the threshold to be fair; curiously, the two participants, one from each household, who were known to be the noisiest argued the threshold was set too low.

I thought it was a bit too much, cause I did not really make excessive noise at any point, so if that was something that was installed in my house really, and it was set by my neighbour's threshold I think I will get really annoyed because I just think that was unreasonable level of noise that was expected to be my top level of noise. (P1)

I think it's fine, but the fact that they are in the green most of the time I guess for me it is too low because if we consider the green being the normal, like definitely for me the threshold would be too low but then I guess it all depends on the person. (P5)

Usability Problems from the Researcher's Observations
The main usability problems of our technology were identified by analysing the researcher's notes, which were taken systematically while participants used the system. The researcher observed the participants' activities and made specific notes for each of the main stages of the configuration process (right

hand side of Figure 23), and some more general ones about the observed reactions to the experience.

We divided the configuration process under evaluation as follows:

• Stage 1: Start the sensor node configuration.

• Stage 2: Place the sensor and monitor signal strength.

• Stage 3: Make some noise to set the noise tolerance threshold.

• Stage 4: Set a personal time schedule.

• Stage 5: Review the home screen.

None of the participants was able to follow the configuration process entirely unaided; all needed the researcher's help at some point. None of them interrupted the configuration process halfway through, by clicking the close button for example, and they all carried on until the Setup Complete screen.

Stage 1: All but one questioned whether the application was working correctly, because the icon in the middle of the screen was static. Two participants had to press the button on the sensor node repeatedly until the yellow LED started blinking. None of them had problems giving the sensor a name.

Stage 2: All participants were able to fix the sensor node to the wall using the Velcro strips attached to the device; 4 chose an area next to their beds and one next to their desk. 4 participants placed the sensor before looking at the signal strength indicator on the tablet, and 1 of them moved the sensor to a different place after seeing only 2 out of 5 dots filled. All participants asked the researcher whether the signal strength was good enough, even when the indicator was showing 4 out of 5 dots filled; signal strength never reached 5 full dots during the evaluation.

Stage 3: 4 participants shouted at the microphone and looked at the application straight away to check for an update on the screen; 1 participant chose to play some music and turned the speaker volume up and down while watching the tablet screen for changes. All of them wondered why the system was taking so long to refresh the volume metre and chose to progress to the next step, setting the noise threshold to 70 dB; only 2 participants saw the 110 dB screen at some point.

Stage 4: All participants quickly grasped how the input fields were populated and what information was being requested. All of them asked how they could tell the application when they were on holiday or away from home, for example because they were out with friends.

Stage 5: Once the configuration is finished, the web application returns to the main screen. None of the participants noticed the widget for toggling whether they wanted to share their noise data with other members of the household, even though they had been told before setting everything up that they would be able to do so. The researcher asked them their preference and reminded them they could set it from the home screen; once alerted, they were all able to scan the home

screen and choose a preference without any major problem. All participants but 1 chose to share their data at this point; by the end of the one-week field study all participants had toggled this feature on.

The main usability problems found, and those we should fix first in the next refinement, were:

• Share personal data: We considered this a feature users needed at least to see and decide about, but they overlooked it. The feature defaults to not sharing, but presenting it as an extra step before the end of the configuration is the way to go.

• Other: Two things escaped our tests. First, there is no way for users to know whether four dots is good enough reception for the system (it is, and so are 3 and sometimes 2); we had assumed the mental model of mobile signal strength was enough for them to know this. Second, the logical order is to check the signal strength first and then place the sensor, not the other way around. We suggest adding a label next to the signal strength widget so the user knows what counts as a good enough signal strength; a sketch of such a label follows.
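A minimal JavaScript sketch of the suggested label (the dot cut-offs are assumptions based on our observations, not measured figures):

function describeSignal(dots) {
  // During our deployment 4 and 3 dots were always sufficient, and
  // 2 dots sometimes were, so the label errs on the side of reassurance.
  if (dots >= 3) return 'Good signal - you can place the sensor here';
  if (dots === 2) return 'Usable signal - consider moving closer to the base station';
  return 'Weak signal - move the sensor closer to the base station';
}

console.log(describeSignal(4)); // 'Good signal - you can place the sensor here'
console.log(describeSignal(1)); // 'Weak signal - move the sensor closer...'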

DISCUSSION
The results of the evaluation of our IoT system highlighted its strengths and weaknesses. Two of our three visualisations proved effective in helping participants locate noisy activities in time and form an informed opinion on whether they were noisy individuals. The summary visualisation, designed for identifying patterns in noisy behaviour across time, produced less optimistic results, to the point of failing to engage 3 out of 5 participants. Both the interviews and the usability evaluation also demonstrated there is room for improvement in our design. With this knowledge, we now discuss the results derived from the thematic analysis of the interviews and the researcher's observations.

Usability in the Wild
We identified several usability problems during our field study that we could not have anticipated when running our usability tests without users. This is understandable, given that it was the first time we evaluated the application with users, and during a field study. Some practitioners have noted that testing in the lab surfaces most usability problems when budget and deadlines constrain the study [30]. However, since our interaction is heavily dependent on context, i.e. users' individual homes, we are likely to find more usability problems during a field study, as participants behave differently [15], even though the evaluation may take longer.

Interacting in High Fidelity
Our design called for a high-fidelity prototype, and the aesthetics of the interface resembled a fully operational product. Lim et al. [36] observed, when comparing usability evaluations with high- and low-fidelity prototypes, that participants did not correctly interpret graphical elements of the interface in refined prototypes when these did not match modern GUI conventions. In our study, although participants were told they were interacting with a prototype, they complained frequently during the configuration of the system about graphical elements that appeared

static on screen (the sync icon in Figure 13 and the dB meter in Figure 16). We included these graphical elements because we knew we had to convey to the user that the system was running a task in the background for their benefit, but the prototype did not reach that far. Our recommendation here is to be careful with mismatches between the high-fidelity prototype and the user's mental model of a finished product.

Looking for Patterns
There are risks associated with innovation, and the evaluation of our Summary View graph flagged severe usability problems: it demanded too much cognitive load from three participants. The task of recognising patterns in noise data was hampered by two factors: excessive clutter on the inner rings, and an incorrect use of metaphors. Clutter can obscure the structure of the data and make it difficult to discover relationships and patterns [40]; each ring represents a day and plots all the noise data for that day (around 1,000 data points after applying a peak function), which did not display nicely on the tablet devices for the rings with smaller radii. The second problem was that we did not do a good job of making prominent that the 7 days of noise data were displayed in the form of a 24-hour clock, and participants were confused; we believe more labels showing the hours of the dial would have helped. On a more positive note, 2 participants were able to use the visualisation effectively, which tells us we took some steps in the right direction.
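For illustration, a simple peak-style reduction could look as follows (a sketch under our own assumptions; the report does not specify the exact peak function used): keep only the loudest reading in each fixed-size bucket, so a full day of samples collapses to roughly 1,000 points per ring.

function peakReduce(samplesDb, bucketSize) {
  const peaks = [];
  for (let i = 0; i < samplesDb.length; i += bucketSize) {
    // Keep the loudest sample in each bucket so short, loud events survive.
    peaks.push(Math.max(...samplesDb.slice(i, i + bucketSize)));
  }
  return peaks;
}

// e.g. 86,400 one-second readings reduce to 1,005 points with 86-sample buckets
const day = Array.from({ length: 86400 }, () => 40 + Math.random() * 50);
console.log(peakReduce(day, 86).length); // 1005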

The Data Is Ready
Perera et al. [41] highlight event detection as one of the most important features of IoT, and discuss notifications as a way for the IoT to fulfil its vision of machine-to-machine and machine-to-human communication. 2 of our participants admitted they were not engaged by the technology because that week was a busy period or they simply forgot the system was installed. Sending notifications to their devices to remind them that a new daily report was available was suggested. It is important for practitioners to keep in mind the effects notifications and interruptions have on user performance [37, 7, 5] before adopting reminders as the solution for keeping their IoT system from going unnoticed.

How Did I Sleep?
A participant surprised us by using all the visualisations to "evaluate" their sleeping patterns; they ascribed meaning to elements we initially thought less relevant in the data, the blue and green coloured areas, which we had not even thought of annotating. We did not anticipate this use of our technology, and it was not within our set of user requirements. In-the-wild studies have demonstrated in the past how people approach technology differently depending on their own situations and purposes [43]. This demonstrates three things:

• Technology is a tool and users decide how to use it.

• Users are extremely adaptable.

• Sound data can reveal a lot of things about ourselves.

Limitations
All the results presented in this section come with a series of limitations regarding the in-the-wild study, the evaluation methods, and the technology deployed.

Deployment
Our IoT system was only deployed in two flats, with two and three inhabitants respectively; it has not yet been scaled up to a bigger flat or a house with more users. Our margins were narrow and we lacked the time and hardware necessary to deploy in a third house, but we believe a sample of around 20 participants would have been more representative. Finally, each of the 2 field studies lasted only one week; it would have been interesting to see the effect of a longer run on engagement with the technology. Chamberlain et al. [9] note there is a lack of consensus in HCI about the duration of field studies, as it sometimes takes years to see the long-term effects of a technology.

Evaluation Methods
Other in-the-wild studies [12, 13] have used interaction logs to quantify user engagement with the technology; our data about user engagement surfaced only from the interviews, so triangulating between the two sources would have given us a clearer picture.

Technology
At this time our sensor nodes come without a battery and need to be plugged into the mains. This makes our IoT system less autonomous, less ubiquitous, and more dependent on a power source. Besides, we did not answer how to solve the eventual problem of deploying in a larger household where one Raspberry Pi working as a base station cannot cover all the rooms; the interaction design of the system would need to be adjusted for this case and further tested in the wild.

Future Work
Further Iterations
We discovered usability problems in our design directly from the in-the-wild study; these need to be addressed before we can consider a second study. We have also already started thinking about how to replace the visualisation that was unpopular amongst our participants with one that fixes all the usability problems of the old design. If we plot the sound data of the seven days in stacked form, the data from one week ago would not be obscured and any patterns arising in the data would be easier to identify; we can also take advantage of similarities with the Yesterday View graph, which participants liked (Figure 25).

Long-Term Effects
This study focused on users making sense of their DIY noise; researching what effects the technology may have on noisy behaviours when used for a considerable amount of time looks like a good opportunity and contribution in HCI.

Other Cultures
We tested our technology only in London. Although it is a city well known for its diversity, it seems to us that taking our technology to other cities where other policies regarding noise

Figure 25. A variation of the Summary View without the usability problems found during the in-the-wild study

apply, or where other cultures with a different approach to noise live, could be a good opportunity to test its robustness.

Involve Authorities
Noise policy makers could be included as stakeholders in the project; that way we could take our technology where the problem is most prominent and evaluate its impact.

Battery Expenditure
Following directly from the technical limitations, this future work would involve diving into the libraries already available for sleeping and waking SAMD21 microcontroller boards, and making them work while still allowing continuous sensing of environmental audio.

Noise Identification
Our solution requires users to remember which activities their noise data is linked to. In the literature review we covered other studies using audio recognition in IoT projects. The possibility of analysing sound frequencies and identifying noise sources, thus relieving users of some effort, exists and could be explored.

CONCLUSIONS
This project started from a novel premise, on ground where HCI has not yet fully developed roots. We saw an opportunity for technological intervention in domestic noise and in exploring how it affects people daily. Following the iterative design process described in this document, we progressed from a user requirements elicitation stage to the development of a functional high-fidelity prototype combining IoT technology and sound data visualisations. We took our IoT system into the wild and deployed it in two London flats for a one-week study each. Our evaluation, based on qualitative data, revealed that users effectively made sense of the noise they force onto others by identifying past events when looking at the noise generated by those activities, forming an opinion of whether or not they are noisy individuals by looking at the accumulated sound data, and, in some cases, identifying patterns in noise occurrences when the historic data was plotted for comparison.

We faced two main challenges during the development of the project. On one hand, the technical challenge of designing

and evaluating an IoT system in situ, capable of sensing audio at home; we solved this by building a radio network of microphone-and-microcontroller sensor nodes designed to be installed and configured by non-experts in their own homes. On the other hand, we envisioned and implemented three data visualisations that enable users to derive meaning from the sound data generated by their personal activities. The final prototype was completed with a set of tablet devices on which users interact with a web application that displays the data visualisations and guides them through setting up the system.

Our evaluation unearthed usability problems with the system, deriving both from the high-fidelity nature of the prototype and from the situated nature of our technology. Evaluating these technologies in the wild should be the norm because, as noted by Rogers et al. [43], the researcher has to make sense of data where different factors and interdependencies are at play and might be causing the observed effect. This evaluation paved the way to tweak and improve the technology beyond what our prototype achieved. We also witnessed how people appropriate technology on their own terms, which gives us a lot of hope for continued research on the effects of DIY noise and the value of the data we worked with.

Finally, we suggested different directions HCI researchers can take from here. Technically, the prototype is not perfect and should be taken closer to the "sweet spot"; other directions include involving more stakeholders, investigating how the technology behaves in the long run and in other places, and diving deeper into sound data. There is plenty of future work.

ACKNOWLEDGEMENTS
Massive thanks: To my supervisor Enrico Costanza, because he knows what he's doing all the time and, when you think you are hitting a wall, he appears and simply opens the door you were not able to see. To Diana Nowacka, because she shared her super intelligence with me for this project, and she's the best of the best at electronics, digital fabrication, and everything really. To P1, P2, P3, P4, and P5, for understanding with a smile that it was the system being put to the test, not them.

Thanks and <3: To my parents, for encouraging and indulging me in this adventure, which is the turning point in my professional career. They are the best and the most handsome.

REFERENCES
1. The Decibel Table (PDF). (n.d.). http://www.philophony.com/sensprop/archacoustics/pics/decibel_scale.pdf (Accessed on 09/11/2017).
2. Table chart sound pressure levels SPL (PDF). (n.d.). http://www.siue.edu/~gengel/ece476WebStuff/SPL.pdf (Accessed on 09/11/2017).
3. Jimmy Azar, Hassan Abou Saleh, and Mohamad Adnan Al-Alaoui. 2007. Sound Visualization for the Hearing Impaired. International Journal of Emerging Technologies in Learning 2, 1 (2007).
4. Mara Balestrini, Tomas Diez, Paul Marshall, Alex Gluhak, and Yvonne Rogers. 2015. IoT community technologies: leaving users to their own devices or orchestration of engagement? EAI Endorsed Transactions on Internet of Things 1, 1 (2015).
5. Matthias Böhmer, Christian Lander, Sven Gehring, Duncan P Brumby, and Antonio Krüger. 2014. Interrupted by a phone call: exploring designs for lowering the impact of call notifications for smartphone users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3045–3054.
6. Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101.
7. Duncan P Brumby, Anna L Cox, Jonathan Back, and Sandy JJ Gould. 2013. Recovering from an interruption: Investigating speed-accuracy trade-offs in task resumption behavior. Journal of Experimental Psychology: Applied 19, 2 (2013), 95.
8. John V Carlis and Joseph A Konstan. 1998. Interactive visualization of serial periodic data. In Proceedings of the 11th Annual ACM Symposium on User Interface Software and Technology. ACM, 29–38.
9. Alan Chamberlain, Andy Crabtree, Tom Rodden, Matt Jones, and Yvonne Rogers. 2012. Research in the wild: understanding 'in the wild' approaches to design and development. In Proceedings of the Designing Interactive Systems Conference. ACM, 795–796.
10. Jianfeng Chen, Alvin Harvey Kam, Jianmin Zhang, Ning Liu, and Louis Shue. 2005. Bathroom activity monitoring based on sound. In International Conference on Pervasive Computing. Springer, 47–61.
11. William S Cleveland and Robert McGill. 1984. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American Statistical Association 79, 387 (1984), 531–554.
12. Enrico Costanza, Ben Bedwell, Michael O Jewell, James Colley, and Tom Rodden. 2016. 'A bit like British Weather, I suppose': Design and Evaluation of the Temperature Calendar. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 4061–4072.
13. Enrico Costanza, Sarvapali D Ramchurn, and Nicholas R Jennings. 2012. Understanding domestic energy consumption through interactive visualisation: a field study. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 216–225.
14. Pan Du, Warren A Kibbe, and Simon M Lin. 2006. Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching. Bioinformatics 22, 17 (2006), 2059–2065.
15. Henry Been-Lirn Duh, Gerald CB Tan, and Vivian Hsueh-hua Chen. 2006. Usability evaluation for mobile device: a comparison of laboratory and field tests. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, 181–186.
16. Hoperf Electronics. 2017. RFM69HCW ISM Transceiver Module v1.1. Sparkfun. https://cdn.sparkfun.com/datasheets/Wireless/General/RFM69HCW-V1.1.pdf
17. Carolyn Ellis, Tony E Adams, and Arthur P Bochner. 2011. Autoethnography: an overview. Historical Social Research/Historische Sozialforschung (2011), 273–290.
18. Sam Ferguson, Andrew Vande Moere, and Densil Cabrera. 2005. Seeing sound: Real-time sound visualisation in visual feedback loops used for training musicians. In Information Visualisation, 2005. Proceedings. Ninth International Conference on. IEEE, 97–102.
19. James Fogarty, Carolyn Au, and Scott E Hudson. 2006. Sensing from the basement: a feasibility study of unobtrusive and low-cost home activity recognition. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology. ACM, 91–100.
20. Alex Genov, Mark Keavney, and Todd Zazelenchuk. 2009. Usability testing with real data. Journal of Usability Studies 4, 2 (2009), 85–92.
21. Jayavardhana Gubbi, Rajkumar Buyya, Slaven Marusic, and Marimuthu Palaniswami. 2013. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems 29, 7 (2013), 1645–1660.
22. William R Hazlewood, Erik Stolterman, and Kay Connelly. 2011. Issues in evaluating ambient displays in the wild: two case studies. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 877–886.
23. Steven Houben, Connie Golsteijn, Sarah Gallacher, Rose Johnson, Saskia Bakker, Nicolai Marquardt, Licia Capra, and Yvonne Rogers. 2016. Physikit: Data engagement through physical ambient visualizations in the home. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 1608–1619.
24. Adafruit Industries. 2017a. Adafruit Feather 32u4 with RFM69HCW Packet Radio - 433MHz ID: 3077. (2017). https://www.adafruit.com/product/3077
25. Adafruit Industries. 2017b. Adafruit Feather M0 RFM69HCW Packet Radio - 433MHz ID: 3177. (2017). https://www.adafruit.com/product/3177
26. Adafruit Industries. 2017c. Adafruit I2S MEMS Microphone Breakout - SPH0645LM4H ID: 3421. (2017). https://www.adafruit.com/product/3421
27. Adafruit Industries. 2017d. Adafruit Silicon MEMS Microphone Breakout - SPW2430 ID: 2716. (2017). https://www.adafruit.com/product/2716
28. Adafruit Industries. 2017e. Arduino Zero - 32 bit Cortex M0 Arduino with Debug Interface ID: 2843. (2017). https://www.adafruit.com/product/2843
29. Michael O Jewell, Enrico Costanza, and Jacob Kittley-Davies. 2015. Connecting the things to the internet: an evaluation of four configuration strategies for wi-fi devices with minimal user interfaces. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 767–778.
30. Anne Kaikkonen, Aki Kekalainen, Mihael Cankar, Titti Kallio, and Anu Kankainen. 2005. Usability testing of mobile applications: A comparison between laboratory and field testing. Journal of Usability Studies 1, 1 (2005), 4–16.
31. Karin Kappel and Thomas Grechenig. 2009. Show-me: water consumption at a glance to promote water conservation in the shower. In Proceedings of the 4th International Conference on Persuasive Technology. ACM, 26.
32. Stacey Kuznetsov and Eric Paulos. 2010. UpStream: motivating water conservation with low-cost water flow sensing and persuasive displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1851–1860.
33. Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale. 2012. Empirical studies in information visualization: Seven scenarios. IEEE Transactions on Visualization and Computer Graphics 18, 9 (2012), 1520–1536.
34. Nuria Lambea Llop. 2017. A policy approach to the impact of tourist dwellings in condominiums and neighbourhoods in Barcelona. Urban Research & Practice 10, 1 (2017), 120–129.
35. Henrik Lanng, Anders Lykkegaard, Søren Thornholm, Philip Tzannis, and Nervo Verdezoto. 2016. PeaceKeeper: Augmenting the Awareness and Communication of Noise Pollution in Student Dormitories. In Proceedings of the 7th Augmented Human International Conference 2016. ACM, 39.
36. Youn-kyung Lim, Apurva Pangam, Subashini Periyasami, and Shweta Aneja. 2006. Comparative analysis of high- and low-fidelity prototypes for more valid usability evaluations of mobile devices. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles. ACM, 291–300.
37. Christopher A Monk, J Gregory Trafton, and Deborah A Boehm-Davis. 2008. The effect of interruption duration and demand on resuming suspended goals. Journal of Experimental Psychology: Applied 14, 4 (2008), 299.
38. Jakob Nielsen. 1995. 10 usability heuristics for user interface design. Fremont: Nielsen Norman Group (1995).
39. Donald A Norman. 1993. Things That Make Us Smart. (1993).
40. Wei Peng, Matthew O Ward, and Elke A Rundensteiner. 2004. Clutter reduction in multi-dimensional data visualization using dimension reordering. In Information Visualization, 2004. INFOVIS 2004. IEEE Symposium on. IEEE, 89–96.
41. Charith Perera, Arkady Zaslavsky, Peter Christen, and Dimitrios Georgakopoulos. 2014. Context aware computing for the internet of things: A survey. IEEE Communications Surveys & Tutorials 16, 1 (2014), 414–454.
42. Diana Pope. 2010. Decibel levels and noise generators on four medical/surgical nursing units. Journal of Clinical Nursing 19, 17-18 (2010), 2463–2470.
43. Yvonne Rogers, Helen Sharp, and Jenny Preece. 2011. Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons.
44. Flora Salim and Usman Haque. 2015. Urban computing in the wild: A survey on large scale participation and citizen engagement with ubiquitous computing, cyber physical systems, and Internet of Things. International Journal of Human-Computer Studies 81 (2015), 31–48.
45. Frank Stajano and Ross Anderson. 2002. The resurrecting duckling: security issues for ubiquitous computing. Computer 35, 4 (2002), supl22–supl26.
46. Matthias Stevens and Ellie D'Hondt. 2010. Crowdsourcing of pollution data using smartphones. In Workshop on Ubiquitous Crowdsourcing.
47. Janet Van Der Linden, Rose Johnson, Jon Bird, Yvonne Rogers, and Erwin Schoonderwaldt. 2011. Buzzing to play: lessons learned from an in the wild study of real-time vibrotactile feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 533–542.
48. Graham F Welch, C Rush, and DM Howard. 1988. The SINGAD (SINGing Assessment and Development) system: First applications in the classroom. Proceedings of the Institute of Acoustics 10, 2 (1988), 179–185.
49. Paweł Wozniak, Andrzej Romanowski, Filip Proborszcz, Martyna Borkowska, Dominik Stozek, and Bartosz Koczorowicz. 2012. Maintaining proper conditions in quiet study rooms with ambient influence. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design. ACM, 787–788.
50. Nathan Yau. 2012. Visualize This! John Wiley & Sons.
51. Ji Soo Yi, Youn-ah Kang, John T Stasko, and Julie A Jacko. 2008. Understanding and characterizing insights: how do people gain insights using information visualization? In Proceedings of the 2008 Workshop on BEyond Time and Errors: Novel Evaluation Methods for Information Visualization. ACM, 4.