
MusicID: A Brainwave-based User Authentication System for Internet of Things

arXiv:2006.01751v1 [cs.CR] 2 Jun 2020

Jinani Sooriyaarachchi, Suranga Seneviratne, Kanchana Thilakarathna, and Albert Y. Zomaya

Abstract—We propose MusicID, an authentication solution for smart devices that uses music-induced brainwave patterns as a behavioral biometric modality. We experimentally evaluate MusicID using data collected from real users whilst they are listening to two forms of music: a popular English song and the individual's favorite song. We show that an accuracy of over 98% for user identification and over 97% for user verification can be achieved using data collected from a 4-electrode commodity brainwave headset. We further show that a single electrode is able to provide an accuracy of approximately 85%, and the use of two electrodes provides an accuracy of approximately 95%. As already shown by commodity brain-sensing headsets for meditation applications, we believe including dry EEG electrodes in smart-headsets is feasible, and MusicID has the potential of providing an entry-point and continuous authentication framework for the upcoming surge of smart devices mainly driven by Augmented Reality (AR)/Virtual Reality (VR) applications.

Index Terms—Behavioural Biometrics, Smart Sensing, EEG, Authentication, IoT

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

I. INTRODUCTION

Smart head-mounted devices of various kinds, such as smart glasses, mixed reality headsets, and smart earbuds, are becoming increasingly popular for engaging with and controlling smart Internet of Things (IoT) environments. This is in addition to the commonly used Bluetooth headsets, which are the most sold wearable globally by a significant margin [1]. Moreover, the rapid growth of mixed reality in multiple application domains such as gaming, education, and health will also ensure that the usage of smart-headsets, especially the ones capable of standalone operation, will continue to increase. Secure and robust user authentication on such headsets is of paramount importance due to the unprecedented sensitivity of the information that is sensed, stored, and transmitted by these devices, such as health-care data, financial records, user location, and surrounding environmental information. In industrial and health-care settings, it is necessary to make sure that these devices are worn by authorized personnel only. Also, in personal smart-home settings, authentication of the user enables personalized service delivery.

Existing methods of authentication such as passwords and PINs are not observation resistant and are less secure due to reuse. Attempts to increase the complexity of passwords by adding more constraints on character, numeral, and symbol combinations reduce usability.

J. Sooriyaarachchi was with Data61-CSIRO, Australia. S. Seneviratne, K. Thilakarathna, and A. Y. Zomaya are with the School of Computer Science, The University of Sydney, Australia.

The other alternative, static biometrics such as fingerprints and face IDs, is prone to replay attacks and spoofing [2]. As a result, behavioral biometrics are emerging as an exciting new alternative. Behavioral biometrics focus on behavioral modalities that are potentially unique to individuals, such as typing patterns [3], touch patterns [4], gait [5], and heart rate [6]. Behavioral biometrics are more secure as they are difficult to mimic, record, and synthesize, and they can readily be used for implicit authentication [7]. The main challenge in behavioral biometrics is finding modalities that can be captured non-intrusively and are sufficiently consistent over time so that they can be used for user re-identification. Incorporating behavioral biometrics into standalone Internet of Things devices such as smart-headsets is further challenging due to their limited user interaction capabilities.

In this paper, we propose MusicID, a novel behavioral biometric for user authentication in Internet of Things environments, which measures the emotional behavior and reactions of the user to music through EEG (Electroencephalogram) signals. Recent advances in head-mounted wearables made non-invasive EEG capture possible through soft dry electrodes, as demonstrated by commodity headsets such as Muse [8] and Neurosky [9]. These sensors can be easily integrated with smart-headsets that are increasingly becoming popular in both smart-home and industrial Internet of Things. The possibility of using brainwave patterns for authentication makes inroads towards a unique and hard-to-spoof behavioral biometric modality that can be useful not only for smart-headsets such as Microsoft HoloLens and Oculus Rift, but also in general as a universal authentication solution.

We make the following contributions in this paper.

• We show that music-stimulated EEG patterns are potentially unique to individuals by collecting EEG data from 20 users over one and a half months using the Muse brain-sensing headband. We measure brain reactions to each individual's favorite song as well as a reference song and show that our observations are statistically significant through an ANOVA test (p < 0.0001).

• We develop Random Forest classifiers for both user identification and user verification tasks using features extracted from decomposed EEG signals and show that over 97% accuracy can be achieved.

• We investigate the trade-off between accuracy and the number of electrodes and frequency bands. Our results show that the use of the two front electrodes gives an accuracy of 94%. Also, we show that all four brainwave frequency bands are necessary for high accuracy.

The rest of the paper is organized as follows. In Section II, we provide background information on EEG and brain behaviors. Section III describes our methodology and Section IV presents results. We discuss related work in Section V, followed by a discussion of implications, limitations, and future work in Section VI. Section VII concludes the paper.


TABLE I: Summary of different brainwaves

Type    Frequency        Activities
Delta   0.5-4.0 Hz       Deep sleep
Theta   4.0-7.5 Hz       Meditation, light sleep, vivid visualizations, creativity, emotional connection, relaxation
Alpha   7.5-12.0 Hz      Deep relaxation with eyes closed, state of concentration, memorizing, learning, visualization
Beta    12.0-30.0 Hz     Normal waking conditions, daily consciousness
Gamma   30.0-100.0 Hz    High-level information processing in improved memory conditions, sudden insights

Fig. 1: Standard 10-20 system (two forehead sensors AF7/AF8, two back-of-the-ear sensors TP9/TP10, and three reference sensors Fp1/Fpz/Fp2, relative to the nasion and inion)


II. BACKGROUND

EEG (Electroencephalogram) measures the electrical activity in the human brain. EEG data is captured by measuring the voltage difference between two electrodes placed on the scalp. More specifically, EEG data consists of inhibitory and excitatory post-synaptic potentials generated in the pyramidal cells of the brain cortex depending on the mental activity. Whenever the brain receives stimuli from our senses such as vision, hearing, touch, pain, taste, and smell, EEG signals vary in magnitude as well as in frequency. EEG brain signals can be decomposed into five bands based on their frequencies: Alpha, Beta, Theta, Delta, and Gamma. Each of these frequency bands has its own characteristics depending on the brain activity and the state of consciousness. Table I summarizes the frequencies of each brainwave band and the activities that generate them.
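As an illustration, the following is a minimal sketch of how a raw EEG trace could be decomposed into the bands of Table I using Butterworth band-pass filters; this is an assumed approach for exposition, not the authors' implementation (commodity headsets such as Muse report absolute band powers directly).

```python
# Sketch: band decomposition of raw EEG with zero-phase Butterworth
# band-pass filters. Band edges follow Table I; fs = 220 Hz matches the
# sampling rate used in Section III.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {                       # Hz, per Table I
    "delta": (0.5, 4.0),
    "theta": (4.0, 7.5),
    "alpha": (7.5, 12.0),
    "beta":  (12.0, 30.0),
    "gamma": (30.0, 100.0),
}

def decompose(raw: np.ndarray, fs: float = 220.0) -> dict:
    """Return one band-limited signal per brainwave band."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        hi = min(hi, fs / 2 * 0.99)          # stay below the Nyquist rate
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        out[name] = filtfilt(b, a, raw)      # zero-phase filtering
    return out
```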

The brain structure of every human being is unique and highly influenced by DNA [10]. Also, how a person's brain responds to a stimulus is highly dependent on that person's previous experiences. As such, brainwave patterns generated for some activities can potentially be used as an authentication modality, as they can be unique to individuals and difficult to spoof.

Brainwaves can be measured using invasive techniques, in which electrodes are placed under the scalp, or non-invasive techniques, in which electrodes are placed on the scalp. Scalp electrodes can detect faint electrical signals on the surface of the head that represent the underlying neural activity. Figure 1 shows the standard 10-20 electrode locations commonly used to place electrodes on the scalp.

Various studies found changes in brainwaves induced by the emotions and aesthetic pleasure of music. These include Beta bands playing a major role in music processing [11], Gamma brainwaves confined to subjects with music training [12], and high Theta brainwave power in the frontal midline in response to pleasant versus unpleasant music [13]. Motivated by this background, in this paper we explore the feasibility of using music-induced brainwaves as a behavioral biometric modality.

Fig. 2: Methodology: brainwave data collected from 20 users (favorite song and same song) are pre-processed, framed, and converted to features; the resulting models perform user identification (multi-class classification) and user verification (one-vs-rest classification)


III. METHODOLOGY

In this section we describe our experiment design and data collection process, followed by the data pre-processing steps, the features used, and classifier details. The overall summary of our methodology is illustrated in Figure 2.

A. Data Collection

We recruited 20 volunteers (11 females and 9 males, aged between 20 and 40) to participate in our experiment. The participants were students and research staff associated with Data61, CSIRO. The experiment spanned two months and there was a minimum one-week gap between two sessions of a single participant. Each user participated in at least two sessions. Our experiment was approved by the CSIRO Human Research Ethics Committee (Ref. No. 104/17).

During each session the participants performed two tasks: i) listening to a popular English song¹ and ii) listening to the individual's favorite song, whilst wearing the Muse [8] brain-sensing headset. The Muse headset is equipped with 4 electrodes in the standard 4-channel configuration (TP9, AF7, AF8, TP10, as shown in Figure 1). The duration of each experiment was 150 seconds and the participants closed their eyes during the experiment. This was done for two main reasons: i) to avoid distractions and head movements that may affect the measurements, and ii) eye blinks are known to impact EEG readings [14] as a result of the large potentials evoked above and below the eye during eye blinks and eye movements.

In the Muse Monitor application [15] running on a Google Pixel phone, we selected a sampling rate of 220 Hz and a recording interval of 0.5 seconds. The recording interval defines when the data is recorded to the output .csv file. Thus, at the end of each session with this recording interval, 300 data samples (150 s / 0.5 s) were collected for the task of listening to the reference song and another 300 data samples for the task of listening to the favorite song. Each sample included 24 readings: the absolute brainwave values of Alpha, Beta, Theta, Delta, and Gamma, and the raw EEG (without separation into sub-bands), from each of the 4 channels.

¹https://www.youtube.com/watch?v=JRfuAukYTKg


Figure 3 shows the setup of the experiment, and in Figure 4 we show example brainwave data streams of a user collected from the AF7 electrode for the two experiment scenarios.

Fig. 3: Experimental setup (Muse headset with electrodes AF7, AF8, TP9, and TP10, referenced at Fpz)

Fig. 4: Example brainwave data streams from AF7 (absolute power in dB of Theta, Alpha, Beta, and Gamma over 300 samples): (a) listening to favorite song; (b) listening to same song

B. Data pre-processing and features

Since Delta brainwave frequencies are related to deep sleep and are not significant in our experimental tasks, we removed the 4 absolute Delta brainwave readings of the 4 channels and used the remaining 20 readings.

We framed the 300 samples from each session into 14 overlapping frames with 50% overlap, as shown in Figure 5. For each frame we had 20 readings: Alpha, Beta, Theta, Gamma, and raw EEG values coming from each of the four electrodes. For each reading we calculated the mean, maximum, minimum, and zero crossing rate (ZCR), giving us 80 features per frame. Thus, one listening task in a session gave us 14 data samples of size 1 × 80.
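The following is a minimal sketch of this framing and feature extraction, assuming a frame length of 40 samples with a hop of 20 (which yields 14 frames from 300 samples, consistent with Figure 5); it is illustrative rather than the authors' released code.

```python
# Sketch: 300 samples/session -> 14 frames with 50% overlap; per reading,
# four statistics (mean, max, min, ZCR) -> 80 features per frame.
import numpy as np

def zcr(x: np.ndarray) -> float:
    """Zero crossing rate: fraction of consecutive samples changing sign."""
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def frame_features(session: np.ndarray, frame_len: int = 40) -> np.ndarray:
    """session: (300, 20) array of readings (4 electrodes x 5 waveform types).
    Returns an (n_frames, 80) feature matrix."""
    hop = frame_len // 2                                  # 50% overlap
    feats = []
    for start in range(0, session.shape[0] - frame_len + 1, hop):
        frame = session[start:start + frame_len]          # (frame_len, 20)
        stats = [np.array([f.mean(), f.max(), f.min(), zcr(f)])
                 for f in frame.T]                        # 4 stats x 20 readings
        feats.append(np.hstack(stats))                    # 1 x 80 vector
    return np.asarray(feats)
```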

C. Classifier

We built Random Forest classifiers for two authentication settings: user identification and user verification. The choice of Random Forest was decided empirically, based on the fact that it performed better than Support Vector Machines and Logistic Regression, as evaluated in Section IV-B.

Fig. 5: Framing 300 samples into 14 overlapping frames (frame boundaries at samples 0, 20, 40, ..., 300); each frame yields a 1 × 80 feature vector from 4 sensors (AF7, AF8, TP9, TP10) × 5 waveform types (Theta, Alpha, Beta, Gamma, raw EEG) × 4 features (min, max, mean, ZCR)


For user identification, we used Random Forest classifiers in a multi-class setting. For user verification, we used Random Forest classifiers in a One-vs-Rest configuration, where a classifier is built on a per-user basis to predict whether a given frame belongs to a particular user or not. To avoid over-fitting, we performed 10-fold cross validation. We used 80% of the samples from each session for training and the remaining 20% for testing. We avoided two consecutive feature vectors going into the training and test sets: since there is a 50% overlap between them, allowing it would cause information transfer from the training set to the test set, artificially increasing the classifier performance. Table II shows the number of samples for each user in the training and test sets. The training and test dataset sizes are identical for both experiment scenarios. A sketch of the two classifier configurations follows Table II.

TABLE II: Summary of training and test datasets

Users        No. of sessions   Training set size   Test set size
User 1-5     5                 55                  15
User 6       4                 44                  12
User 7-11    3                 33                  9
User 12-20   2                 22                  6
Total                          682                 186
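The following is a minimal sketch of the two classifier configurations using scikit-learn, an assumed toolchain (the paper does not name its implementation); the depth parameter mirrors the values explored in Section IV-B.

```python
# Sketch: multi-class Random Forest for identification and a one-vs-rest
# Random Forest (one binary model per user) for verification.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier

def build_models(X_train, y_train, max_depth=10):
    # User identification: predict the user ID from a closed set.
    ident = RandomForestClassifier(max_depth=max_depth, random_state=0)
    print("10-fold CV accuracy:",
          cross_val_score(ident, X_train, y_train, cv=10).mean())
    ident.fit(X_train, y_train)
    # User verification: one binary classifier per user (one-vs-rest).
    verif = OneVsRestClassifier(
        RandomForestClassifier(max_depth=max_depth, random_state=0))
    verif.fit(X_train, y_train)
    return ident, verif
```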

IV. RESULTS

First, we present an analysis of our features using hypothesis testing, followed by the classifier performance for the user identification and verification tasks. Then, we further analyze the data to identify the most important features that differentiate users. Next, we provide results showing how the placement of electrodes affects the accuracy and whether we can use only a certain type of brainwaves for authentication. Finally, we perform cross-song training and testing to check whether the user responds to music in general or whether there are differences in responses when listening to favorite music and other music.

A. Statistical hypothesis testing

To check whether the features we extracted from brainwaves are specific to individual users, we conducted a statistical hypothesis test under the null hypothesis "features extracted from brainwaves do not have a relationship to the corresponding user" using ANOVA (Analysis of Variance). The resulting F-test value was 22.462, which corresponds to a p-value less than 0.0001, indicating that the null hypothesis can be rejected and that the features indeed have a statistical relationship to the individual users.


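The following sketch shows how such a test can be run per feature with SciPy, grouping each feature's values by user; this is an assumed formulation of the test described above, not the authors' exact procedure.

```python
# Sketch: one-way ANOVA per feature under the null hypothesis that the
# feature has no relationship to the corresponding user.
import numpy as np
from scipy.stats import f_oneway

def anova_per_feature(X: np.ndarray, y: np.ndarray):
    """X: (n_samples, n_features); y: user IDs. Returns (F, p) per feature."""
    users = np.unique(y)
    results = []
    for j in range(X.shape[1]):
        groups = [X[y == u, j] for u in users]   # one group per user
        results.append(f_oneway(*groups))        # F-statistic and p-value
    return results
```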

B. Classifier performance

i) User identification: In user identification, given a brainwave sample, the classifier predicts whose sample it is from a closed set of users. We trained Random Forest classifiers at different maximum depth levels using 10-fold cross validation for the two experiment scenarios. That is, we used the training dataset from listening to the same song and tested the models on the test set built using the same experiment scenario. Table III shows the performance of the classifiers.

TABLE III: User identification: accuracy on test set

Maximum Tree Depth   Listening to same song   Listening to favorite song
5                    86.56%                   86.02%
10                   98.39%                   98.39%
15                   98.39%                   99.46%

We obtained an accuracy of 98.39% for listening to the same song and an accuracy of 99.46% for listening to the favorite song with a Random Forest classifier with a maximum tree depth of 15. This corresponds to only 3 (1.61%) and 1 (0.54%) incorrectly classified samples in the test set, respectively. The accuracy does not increase further with the tree depth.

ii) User verification: In user verification, the classifier is given a sample and a user ID and needs to decide whether the given sample belongs to the said user or not. We show the performance of the classifier in Table IV. We were able to achieve an accuracy of 98.92% for listening to the same song and an accuracy of 97.31% for listening to the favorite song with a maximum tree depth of 10 for the Random Forest classifier in the One-vs-Rest configuration. We highlight that the classifier performance does not increase after a maximum depth of 10, and given the choice, it is always good to select a classifier with lower depth for better generalization.

TABLE IV: User verification: accuracy on test set

Maximum Tree Depth   Listening to same song   Listening to favorite song
5                    98.39%                   96.77%
10                   98.92%                   97.31%
15                   98.92%                   97.31%

In Figure 6 we show some example confusion matrices. Figure 6a shows the confusion matrix of the best classifier for user identification using the favorite song, which resulted in an accuracy of 99.46%. The only error it makes is identifying one sample from User-20 as User-16. Similarly, Figure 6b shows the best performing classifier for user verification using the favorite song, which gave an accuracy of 97.31%. Note that there is no significant difference in accuracy between the two experiment scenarios, listening to the same song and listening to one's favorite song. We analyze this further in Section IV-F.

Finally, as mentioned in Section III-C, we selected the Random Forest classifier because it was the best performing model out of the models we tested. For example, the SVM classifier achieved only 86.56% and 82.26% for the user identification and verification tasks respectively whilst listening to the favorite song.

Fig. 6: Confusion matrices of best classifiers (favorite song): (a) user identification; (b) user verification

For the task of listening to the same song, the SVM obtained an accuracy of 83.87% for user identification and an accuracy of 86.55% for user verification.

C. Feature importance & characteristics

To further understand the classifier performance, we next perform a feature importance analysis. We used the Gini impurity-based feature importance generated by the Random Forest classifiers.
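As an illustration, such a ranking can be obtained directly from a trained Random Forest; the sketch below assumes scikit-learn, and `feature_names` is a hypothetical list of the 80 feature labels (e.g., "Mean Alpha TP9").

```python
# Sketch: rank the 80 features by Gini impurity-based importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_features(X, y, feature_names, k=20):
    clf = RandomForestClassifier(max_depth=15, random_state=0).fit(X, y)
    order = np.argsort(clf.feature_importances_)[::-1]    # descending
    return [(feature_names[i], float(clf.feature_importances_[i]))
            for i in order[:k]]
```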


Fig. 7: Significance of individual features

Figure 7 shows the distribution of feature significance, and Table V lists the top-20 features for the favorite song scenario. The same song scenario shows similar behavior, with only 2 changes in the top-20 features. The mean values of Alpha brainwaves obtained from all 4 electrodes show a high feature importance because the experiment subjects closed their eyes while listening to the songs, making the Alpha brainwave intensity higher, as seen in Figure 4. Noticeably, there are only three features coming from Gamma waves, which is potentially due to their low intensities, as previously shown in Figure 4.

We show box plots of two example features of highest importance: the mean of the Alpha waves measured at electrode position TP9 in Figure 8a, and the mean of the Alpha waves measured at electrode position AF7 in Figure 8b. As Figure 8a shows, for many of the users the signal strength of Alpha waves varies within a limited range at the TP9 electrode, with the exceptions of User-14, User-18, and User-10. Figure 8b shows that at the AF7 electrode the Alpha wave signal strength varies more for a number of users, such as User-6, User-8, and User-14, compared to the TP9 electrode. This is potentially due to the variation in the regional activity of the brain stimulated by music and the fact that the TP9 electrode is relatively closer to the auditory cortex than the AF7 electrode.



The two least important features, the zero crossing rate (ZCR) of the Alpha wave from TP9 and the ZCR of the raw EEG from TP9, consist mainly of zeros. We show the distribution of the third least important feature in Figure 9. As the figure shows, for many users the mean raw EEG from TP10 behaves similarly (i.e., it has less predictive power).

TABLE V: Top-20 features by importance

Index   Feature           Index   Feature
1       Mean Alpha TP9    11      Min Beta TP9
2       Mean Alpha AF7    12      Mean Gamma AF7
3       Mean Alpha TP10   13      Max Alpha TP10
4       Mean Beta TP9     14      Min Alpha TP10
5       Max Alpha TP9     15      Min Raw TP10
6       Mean Theta AF8    16      Mean Gamma TP10
7       Mean Theta AF7    17      Mean Gamma AF8
8       Mean Alpha AF8    18      Mean Beta AF7
9       Max Raw TP10      19      Mean Theta TP10
10      Mean Beta TP10    20      Mean Beta AF8

Fig. 8: User-wise distribution of example significant features: (a) Mean Alpha-TP9 (dB); (b) Mean Alpha-AF7 (dB)

D. Electrode placement

We next obtain the accuracy using the data collected from individual electrodes and from two electrode-combination scenarios: the two front electrodes and the two rear electrodes. We used the selected electrodes' features from the 80-feature vectors of the total dataset given in Table II. In Figures 10a and 10b we show the accuracies for the two experiment scenarios for both user verification and user identification. As expected, when the number of electrodes decreases the accuracy drops, and the results show no significant differences in accuracy between individual electrodes. However, combining two electrodes provides an accuracy of approximately 95%, which is only about 4% below the all-electrode accuracy. This is a promising result, as it is easier to include two electrodes at the front of many consumer headsets.
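The following sketch illustrates such an electrode-subset evaluation by slicing the 80-feature vectors. It assumes an electrode-major feature layout (20 consecutive columns per electrode, ordered AF7, AF8, TP9, TP10); this ordering is a hypothetical convention for illustration, not one stated in the paper.

```python
# Sketch: train and score on the feature columns of selected electrodes only.
from sklearn.ensemble import RandomForestClassifier

ELECTRODES = ["AF7", "AF8", "TP9", "TP10"]     # 20 features per electrode

def electrode_columns(names):
    cols = []
    for n in names:
        base = ELECTRODES.index(n) * 20
        cols.extend(range(base, base + 20))
    return cols

def subset_accuracy(X_tr, y_tr, X_te, y_te, names):
    cols = electrode_columns(names)
    clf = RandomForestClassifier(max_depth=10, random_state=0)
    clf.fit(X_tr[:, cols], y_tr)
    return clf.score(X_te[:, cols], y_te)

# e.g., subset_accuracy(X_tr, y_tr, X_te, y_te, ["AF7", "AF8"])
```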

Fig. 9: User-wise distribution of a low-significance feature (Mean Raw TP10, µV)

Fig. 10: Accuracy (%) using different electrodes (all data; single electrodes AF7, AF8, TP9, TP10; and pairs AF7+AF8, TP9+TP10) for user identification and user verification: (a) same song; (b) favourite song

E. Significance of different brainwave frequencies

We next analyse whether some brainwaves show more differentiation between individual participants. We trained classifiers using only data from individual brainwave types and some combinations of them; the accuracies are shown in Figure 11 for the two experiment scenarios. As the figures show, none of the individual waves gives sufficient accuracy. Also, it is interesting to observe that despite contributing only three of the overall top-20 features (cf. Section IV-C), Gamma waves provide performance on par with Alpha and Beta waves. Figures 11a and 11b show that combining Alpha and Beta waves provides slightly higher performance compared to other combinations. This can be expected, as there were more Alpha- and Beta-related features among the top-20 significant features.

Fig. 11: Accuracy (%) using different brainwave frequencies (all data; single bands Alpha, Beta, Gamma, Theta; and pairs Alpha+Beta, Gamma+Theta, Beta+Gamma) for user identification and user verification: (a) same song; (b) favourite song

F. Cross song and combined data training and testing

Our classifier results shown previously indicated that both experiments, listening to the same song and listening to the favorite song, provide equally good results.


To check whether there is actually a difference in brainwave responses between the two scenarios, we performed cross-experiment training and testing; i.e., training a model using data from listening to the same song and testing the model's performance on test data from listening to the favorite song, and vice versa. We show the results in Table VI. The results show an approximately 20%-30% drop in accuracy in the cross-song setting. It is interesting to note that the models still achieve accuracies in the range of 64%-79%, indicating that there is some common response to music in the brain.

TABLE VI: Cross song training and testing

User identification                Tested on
Trained on        Favorite Song    Same Song
Favorite Song     99.46%           79.57%
Same Song         72.04%           98.39%

User verification                  Tested on
Trained on        Favorite Song    Same Song
Favorite Song     97.31%           77.96%
Same Song         64.52%           98.92%

Finally, we trained a classifier with the combined training dataset and tested it on the different test datasets. For user identification, the classifier gave an accuracy of 98.39% on the combined test dataset, and individual accuracies of 98.92% and 97.85% on the same song and favorite song test datasets, respectively.

G. Summary of results

In Table VII we provide a summarized view of all of our results, and below we list the main take-away messages.

• Individuals can be authenticated accurately using brainwaves extracted from audio stimulation of the brain; we were able to achieve accuracies in the range of 97%-99% for both user identification and verification tasks.

• The accuracies drop by only 2%-5% when only two electrodes are used. This is promising, as it is feasible to include two electrodes at the front of many consumer headsets.

• Feature importance analysis and frequency band-wise training indicated that features derived from Alpha and Beta waveforms have more predictive capability compared to other frequency bands.

TABLE VII: Summary of results

                   Favorite song                   Same song
Scenario           Identification  Verification   Identification  Verification
All data           99.46%          97.31%         98.39%          98.92%
Electrode position
AF7                86.02%          86.02%         87.63%          86.02%
AF8                85.48%          85.48%         80.10%          83.87%
TP9                89.78%          89.24%         90.86%          92.01%
TP10               89.78%          92.47%         86.55%          90.86%
AF7 + AF8          94.62%          94.62%         94.62%          93.54%
TP9 + TP10         94.62%          94.62%         94.62%          96.77%
Brainwave type
Alpha              79.56%          85.48%         77.96%          82.26%
Beta               82.80%          83.87%         80.11%          85.48%
Gamma              79.03%          81.72%         79.57%          82.26%
Theta              72.58%          72.04%         76.88%          77.96%
Alpha + Beta       91.93%          93.54%         94.09%          96.77%
Beta + Gamma       90.86%          92.47%         91.40%          92.47%
Gamma + Theta      88.17%          91.93%         95.70%          93.01%

V. RELATED WORK

A. Novel behavioral biometric modalities

Limitations of password/token-based systems as well as static biometrics such as fingerprints led to a large body of work exploiting behavioral biometrics as a means of user authentication. Such biometrics include gait [5], touch patterns [4], breathing acoustics [16], [17], and heart rate [6]. For example, Buriro et al. [18] proposed an authentication mechanism for smartphones that uses the built-in smartphone sensors to capture the behavioural signature of the way a user slides the lock button on the screen to unlock the phone and brings the phone towards the ear. A True Acceptance Rate of 99.35% was achieved using a Random Forest classifier with 85 users performing the unlocking actions in sitting, standing, and walking postures. Hadiyoso et al. [6] demonstrated a new biometric modality using ECG signals from 11 participants. Using empirical mode decomposition (EMD) and statistical analysis for feature extraction, the authors achieved an accuracy of 93.6% using discriminant analysis.

More recently, another set of modalities that extend the conventional "what you are" definition of behavioral biometrics to "what you perceive" is becoming popular, as these tend to be more resilient, reliable, and non-intrusive. Such biometrics also suit IoT devices that have limited user interactivity.

For instance, Loulakis et al. [19] proposed a quantum biometric method using the single-photon detection ability of the human retina. Using photon counting principles of human rod vision, the authors formed light flash patterns that only a specific individual can identify and theoretically showed that the false positive and false negative rates are lower than 10⁻¹⁰ and 10⁻⁴. Finn et al. [20] demonstrated that functional connectivity profiles of the brain observed by fMRI can potentially be unique to individuals. Using data collected from 126 participants, the authors showed that between 87%-99% accuracy can be achieved under different brain activities.

Our work contributes to the "what you perceive" category of behavioral biometrics by exploring the feasibility of using music-induced brainwave patterns as a novel authentication modality.

B. Use of EEG for user authentication

Several works looked at different forms of brain stimulation that can be used to generate EEG data for user authentication, such as visual [21], audio-visual [22], and mental tasks [23], [24], as well as the resting state [25].

Chiu et al. [21] used an eight-channel EEG headset (BR8) to capture brainwaves from 30 subjects. Using visually stimulated brainwave data from a set of 15 images (5 familiar, 5 déjà vu, and 5 unfamiliar), authentication tokens were retrieved, and the authors achieved an accuracy of 98.7% using an SVM classifier. Huang et al. [22] used the 14-channel Emotiv EPOC+ headset to focus on brainwaves evoked by an audio-visual paradigm of the subject's own face image and a voice calling their own name. With raw EEG collected from 30 adult subjects and a bagging-based ensemble machine learning model, the authors achieved an accuracy of around 92%.

Chuang et al. [23] used a single-channel EEG headset (Neurosky) to capture brainwaves from 15 subjects. Using 7 mental tasks in two repeated sessions, an accuracy of 99% was achieved with a cosine distance-based kNN classifier. Sohankar et al. [25] also used the Neurosky headset, but focused on brain behavior while the user is in the resting state, and achieved 95% accuracy for user authentication and 80% for user identification with 10 different subjects. Similarly, MindID [26] exploited Delta brainwaves for authentication (98.2% accuracy) while users relax with closed eyes, using the Emotiv EPOC+ with 8 subjects. In another study using the same device with 12 subjects, Jayarathne et al. [27] showed that 96.67% accuracy can be achieved using LDA for classification. Zeynali et al. [28] conducted a similar study using five mental activities and achieved accuracies in the range of 97-98% using a Neural Network classifier. Our work is different from these related works as we introduce a new form of non-invasive, music-based brain stimulation for EEG-based user authentication.



Similar to our work, Kaur et al. [29] explored the potential of using music stimulation for authentication. In an experiment with 60 users listening to 4 genres of music, accuracies of 97.5% and 93.83% were reached using Hidden Markov Models and SVM classifiers, respectively. Huang et al. [30] proposed a system targeting IoT applications. In contrast to this work, we use a different form of audio stimulation and our experiments span 1.5 months with multiple sessions.

C. Authentication solutions for head-mounted devices

Head-mounted devices of different types, such as smart glasses, smart ear-buds, and virtual/augmented reality headsets, are increasingly becoming standalone devices [31]. Due to the lack of user input modalities, several works looked into the possibility of password-less authentication mechanisms for head-mounted devices. Li et al. [32] proposed using the head movement patterns individuals generate in response to an audio stimulus to authenticate them to smart glasses. With accelerometer data from 30 users and simple distance-based threshold classifiers, the authors showed that an Equal Error Rate of 4.43% can be achieved. Using data collected from 20 users, Rogers et al. [33] proposed that blinks and head movements captured using infrared, accelerometer, and gyroscope sensors can be used for user identification with an accuracy of 94%. Mustafa et al. [34] proposed security-sensitive Virtual Reality (VR) authentication using the head, hand, and/or body movement patterns exhibited by a user freely interacting with a VR environment. With 23 users, the authors achieved a mean equal error rate of 7%. George et al. [35] investigated the third dimension for authentication in immersive virtual reality and in the real world. In the proposed system, users authenticate by selecting a series of 3D objects in a room using a pointer; the authors investigated the influence of randomized user and object positions in a series of user studies with 48 subjects.

In contrast to the above work, this paper proposes EEG as a feasible authentication modality for head-mounted devices, and to the best of our understanding this is the first work that demonstrates the feasibility of using a commodity brainwave headset for user authentication.

VI. DISCUSSION & FUTURE WORK

Below we discuss the implications of our results, potential application scenarios, limitations, and potential future work.

Application scenarios. Our results showed that it is possible to authenticate users with over 97% accuracy by capturing the brainwave patterns generated whilst a person is listening to music. The results are promising, and the fact that we were able to do this using a commodity headset suggests that brainwaves can be used as a potential entry-point and continuous authentication solution for smart-headsets. Also, we showed that two electrodes are sufficient to achieve around 95% accuracy. This is significant, as most commodity head-mounted devices such as smart glasses, ear-buds, and VR headsets can easily support two front electrodes. We believe the innovative applications that are expected to rise along with the IoT in education, remote support, tele-health, and many other domains will demand increased security mechanisms for head-mounted devices. To this end, MusicID provides an intrusion-free and secure solution.

Participant pool. We used a dataset collected from 20 voluntary participants spanning 1.5 months. 55% of the participants provided data in three or more sessions and 45% provided data in only two sessions. While we achieved over 97% accuracy, it is necessary to explore the feasibility of MusicID in a much larger participant pool that includes diverse demographics. However, we highlight that the majority of comparable studies (e.g., [23], [25], [26], [27], [32]) demonstrated their accuracies with fewer than 20 participants.

Contextual changes. Our results showed consistency over multiple sessions spanning 1.5 months, and we were able to re-identify users in new sessions based on previous sessions' data. Nonetheless, further experiments collecting data after different physical and mental activities and at different times of the day can help identify whether there are context-driven short-term variations that depend on physical activity, health conditions [36], stress levels, fatigue [37], and mood. Also, there can be long-term variations in brainwave responses (e.g., due to aging [38]), and a more robust model must be tuned over time to capture such variations. For such settings, learning models based on transfer learning can be leveraged.

Attacks. Brainwave patterns provide a unique biometric that is more secure compared to other biometrics. For example, it is difficult to record brainwave patterns for replay attacks: a person's fingerprint can be copied while the person is asleep, but this type of recording is not applicable to brainwaves, as brain activity is completely different whilst sleeping. Similarly, an attacker can record a person's voice from a distance and use voice synthesis to bypass a voice-based authentication system. However, brainwaves cannot be recorded at a distance and also have to be collected at the very specific moments when the user is engaged in a related activity, such as listening to music in this case. One possible attack is an attacker starting with some existing data and trying to synthesize the target's brainwave pattern, which appears to be highly difficult.

A. Future work

During our study we found another potential biometric modality, related to the electrical activity of the eye, which has the potential to be used as a standalone modality or in combination with EEG. Due to the potential difference between the cornea and the retina of the eye, eye blinks evoke an artifact waveform in the EEG known as the EOG (Electrooculogram), which can potentially be unique to individuals. The usage of EOG as a biometric system has been tested in [39] using electrodes placed on the face to capture the potential difference evoked by eye blinks. During our study we found that the Muse headset is capable of capturing the EEG signals during eye blinks, which can then be used to extract the EOG using signal processing techniques such as the Discrete Wavelet Transform.


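As an illustration, the following sketch separates a slow, blink-related component from an EEG trace with a Discrete Wavelet Transform using the PyWavelets package; the wavelet choice ('db4') and decomposition level are hypothetical, and this is not a method evaluated in the paper.

```python
# Sketch: estimate the low-frequency (blink/EOG-like) component of an EEG
# trace by keeping only the approximation coefficients of a DWT.
import numpy as np
import pywt

def extract_eog(raw: np.ndarray, wavelet: str = "db4", level: int = 6):
    """Return (eog_estimate, residual_eeg)."""
    coeffs = pywt.wavedec(raw, wavelet, level=level)
    # Zero the detail coefficients; reconstruct from the approximation only.
    eog_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    eog = pywt.waverec(eog_coeffs, wavelet)[: len(raw)]
    return eog, raw - eog
```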

VII. CONCLUSION

We proposed MusicID, a behavioral biometric modality suitable for smart-headset-enabled IoT environments, based on the human brain's response to music. Using a 4-electrode commodity brainwave headset, we collected brainwave data samples from real users over multiple sessions while they were listening to two forms of music: i) a common English song, and ii) the individual's favorite song. We built Random Forest classifiers and showed that an accuracy of over 98% can be achieved for user identification and over 97% for user verification using both audio stimulations. Our feature importance analysis showed that Alpha and Beta waves have more predictive capability. By investigating the classifier performance at individual electrodes and in combinations, we showed that using two electrodes instead of four drops accuracy by only 2%-5%. While further studies are required with a much larger set of users, the performance of MusicID indicates the feasibility of a non-obtrusive, user-friendly, and secure solution to the problem of entry-point and continuous user authentication in smart-headsets, which are expected to become increasingly popular with the proliferation of IoT.

REFERENCES

[1] "Gartner, Inc.," https://www.gartner.com/newsroom/id/3790965, 2017.
[2] T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, "Impact of artificial gummy fingers on fingerprint systems," in Optical Security and Counterfeit Deterrence Techniques IV, vol. 4677, 2002, pp. 275–290.
[3] F. Monrose and A. D. Rubin, "Keystroke dynamics as a biometric for authentication," Future Generation Computer Systems, vol. 16, no. 4, pp. 351–359, 2000.
[4] M. Frank, R. Biedert, E. Ma, I. Martinovic, and D. Song, "Touchalytics: On the applicability of touchscreen input as a behavioral biometric for continuous authentication," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 136–148, 2013.
[5] D. Gafurov, K. Helkala, and T. Søndrol, "Biometric gait authentication using accelerometer sensor," JCP, vol. 1, no. 7, pp. 51–59, 2006.
[6] S. Hadiyoso, A. Rizal, and S. Aulia, "ECG based person authentication using empirical mode decomposition and discriminant analysis," Journal of Physics: Conference Series, vol. 1367, p. 012014, Nov. 2019.
[7] K. Saeed, New Directions in Behavioral Biometrics. CRC Press, 2016.
[8] "Muse - meditation made easy," http://www.choosemuse.com/, 2017.
[9] "Neurosky Store," https://store.neurosky.com/pages/mindwaves, 2017.
[10] P. M. Thompson, T. D. Cannon, K. L. Narr, T. Van Erp, V.-P. Poutanen, M. Huttunen, J. Lonnqvist, C.-G. Standertskjold-Nordenstam, J. Kaprio, M. Khaledy et al., "Genetic influences on brain structure," Nature Neuroscience, vol. 4, no. 12, p. 1253, 2001.
[11] H. Petsche, P. Richter, A. V. Stein, S. C. Etlinger, and O. Filz, "EEG coherence and musical thinking," Music Perception: An Interdisciplinary Journal, vol. 11, no. 2, pp. 117–151, 1993.
[12] J. Bhattacharya and H. Petsche, "Musicians and the gamma band: A secret affair?" NeuroReport, vol. 12, no. 2, pp. 371–374, 2001.
[13] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, "Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music," Psychophysiology, pp. 293–304, 2007.
[14] M. Iwasaki, C. Kellinghaus, A. Alexopoulos, R. Burgess, A. Kumar, Y. Han, H. Lüders, and R. Leigh, "Effects of eyelid closure, blinks, and eye movements on the electroencephalogram," Clinical Neurophysiology, vol. 116, no. 4, pp. 878–885, Apr. 2005.
[15] "Muse Monitor," http://www.musemonitor.com, 2017.
[16] J. Chauhan, S. Seneviratne, Y. Hu, A. Misra, A. Seneviratne, and Y. Lee, "Breathing-based authentication on resource-constrained IoT devices using recurrent neural networks," Computer, May 2018.
[17] J. Chauhan, J. Rajasegaran, S. Seneviratne, A. Misra, A. Seneviratne, and Y. Lee, "Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 2, no. 4, pp. 1–24, 2018.
[18] A. Buriro, B. Crispo, and M. Conti, "AnswerAuth: A bimodal behavioral biometric-based user authentication scheme for smartphones," Journal of Information Security and Applications, vol. 44, pp. 89–103, 2019.
[19] M. Loulakis, G. Blatsios, C. Vrettou, and I. Kominis, "Quantum biometrics with retinal photon counting," arXiv preprint arXiv:1704.04367, 2017.
[20] E. S. Finn, X. Shen, D. Scheinost, M. D. Rosenberg, J. Huang, M. M. Chun, X. Papademetris, and R. T. Constable, "Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity," Nature Neuroscience, vol. 18, no. 11, pp. 1664–1671, 2015.
[21] W. Chiu, C. Su, C.-Y. Fan, C.-M. Chen, and K.-H. Yeh, "Authentication with what you see and remember in the Internet of Things," Symmetry, vol. 10, no. 11, 2018.
[22] H. Huang, L. Hu, F. Xiao, A. Du, N. Ye, and F. He, "An EEG-based identity authentication system with audiovisual paradigm in IoT," Sensors, vol. 19, no. 7, 2019.
[23] J. Chuang, H. Nguyen, C. Wang, and B. Johnson, "I think, therefore I am: Usability and security of authentication using brainwaves," in Financial Cryptography and Data Security, 2013, pp. 1–16.
[24] G. Bajwa and R. Dantu, "Neurokey: Towards a new paradigm of cancelable biometrics-based key generation using electroencephalograms," Computers & Security, vol. 62, pp. 95–113, 2016.
[25] J. Sohankar, K. Sadeghi, A. Banerjee, and S. K. Gupta, "E-BIAS: A pervasive EEG-based identification and authentication system," in Proc. of the ACM Symposium on QoS and Security for Wireless and Mobile Networks, 2015, pp. 165–172.
[26] X. Zhang, L. Yao, S. S. Kanhere, Y. Liu, T. Gu, and K. Chen, "MindID: Person identification from brain waves through attention-based recurrent neural network," arXiv preprint arXiv:1711.06149, 2017.
[27] I. Jayarathne, M. Cohen, and S. Amarakeerthi, "BrainID: Development of an EEG-based biometric authentication system," in IEMCON, 2016.
[28] M. Zeynali and H. Seyedarabi, "EEG-based single-channel authentication systems with optimum electrode placement for different mental activities," Biomedical Journal, vol. 42, no. 4, pp. 261–267, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2319417018302439
[29] B. Kaur, D. Singh, and P. P. Roy, "A novel framework of EEG-based user identification by analyzing music-listening behavior," Multimedia Tools and Applications, pp. 1–22, 2016.
[30] H. Huang, L. Hu, F. Xiao, A. Du, N. Ye, and F. He, "An EEG-based identity authentication system with audiovisual paradigm in IoT," Sensors, vol. 19, no. 7, p. 1664, 2019.
[31] S. Seneviratne, Y. Hu, T. Nguyen, G. Lan, S. Khalifa, K. Thilakarathna, M. Hassan, and A. Seneviratne, "A survey of wearable devices and challenges," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2573–2620, 2017.
[32] S. Li, A. Ashok, Y. Zhang, C. Xu, J. Lindqvist, and M. Gruteser, "Whose move is it anyway? Authenticating smart wearable devices using unique head movement patterns," in PerCom'16. IEEE, 2016, pp. 1–9.
[33] C. Rogers, A. Witt, A. Solomon, and K. Venkatasubramanian, "An approach for user identification for head-mounted displays," Sep. 2015.
[34] T. Mustafa, R. Matovu, A. Serwadda, and N. Muirhead, "Unsure how to authenticate on your VR headset? Come on, use your head!" Mar. 2018.
[35] C. George, M. Khamis, D. Buschek, and H. Hussmann, "Investigating the third dimension for authentication in immersive virtual reality and in the real world," in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2019, pp. 277–285.
[36] S. Roizenblatt, H. Moldofsky, A. A. Benedito-Silva, and S. Tufik, "Alpha sleep characteristics in fibromyalgia," Arthritis & Rheumatology, 2001.
[37] A. Craig, Y. Tran, N. Wijesuriya, and H. Nguyen, "Regional brain wave activity changes associated with fatigue," Psychophysiology, 2012.
[38] E. C. Anyanwu, "Neurochemical changes in the aging process: Implications in medication in the elderly," The Scientific World Journal, 2007.
[39] M. S. Hossain, K. Huda, S. M. S. Rahman, and M. Ahmad, "Implementation of an EOG based security system by analyzing eye movement patterns," in ICAEE'15, Dec. 2015, pp. 149–152.