HMDs & Wearables to Overcome Disabilities | CHI 2015, Crossings, Seoul, Korea

Tongue-in-Cheek: Using Wireless Signals to Enable Non-Intrusive and Flexible Facial Gestures Detection

Mayank Goel (1), Chen Zhao (2), Ruth Vinisha (2), Shwetak N. Patel (1,2)
(1) Computer Science & Engineering | DUB Group, (2) Electrical Engineering | DUB Group
University of Washington, Seattle, WA 98195 USA
{mayankg, chzhao, vinisha, shwetak}@uw.edu

ABSTRACT
Serious brain injuries, spinal injuries, and motor neuron diseases often lead to severe paralysis. Individuals with such disabilities can benefit from interaction techniques that enable them to interact with the devices and thereby the world around them. While a number of systems have proposed tongue-based gesture detection systems, most of these systems require intrusive instrumentation of the user's body (e.g., tongue piercing, dental retainers, multiple electrodes on chin). In this paper, we propose a wireless, non-intrusive and non-contact facial gesture detection system using X-band Doppler. The system can accurately differentiate between 8 different facial gestures through non-contact sensing, with an average accuracy of 94.3%.

Author Keywords
Tongue-computer interface; Tongue gestures; Wireless signals; Accessibility.

ACM Classification Keywords
H.5.2. Information interfaces and presentation: User Interfaces – input devices and strategies.

INTRODUCTION
Motor neuron diseases such as Amyotrophic Lateral Sclerosis (ALS), as well as serious brain or spine injuries, often cause severe paralysis. Many of these individuals retain strong cognitive abilities, and half of them are between the ages of 16 and 30 years [1]. The debilitating effects of these injuries and diseases impede an otherwise able person from fully participating and engaging with certain aspects of their environment, such as computing technologies. Therefore, there is great value in empowering these individuals by helping them communicate with and control their environments.

Many brain injuries, spinal injuries, and motor neuron diseases that lead to paraplegia do not, however, affect the cranial nerves [3]. These nerves control various facial organs such as eyes, jaws, tongue, and cheeks. While the eyes and tongue have been used extensively by researchers [1,2,3,4,7] for building accessible interaction systems, many other facial organs also offer advantages for robust and non-intrusive gestural interaction. For example, due to their utility in chewing and vocalization, jaws offer low perceived exertion and, unlike the eye, produce no interference with the user's visual activity. Moreover, most of the tongue-based gesture recognition approaches require varied levels of on-body instrumentation, including magnetic piercings in the tongue [1], dental retainers [3], and an array of eight sEMG sensors attached to the user's face and chin [4,7]. The more invasive technologies, such as the tongue piercing, offer very fine-grained tracking of the user's tongue [1] and can be used for many advanced applications such as controlling wheelchairs or handwriting recognition. However, in cases where the user needs simple input and does not require fine-grained tracking of the tongue, sensing can be significantly less intrusive. Moreover, in many cases, these individuals can still use other parts of their body controlled by the cranial nerves, such as their jaws and cheeks [5]. A system that could non-intrusively track movement of all these body parts can provide additional flexibility for users. The users can then easily switch between the different ways of gesture input depending on their preferences and exertion over time.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
CHI 2015, April 18–23, 2015, Seoul, Republic of Korea.
Copyright 2015 ACM 978-1-4503-3145-6/15/04…$15.00.
http://dx.doi.org/10.1145/2702123.2702539

Figure 1. (Left) Tongue-in-Cheek is a non-contact facial gesture detection system. It is integrated into a pair of off-the-shelf headphones. (Right) The motion sensor module and azimuth (bottom-left) and elevation (bottom-right) of the antenna beam pattern.

In this paper, we present Tongue-in-Cheek (Figure 1), a system that uses 10 GHz wireless signals to detect different facial gestures in four directions. It detects the movement of cheeks caused by moving different parts of the mouth:



touching of tongue against the inside of the cheeks, puffing the cheeks, and moving the jaws. Tongue-in-Cheek places three small, inexpensive 10 GHz Doppler radar units around a user's face (Figure 1) and measures the Doppler shifts caused by fine movements of the cheeks. Unlike earlier systems, these sensors do not need to be in contact with the user's skin and require no intricate calibration or placement. Tongue-in-Cheek easily integrates with existing headphones and treats motions from the tongue, cheeks, and jaws as the same so that the user can seamlessly switch between these three body parts. This feature also makes the system adaptable to individuals with different variations of paralysis. For example, the system used by an individual with facial paralysis can seamlessly be used by another individual with tongue paralysis.

We evaluated our system design and gesture set with 3 individuals with facial paralysis, and 2 of the 3 participants felt Tongue-in-Cheek was perfectly suited to their needs. The accuracy of Tongue-in-Cheek was evaluated in a controlled study of 8 participants. The system was tested for eight different gestures performed through three methods: tongue movement, cheek puffing, and jaw movement. Our findings show that Tongue-in-Cheek differentiates between different gestures in four directions: up, down, left, and right, with an average accuracy of 94.3%. The system requires minimal calibration from the user. It simply checks whether the user has worn the headset properly by asking them to perform one gesture in each of the four directions. On average, participants took 10.2 seconds to adjust the headset. This adjustment is not significantly different than adjusting one's headphones. Lastly, we also tested the effectiveness of Tongue-in-Cheek for text entry using EdgeWrite [6] and for playing video games.

DESIGN OF TONGUE-IN-CHEEK
The Tongue-in-Cheek prototype consists of three small microwave motion sensor modules. These sensors are attached to a pair of off-the-shelf headphones such that each sensor covers one of the three sides: left, right, and bottom (Figure 1, Left). The outputs of the sensors are connected to a National Instruments USB-6009 data acquisition unit (DAQ). The unit takes voltage samples at 48 kS/s and digitizes them with 14-bit resolution. The system samples each of the three sensors at 16 kS/s. The output of the DAQ was connected to a computer through USB, where it was processed in MATLAB.

Motion Sensors
The microwave motion sensor modules used in this prototype are X-band bistatic Doppler transceiver modules (Parallax 32213). The module's built-in dielectric resonator oscillator uses a pair of transmitting patch antennas (Figure 1, Right) to transmit a 10.5 GHz waveform with a peak power of 5 mW. Another pair of antennas receives the microwave energy reflected by the objects in front of the module. If the object moves, the received frequency is shifted away from the transmit frequency of 10.5 GHz due to the Doppler effect. This frequency-shifted received signal is mixed with the transmitted signal to obtain a low-frequency voltage representing the amount of frequency shift.

Apart from being inexpensive ($4 USD) and easily available, these sensors offer some unique advantages for our purposes. Considering that they operate in the microwave region, the sensing is immune to the effects of temperature (within normal bounds of temperature variation), acoustic noise, light, etc. Their low output power also prevents any harm to the human body. These modules are not FCC approved but comply with FCC Part 15 Rules and Regulations. Additionally, our system requires sensors that can be placed in close proximity of the face and can detect subtle facial movements. The antennas on these modules are highly directional and have almost uniform beam patterns from -60° to 60°. If the sensors are placed in close proximity of the face, the 60° conical radiation pattern of the antenna in front of the sensors covers the majority of the user's cheeks and chin. The system also needs to ensure that the modules do not "jam" each other, as they are running at the same frequency. It presents a simple UI to guide the user to adjust the modules such that they are close to the user's cheeks and their face occludes the opposing modules in such a way that the side lobes of the modules do not interfere with one another.

Gestures
Tongue-in-Cheek supports gestures in four directions: left, right, up, and down, and two modes: tap and hold. In case of tap gestures, the user moves the body part with which they want to perform the gesture and then retracts to the rest position. For hold gestures, the user "holds" the gesture after a tap instead of retracting back to the resting position. All the (both tap and hold) gestures can be performed either by touching the tongue against the inner side of the cheeks, by moving the lower jaw, or by puffing the cheeks. In case of puffing of cheeks, the up and down gestures require the user to puff the upper lip and lower lip portion, respectively. All these gestures result in a, sometimes subtle, motion of the cheeks.

Figure 2. (A–E) Output signal from each of the three microwave Doppler sensors for no-gesture and 4-directional tap gestures. (F) A hold-down gesture. The highlighted parts in E and F show the difference between a tap and a hold gesture.

We aimed to make our device flexible so that the user could


switch between different interaction modes. This was motivated by two observations: (1) cranial muscles are not trained for gestures and have a tendency to get tired after prolonged use, and (2) different individuals have different conditions. For example, in our evaluation one participant could not use their tongue for interaction but could easily puff the cheeks.

This goal was aided by our choice of sensing approach. The motion sensors used in our prototype sense the direction of the motion. The system is only aware of the direction in which the gesture was performed, not of the way the user performed it. This property lets the user seamlessly switch between the gesture modes without the need to retrain or reconfigure the system.

We carefully selected our gesture set to make sure the gestures are simple and modular. The gestures can be performed in all four directions, and the differentiation of tap and hold means that the gestures can be combined to perform more complicated interactions. The video figure shows one such example for cursor control. The modularity of the gestures helps in overcoming the limitation of discrete gesture detection.
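As an illustration of this modularity, the two modes and four directions compose into eight distinct commands. The mapping below is entirely hypothetical; the video figure demonstrates cursor control but does not specify the bindings:

```python
# Hypothetical binding of the 8 detected gestures to cursor commands:
# taps nudge the cursor one step; holds start a continuous glide.
CURSOR_COMMANDS = {
    (mode, direction): f"{'nudge' if mode == 'tap' else 'glide'} {direction}"
    for mode in ("tap", "hold")
    for direction in ("left", "right", "up", "down")
}

def command_for(mode, direction):
    """Look up the cursor action for a classified (mode, direction) pair."""
    return CURSOR_COMMANDS[(mode, direction)]
```

Because mode and direction are classified independently, any such mapping extends without retraining: the application layer only reinterprets the same eight (mode, direction) pairs.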

The placement of three sensors around the face enables a simple detection of motion in three directions: left, right, and down. A gesture in one direction changes the signal in the corresponding sensor significantly more than the rest. We do not have a sensor in the "up" direction, which adds some complexity. Ideally, the upward motion of the skin would generate a reverse Doppler shift detected by the bottom sensor. However, this simple model does not accommodate the complexity of facial muscle structure; when a user touches their upper lip with their tongue, the entire area around the mouth moves, and all three sensors experience deviations in their low-frequency components. The bottom sensor experiences the most deviation, but not necessarily in the opposite direction as in the case of the "down" gesture (Figure 2). In both the "up" and "down" gestures, the skin moves out (i.e., away from the user's body) more than it moves up. However, our empirical experiments show that the signal is different for each of the four gestures and can be separately classified using standard machine learning techniques.
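The magnitude of these Doppler shifts can be checked with a back-of-the-envelope calculation; the transmit frequency is from the Motion Sensors section, while the skin speed of 0.05 m/s is our assumption, not a measured value:

```python
C = 3.0e8    # speed of light, m/s
F0 = 10.5e9  # transmit frequency of the Doppler modules, Hz

def doppler_shift(v, f0=F0, c=C):
    """Baseband frequency (Hz) for a reflector moving at v m/s along the beam."""
    return 2 * v * f0 / c

# Skin moving at an assumed ~0.05 m/s yields only a ~3.5 Hz shift, which is
# why the gesture information lives in the low-frequency part of the signal.
shift = doppler_shift(0.05)
```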

Figure 2F shows an example of the observed low-frequency signal for a hold-up gesture. This signal for the bottom sensor is very different from the tap-up gesture shown in Figure 2E. In the case of a hold, the magnitude does not oscillate because the muscles do not retract to the resting position immediately.

Figure 3. Tongue-in-Cheek’s algorithm

Algorithm
The output of the microwave motion sensors is a baseband signal representing the amplitude of the Doppler shift. An analog-to-digital converter (ADC) first digitizes the output from the motion sensors. Then a third-order Butterworth filter low-pass filters the data up to 15 Hz (Figure 3). After this, we use two different processing pipelines to differentiate between tap and hold, and between the four directions. The outputs from both classifiers are combined in the end to form eight different gestures: tap-left, tap-right, tap-up, tap-down, hold-left, hold-right, hold-up, and hold-down.
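A minimal sketch of this filtering stage (SciPy-based; the zero-phase `filtfilt` application is our choice, as the paper does not specify how the filter is applied):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 16000  # per-sensor sampling rate from the paper (16 kS/s)

def lowpass_15hz(baseband, fs=FS, order=3, cutoff=15.0):
    """Third-order Butterworth low-pass at 15 Hz, applied zero-phase."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, baseband)

# A 3.5 Hz "gesture-like" component passes; a 50 Hz component is removed.
t = np.arange(0, 2, 1 / FS)
raw = np.sin(2 * np.pi * 3.5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = lowpass_15hz(raw)
```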

Tap vs. Hold Classification
Figure 2 shows that the signal looks substantially different for the tap and hold gestures. We use a support vector machine (SVM) classifier here, and the features are calculated across 1-second windows with a 0.8-second overlap. The overlap of 0.8 seconds assumes that the user will not perform two gestures within 0.2 seconds. We first use mean filtering to downsample the low-pass-filtered signal into 20 samples. These 20 data points from each of the three sensors give us a total of 60 features for the SVM-based model. This particular processing pipeline does not do any segmentation; it simply calculates features across the window. The second processing pipeline, which classifies the gesture into one of the four directions, performs the segmentation.
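The windowed feature extraction can be sketched as follows. The function names and streaming arrangement are our own; the window length, overlap, and 60-feature count are from the text:

```python
import numpy as np

FS = 16000     # per-sensor sampling rate (16 kS/s)
WIN = FS       # 1 s feature window
HOP = FS // 5  # 0.8 s overlap between windows => 0.2 s hop

def window_features(window):
    """window: (3, WIN) array -- one low-pass-filtered window per sensor.
    Mean-filter each sensor's window down to 20 samples and concatenate,
    giving the 3 x 20 = 60 features fed to the SVM."""
    down = window.reshape(3, 20, WIN // 20).mean(axis=2)
    return down.ravel()  # shape: (60,)

def sliding_windows(stream):
    """Yield 60-feature vectors over a (3, n_samples) stream of sensor data."""
    for start in range(0, stream.shape[1] - WIN + 1, HOP):
        yield window_features(stream[:, start:start + WIN])
```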

Direction Classification
Figure 2 shows that while the differentiation between left, right, and down is very clear, the case of down vs. up is not as straightforward. Therefore, we cannot use a rule-based system and instead use decision trees to classify between the four directions. We also added a class to represent when no gesture is being performed. This helped us in robust segmentation of real-time data. We calculate 3 features for each gesture: the minimum, maximum, and variance of each 1-second window. The variance is the most important feature in this classifier because it is clear from Figure 2 that the amount of change in the signal is a big differentiating factor. The minimum and maximum features help in overcoming the noise in the data and provide dynamic thresholding to the decision trees. All the features are calculated over 1-second windows with a 0.8-second overlap.
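The per-window direction features can be sketched as below. Computing them per sensor is our reading; the paper does not state whether the three features are computed per sensor or on a combined signal:

```python
import numpy as np

def direction_features(window):
    """window: (3, n) array -- one 1 s low-pass window per Doppler sensor.
    Minimum, maximum, and variance per sensor: 9 features per window. A
    decision tree over these features separates left/right/up/down plus
    the added no-gesture class."""
    return np.concatenate([window.min(axis=1),
                           window.max(axis=1),
                           window.var(axis=1)])
```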

QUALITATIVE EVALUATION
We evaluated our choice of sensing mechanism and gesture set by demonstrating Tongue-in-Cheek to three individuals with neuromuscular conditions. Two of the participants tried the system for almost 10 minutes each and commented on the performance and the design. The third participant could not use the system personally and commented on the overall design and utility. P1 could not open his jaw far enough to instrument the inside of the mouth easily. Another side effect of his condition was tongue fasciculation – minor but constant involuntary muscle fluctuations across the tongue. These fluctuations were large enough to make tongue tracking hard, but the cheeks were relatively stable. Therefore, Tongue-in-Cheek was a relatively better approach for him as compared to [1,3]. P2 preferred switching between different modes and said, "the fact that you support several avenues of interaction is great." P3 actually preferred using a magnet attached to her tongue for more fine-grained control, but saw the appeal for people not wanting to place a magnet in their mouth [1]. Tongue-in-Cheek could be useful for a subset of the population due to its flexibility and non-intrusiveness.

TECHNICAL EVALUATION & RESULTS
To evaluate the gesture detection accuracy, users had to perform repeated sets of gestures, and our participants from the qualitative evaluation had limited availability; hence, we recruited eight participants (3 female) who did not have any neuromuscular condition. They all used the same pair of modified headphones. The initial adjustments were not very different from the kind a user makes when using a pair of headphones, i.e., adjusting the size of the headband and adjusting the angle and reach of the microphone.

For each gesture class, participants had to perform 20 gestures. The order of the gesture classes was random. The participants were told that they could perform the gestures by either using their tongue, moving their lower jaw, or by puffing their cheeks.

The system performs leave-one-participant-out cross validation; data from the same participant is never used in both training and testing. The models are therefore global and do not need to be adjusted for each user.
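This protocol can be sketched as follows. The helper name is ours, and the `fit`/`predict` callables stand in for the paper's SVM and decision-tree models:

```python
import numpy as np

def leave_one_participant_out(X, y, participant_ids, fit, predict):
    """Leave-one-participant-out cross validation: for each participant,
    train on all other participants' windows and test only on theirs, so
    no participant's data appears in both training and test sets."""
    accuracies = []
    for pid in np.unique(participant_ids):
        held_out = participant_ids == pid
        model = fit(X[~held_out], y[~held_out])
        accuracies.append(np.mean(predict(model, X[held_out]) == y[held_out]))
    return float(np.mean(accuracies))
```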

The system was able to correctly predict the direction of the gesture with 94.3% accuracy. The confusion was greatest between the up and down gestures. This was expected since both of these gestures depend heavily on the variation in output of the sensor placed under the user’s chin. The accuracy for differentiating between tap and hold was 97.4%, and the segmentation accuracy for the system was 93.6%.

Tongue-in-Cheek is agnostic to the source of movements, i.e., tongue, cheek, or jaw, and we did not formally evaluate how the system performed for these different movements. The participants were told that they could use the three motions independently.

APPLICATIONS
We applied Tongue-in-Cheek-based input to two different applications. We recruited three of our original eight participants to evaluate the applications (see video figure).

EdgeWrite
Inputting text on computing devices is an important capability. We integrated EdgeWrite [6] into our system to test whether Tongue-in-Cheek could be used for text entry. Each participant was asked to enter three simple English-language phrases. All three participants were able to correctly enter the three phrases, with an average text entry rate of 9 wpm.

Video Games
In order to test the responsiveness of our gesture detection system, we asked the participants to play two different games: Pac-Man and Contra. All participants were able to play the games and expressed satisfaction with the system's responsiveness.

These are just two example applications for Tongue-in-Cheek. Our evaluation has shown that the system can reliably detect gestures in four directions and can be useful for many other general-purpose applications.

DISCUSSION AND LIMITATIONS
Systems like Tongue-in-Cheek that are designed for users with neuromuscular conditions face an important challenge: in most cases, such systems require the user to change their lifestyle and go through training. Hence, the users should be able to use the device properly and actually enjoy using it. Tongue-in-Cheek detects modular and directional gestures with no contact with the body. However, our interviews highlighted that different individuals have different needs and preferences. For example, Tongue-in-Cheek is less intrusive, but it is more visible than [1,3].

Near-infrared (NIR) sensors can also be used for proximity and motion sensing. Apart from being susceptible to dust and light, NIR is less immune to the physical properties of the reflecting surface, and hence might work differently for bearded and clean-shaven users. On the flip side, RF is more prone to ambient motion and headset movement. While this limitation can be largely countered with an inertial measurement unit, we believe that our algorithms will work similarly for both sensors, and the final decision on sensors will depend on the user's preference.

CONCLUSION
Motor neuron diseases such as ALS, as well as serious brain or spine injuries, often result in severe paralysis. Patients can benefit from systems that allow them to control their environment non-intrusively. We present a facial gesture detection system that robustly detects various facial gestures in four directions. These gestures can be performed using the user's tongue, cheeks, or jaws. This capability allows the user to seamlessly switch between different gesture modalities and potentially feel less exerted.

REFERENCES
1. Huo, X., Wang, J., and Ghovanloo, M. A Magneto-Inductive Sensor Based Wireless Tongue-Computer Interface. Neural Systems and Rehabilitation Engineering 16, 5 (2008).

2. Lukaszewicz, K. The Ultrasound Image of the Tongue Surface as Input for Man/Machine Interface. Proc. INTERACT ’03.

3. Saponas, T.S., Kelly, D., Parviz, B.A., and Tan, D.S. Optically sensing tongue gestures for computer input. Proc. UIST '09.

4. Sasaki, M., Arakawa, T., Nakayama, A., Obinata, G., and Yamaguchi, M. Estimation of tongue movement based on suprahyoid muscle activity. Proc. IEEE Engineering in Medicine and Biology Society. 2013.

5. Wilson-Pauwels, L., Akesson, E.J., and Stewart, P.A. Cranial Nerves: Anatomy and Clinical Comments. 1988.

6. Wobbrock, J.O., Myers, B.A., and Kembel, J.A. EdgeWrite: A Stylus-Based Text Entry Method Designed for High Accuracy and Stability of Motion. Proc. UIST 2003.

7. Zhang, Q., Gollakota, S., Taskar, B., and Rao, R.P.N. Non-Intrusive Tongue Machine Interface. Proc. CHI'14.
