Source: athena.ecs.csus.edu/~vossougj/Sonic_Vibe_Documentation.pdf

CALIFORNIA STATE UNIVERSITY SACRAMENTO

SENIOR DESIGN

Sonic-Vibe Design Documentation

Eric Carmi, Yakira Johnson, Ou Saejao, Jason Vossoughi

March 16, 2016


Sonic-Vibe Design Documentation

Eric Carmi, Yakira Johnson, Ou Saejao, and Jason Vossoughi

Abstract—The Sonic-Vibe is a device to detect the direction of sound. This paper discusses the problem statement, the design idea, the work breakdown structure, the associated risks, and the design documentation of this device. The design documentation includes a user manual and a description of the prototype, as well as any schematics and diagrams associated with this project.

Index Terms—Wearable Device, Hearing Impairment

I. INTRODUCTION

The Sonic-Vibe aims to address the societal issue of hearing impairment and the dangers it can pose to those who experience it. While other solutions have been offered in the past, the Sonic-Vibe takes a unique approach to this problem with its design idea. The Sonic-Vibe consists of three main parts: the headband, the waistband, and the User Interface, all of which have been further broken down and distributed equally among the members of this team. The result is a device that detects the direction of sound and translates it into vibrations around the waist to notify a deaf or hearing-impaired person of anything in their surroundings that they cannot hear themselves.

II. PROBLEM STATEMENT

Hearing impairment and deafness are prevalent problems in our society. Hearing loss in general is the second-most common health issue, and more people suffer from it than from Alzheimer's, epilepsy, diabetes, and Parkinson's combined [6]. Since 2000, the number of people suffering from hearing loss has doubled in the United States and increased 44 percent on a global scale [6]. In addition to natural causes, there are many environmental causes of hearing loss, such as loud noise and even some medication. Another common cause of hearing impairment, though temporary, is wearing headphones. No matter the cause, being hearing impaired can render one unaware of their surroundings and therefore increase the odds of potential danger.

A. Noise-Induced Hearing Loss

Twenty-six million Americans between the ages of 20 and 69 are hearing impaired due to exposure to loud noise [8]. Table I shows the decibel levels of common sounds.

Fig. 1. Workers Reporting Hearing Trouble by Industry in 2010

Frequent prolonged exposure to noises higher than 85 decibels can cause gradual hearing loss [1]. This equates to the sound of a busy road or a diesel truck in close proximity. Millions of Americans are continually exposed to noises much louder than that while at work or even while listening to music or watching TV. In fact, several professions require workers to operate loud equipment, and Figure 1 shows that, as a result, over 50 percent of these workers experience noise-induced hearing loss. Those with noise-induced hearing loss often lose the ability to hear high-frequency sounds [1].

TABLE I
SOUND SOURCES AND THEIR RESPECTIVE SOUND LEVELS

Sound Source (distance)      Level (dB)
Firecracker                  180
Gunshot                      167
Car Stereo                   154
Kids Toy                     150
Jet Aircraft (50m)           140
Pain Threshold               130
Sports Events                127
Rock Concert                 120
Chainsaw                     110
Movie                        94
Diesel Truck (10m)           90
Busy Road (curbside)         80
Vacuum (1m)                  70
Conversational Speech        60
Average Home                 50
Quiet Library                40
Quiet Bedroom (night)        30
Background in TV Studio      20
Rustling Leaves (20m)        10
Hearing Threshold            0


Fig. 2. Firefighter Cause of Death, 2014

B. Occupational Hearing Impairment

As mentioned above, workers in many professions are regularly exposed to loud equipment and other noise that can cause hearing impairment. Often this is an unavoidable part of the job, so employers have to look for other ways to prevent permanent hearing impairment in their employees, usually through ear protection such as ear plugs, ear muffs, and other protective gear. While using equipment to protect hearing has some advantages, it also has major disadvantages. For example, construction workers, among others, are constantly surrounded by potential dangers such as moving vehicles and parts that they cannot hear while wearing such ear protection. Firefighters find themselves in a similar situation. Figure 2 shows that in 2014, a significant number of firefighter deaths were caused by falling, structural collapse, or being struck. These deaths could arguably have been avoided if the firefighters had been able to hear the oncoming danger and where it was coming from; however, the combination of wearing helmets and masks and being surrounded by the noise of fire makes it difficult to hear the sound of falling debris. Soldiers, among workers in other professions, experience similar issues.

C. Medicine-Induced Hearing Loss

There are a variety of medicines that can cause hearing loss. Some of the more commonly consumed ototoxic drugs include aspirin, in high doses, and non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen. There are even some antibiotics and cancer-treating drugs that can affect hearing as well. Medicine-induced hearing loss occurs quickly, and once it is detected, the only course of action is to stop the drug, if possible, since there is no cure for hearing loss. The resulting drug-induced ototoxicity is usually temporary, but it can also be permanent. As with noise-induced hearing loss, people with drug-induced hearing loss most commonly lose the ability to hear high frequencies [2].

Fig. 3. Death Reports Involving Headphones, 2004-2011

D. Pedestrians and Headphones

Headphone use is becoming more and more popular in today's society. This can be especially problematic when it comes to pedestrians and cyclists. Headphone technology has even improved to include features such as noise cancellation, which essentially renders the wearer deaf and, as a result, much less aware of their surroundings. In fact, pedestrians wearing headphones in big cities have contributed to an increasing number of pedestrian-versus-car accidents [9]. These accidents are most likely due to the user listening to music and being unaware of what was going on around them. According to a study conducted by the University of Maryland School of Medicine and the University of Maryland Medical Center in Baltimore, headphone-related pedestrian injuries more than tripled in six years [9]. Figure 3 shows the 2004 to 2011 statistics of 116 headphone-related pedestrian accidents reviewed during this study. At least 29 percent of those could have been avoided if the victim had been able to hear the warning sounded before the crash. With the increasing trend of pedestrians wearing headphones, the number of victims has increased and is likely to continue to do so.

E. Dangers of Hearing Loss

High frequencies are often the first to go in people whose hearing has become impaired; they generally have trouble hearing frequencies between 2000 and 4000 Hertz [1]. Airplanes and telephones fall in this frequency range, as do most alarms and sirens [7]. As a result, several deaf and hearing-impaired people have died in house fires because the smoke alarms were unable to wake them. There have also been a number of deaf people who walked on train tracks under the assumption that the vibrations would warn them of an oncoming train; the vibrations went unnoticed, and they were struck and killed. With the increasing number of distracted drivers on the road today, it is also becoming less and less safe for those who cannot hear approaching cars to cross the street. It is clear that hearing impairment opens one up to a wide variety of dangers. Unfortunately, deaf and hearing-impaired people can also find themselves targeted for robberies and other attacks, since they are unable to hear someone approaching from behind. Being deaf or hearing impaired can subject people to danger that could have been avoided if they were able to hear normally.

F. Existing and Previous Solutions

While technology exists that can improve hearing, less than 30 percent of people over 70 who could benefit from using hearing aids have ever used them [8]. That number drops to 16 percent for those aged 20 to 69 [8]. Even with the use of hearing aids, people with noise-induced hearing impairment would never make a full recovery. A wearable technology that alerts the deaf and the hearing impaired to loud sounds and the direction from which they are coming would benefit them greatly and help them avoid all of the aforementioned dangers that they may face without it.

There have been previous devices that researchers have designed for the purpose of aiding hearing-impaired people in their daily lives. The devices typically consist of microphones for detecting environmental acoustics, a tactile sensor for translating an acoustic signal into a sensation on the body, and other electronics for power and signal processing. These devices include Automatic Danger Detection, an Electrotactile Sensor, and an Alert System for the Deaf-Blind.

Uvacek and Moschytz presented a system they called TIPS (Tactile Identification of Pre-classified Signals) [12]. They discussed algorithms for determining characteristics of common signals such as a doorbell, car alarm, fire alarm, and speech, for the purpose of translating this information into tactile stimulation of the hand, as shown in Figure 4. They mentioned a few problems encountered in the implementation of their wearable device, all of which were the result of using an electrode for the tactile stimulation.

Fig. 4. TIPS Device

Frank A. Saunders published an article in which he describes a device used to localize sound [10]. The device consists of a microphone on each side of the user's head, along with electrodes that transmit electrical pulses to the head according to the direction, intensity, and frequency of sound in the environment. Saunders' device can be seen in Figure 5. He reported good results from two people who wore the device for two days; however, the device is clearly very cumbersome for the wearer.

Lastly, Damper and Evans developed something similar to the TIPS system [11]. They defined certain signals of interest (doorbell, telephone, smoke alarm) and distinguished their auditory characteristics in order to alert a deaf man of a signal's occurrence. Their model is a slightly improved version of the TIPS device because they use small motors to alert the user. The motor used for tactile sensation is designed with an out-of-balance weight on the motor's shaft, which makes the sensation stronger for the user. Damper and Evans installed and tested this system in the home of a deaf man, and other than occasional problems with kitchen sounds being mistaken for the doorbell, the man reported that the system worked splendidly.

G. A New Solution

Our goal is to offer an affordable solution to the aforementioned problems that hearing impairment can cause. We will improve upon previous solutions while taking a slightly different approach. Our device will be an example of a modern wearable technology. It will consist of a band that stretches across the back of the head with an array of small microphones. The headband will be connected to a waistband that has an array of vibrating motor discs, which will alert the wearer to an increased level of sound and the direction from which it is coming. Manual and automatic calibration will be controllable via Bluetooth communication from a phone. A rough sketch of the device is shown in Figure 6.


Fig. 5. Saunders Electrotactile Sensor

Fig. 6. Sonic-Vibe Sketch

III. DESIGN IDEA CONTRACT

The Sonic-Vibe is a device that detects the direction of sound. The proposed design needs to provide a solution to the lack of awareness of one's surroundings that is created by natural hearing impairment or by wearing headphones as a pedestrian. This design is required to capture sound and emit a tactile sensation that alerts the user to the sound and the direction it is coming from. This solution needs to be portable, reliable, and capable of being used safely and easily. A wide variety of design constraints come with designing any type of project, but there is much more to consider when designing a wearable device. All constraints have been carefully considered and addressed in the design of the Sonic-Vibe.

A. Design Constraints

Many requirements will limit the performance of this device. For example, the device needs to be small so as to remain unobtrusive to its user. It must also be lightweight and portable. These features, especially the portability, will limit the amount of power that can be supplied to the device. Power is important because this device needs to be reliable for its hearing-impaired user; if it were to lose battery quickly, it would put the user in danger. A microcontroller will have to be chosen that can manage multiple components at the same time without draining too much power. Another performance constraint is possible interference between the microphones and the rest of the electronics. This can be addressed by separating the two: putting the microphones on the headband and the rest of the electronics around the waist.

As designers, we are limited to keeping the project within what we can afford to implement ourselves or what we can get funding from a sponsor to implement. We have not yet looked for a sponsor; however, we have created a budget to keep the cost of our design around $200 so that our finished product may be affordable to the end user. What we lack in expensive parts, we will make up for by putting in the time it takes to transform cheaper parts into an attractive, reliable, working product that is also affordable.

Since this Senior Design class ends in May of 2016, we are left with less than eight months to produce a deployable prototype of our product. In addition, each of us is taking other courses and/or working, which limits the amount of time per week that can be devoted to this project. To manage this time constraint, we have broken the project into four parts, with each individual designer responsible for one. We have also agreed to dedicate a manageable 7 to 10 hours per week to this project, and we adhere to a weekly schedule that is created at the beginning of each week.

The scope of our project includes four components: a headband with microphones to detect sound from each direction, a waistband with buzzers corresponding to the direction of oncoming sound, a phone app to manage the settings of the headband and waistband, and the Artificial Neural Network. The cost, time, and scope constraints directly affect each other, so altering any aspect of the scope would in turn alter the cost of our project and the time it would take to complete it.

In addition to the aforementioned constraints, there are also constraints associated with the environment and safety. The Sonic-Vibe is a wearable device that will be composed of a variety of electronics. While the waistband can be covered with clothes, the headband is left exposed to the outside environment. Weather conditions such as rain, fog, or humidity could damage the device or harm its user. Something else to consider in the design of this device is the possibility of the user sweating. If sweat were to come into contact with the device, it could cause a shock that could harm the user. Because of this, the device will have to be designed accordingly and more than likely weatherproofed.

B. Design Idea

There is always more than one way to design an engineering solution to a problem. Some problems may be solved with a quick and easy fix, and others need intricate solutions. With regard to the problem of natural hearing impairment and pedestrians wearing headphones, there are a variety of solutions for each component. Broadly speaking, the system must be capable of capturing sound and emitting a tactile sensation to signify where the sound is coming from. It must also be able to calibrate to the current environment or to the user's preferred threshold.

1) Sound Capture: Capturing sound must inevitably be done with microphones. Different types of microphones are available, but they must be relatively small in order to remain unobtrusive to the wearer. Electret microphones are small and cheap, but their quality will be lacking compared to more expensive microphones. More expensive microphones may work too, but they must not require too much power to operate.

A stereo microphone setup (two mics) would require the use of very accurate signal processing techniques in order to determine the direction of a sound. The human auditory system uses the difference in a sound's time of arrival at each ear to distinguish where the sound comes from. Accomplishing this on a digital system has still not been perfected, so this route would be difficult.
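The time-of-arrival idea above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the Sonic-Vibe implementation; the brute-force correlation, the assumed far-field geometry, and the sample rate and microphone spacing in the comments are all assumptions for illustration.

```python
import math

def cross_correlation_delay(left, right, max_lag):
    """Return the lag (in samples) that best aligns `right` with `left`.

    A positive lag means the sound reached the left microphone first.
    Brute-force cross-correlation; fine for short frames.
    """
    best_lag, best_score = 0, float("-inf")
    n = len(left)
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                score += left[i] * right[j]
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag

def delay_to_angle(lag_samples, fs_hz, mic_spacing_m, c=343.0):
    """Convert a sample delay into an angle of incidence (degrees).

    Uses the far-field approximation sin(angle) = c * delay / spacing.
    """
    x = max(-1.0, min(1.0, c * (lag_samples / fs_hz) / mic_spacing_m))
    return math.degrees(math.asin(x))
```

For example, a tone that reaches the right microphone three samples after the left one yields a lag of 3, which `delay_to_angle` then maps to a bearing off the array's axis.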

A microphone array would not require determining phase differences between microphones; each microphone in the array could be fitted with a cone to isolate lateral sounds, which means that only the intensity would be of interest.
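As a sketch of this intensity-only approach, each cone-isolated microphone's frame can be reduced to an energy value and the loudest one picked; the sector angles and frame data are made-up examples, not values from the paper.

```python
def loudest_direction(frames, sector_degrees):
    """Return the bearing of the loudest cone-isolated microphone.

    `frames` is a list of equal-length sample lists (one per mic);
    `sector_degrees` gives each mic's facing direction in degrees.
    """
    def rms(frame):
        # Root-mean-square energy of one short frame.
        return (sum(s * s for s in frame) / len(frame)) ** 0.5
    energies = [rms(f) for f in frames]
    return sector_degrees[energies.index(max(energies))]
```

With three mics facing 0, 90, and 180 degrees, the frame with the highest RMS energy decides which waistband sector should vibrate.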

2) Tactile Sensors: Tactile sensors are electronic components that translate a signal into a sense of touch. One method of tactile stimulation of the body is with electrodes: a small voltage applied across the electrode will be felt by someone wearing it. This can cause problems if the wearer sweats, since sweat can cause a voltage to form across the electrode, resulting in a small shock.

Another option for a tactile sensor is a small motor with an intentionally loosened shaft. This type of motor produces a stronger vibration than a motor designed for conventional use. It has an advantage over electrodes because the sensory stimulation is done mechanically instead of electrically.

3) Processor: A CPU or a microcontroller will be necessary to interface the method of sound capture with the tactile stimulation. A single-board computer such as the Raspberry Pi would be more than sufficient to accomplish all the necessary tasks, but it would be too slow to start up; a microcontroller would be a better option. A variety of microcontrollers are available: Arduino, Propeller, etc. The microcontroller of choice will have to meet the requirements of capturing sound and translating it to tactile stimulation in a small amount of time. The Arduino is not equipped to do parallel processing in a convenient way, whereas the Propeller has 8 cores available. A multiple-core processor may be necessary for the lag time between audio capture and sensory stimulation to be negligible.

4) Controlling Device Settings: The user of our wearable device must be able to initiate a calibration mode. An automatic calibration mode would consist of an algorithm that produces a sound intensity threshold level based on the wearer's current environment. A manual calibration mode would consist of the wearer setting the decibel minimum that would trigger a sensory stimulation. Ideally, both automatic and manual calibration can be made available as features of the device.
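One way automatic calibration could work is sketched below: measure the ambient RMS level over a few seconds and set the alert threshold some margin above it. The 10 dB margin and the unit reference level are assumptions for illustration, not values chosen in this design.

```python
import math

def ambient_threshold_db(samples, margin_db=10.0, ref=1.0):
    """Estimate an alert threshold from a stretch of ambient audio.

    Returns the ambient RMS level (in dB relative to `ref`) plus a
    safety margin; sounds louder than this would trigger a vibration.
    The 10 dB default margin is an assumed value.
    """
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    # Floor the RMS so silence does not produce log10(0).
    ambient_db = 20.0 * math.log10(max(rms, 1e-12) / ref)
    return ambient_db + margin_db
```

Manual calibration would simply overwrite the returned threshold with the wearer's chosen decibel minimum.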

One option for controlling the device's settings is to include buttons and switches on the belt buckle. Another option is to have a Bluetooth module that communicates with a phone application, which sends control signals to the microcontroller. Having controls on the belt buckle would be an easier solution than a phone application, but it may also take away from a potentially appealing design. A phone application would also enable the device's features to be expanded through software updates alone. For example, setting the intensity threshold manually could be done in a graphically interactive way, or a map could be created on the user's phone corresponding to the current sound environment. Ultimately, though, vanity should not be placed ahead of functionality.
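The control signals from the phone could be as simple as short text commands over the Bluetooth serial link. The command names below (`MODE`, `THRESH`) are hypothetical, invented for this sketch; the paper does not define a command protocol.

```python
def handle_command(line, settings):
    """Parse one text command from the phone app (names are hypothetical).

    Supported: 'MODE AUTO', 'MODE MANUAL', 'THRESH <dB>'.
    Updates `settings` in place; returns True if recognized.
    """
    parts = line.strip().upper().split()
    if parts[:1] == ["MODE"] and parts[1:] in (["AUTO"], ["MANUAL"]):
        settings["mode"] = parts[1].lower()
        return True
    if parts[:1] == ["THRESH"] and len(parts) == 2:
        try:
            settings["threshold_db"] = float(parts[1])
            return True
        except ValueError:
            return False
    return False
```

Keeping the protocol textual makes it easy to extend through app updates alone, as suggested above.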

IV. RESOURCES

The resources our team will need range from lab equipment to a tailor to create the waistband and headband. We also need input from people who experience our societal problem on a daily basis. This can include anyone who has suffered from a hearing disorder as well as students who love to blast music while walking around. Lab equipment will be used to troubleshoot and initialize the hardware components of our project. We will also need software to compile and write our code for the processor.

A. Punch List

Our device involves three main features: the headband, the waistband, and the User Interface.

1) Headband: The headband will need to be the most fashionable feature. It is the only feature that will be visible and not covered up by a T-shirt. The headband will need to feature an array of microphones, either at the same height or offset. Figure 7 shows the microphones we decided to use for our prototype. The headband needs to be made of a comfortable material that will not bother or irritate the user. The material will need to have an elastic property to ensure that it stays on the user's head. It will also need to be adjustable or offered in various sizes, so the product can accommodate users with differently sized and shaped heads. The headband must house multiple microphones internally. The microphones need to be lightweight and able to be sewn into the headband with comfort in mind.

We will need to implement two to five small microphones in the sleek headband. They will all need to be identical microphones, which ensures that the sensitivity levels of each are the same. The microphones essentially need to be small and extremely lightweight so the headband remains comfortable. The microphones will need to connect to the microcontroller through an audio jack, which will require some type of connection between the multiple microphones and the microcontroller. Technically speaking, each microphone must weigh less than half an ounce, and its diameter should be around or less than twenty millimeters. An Artificial Neural Network will be needed to implement machine hearing: the ability for the microcontroller to identify the source of the sounds that the microphones pick up.

2) Waistband: The waistband will be located around the waist, near the belly button of the user. Similar to the headband, it will need to be comfortable and lightweight. The processor and battery will be located on some type of buckle connected to the waistband. The headband and waistband will need to be in constant communication via wire. The power supply, which will consist of a lightweight battery, must be able to fit inside the buckle.

Fig. 7. Microphone

The buckle will connect the straps of the waistband and contain the microcontroller and power supply. The power supply will be a DC battery big enough to power the microcontroller, microphones, and vibrators. The plan is to 3D print the buckle. The buckle needs to be big enough to house the microcontroller and battery and to have enough room for ports to connect the microphones and vibrators. It should be made of a lightweight plastic that is also sturdy.

We plan to use a microcontroller that is small and capable of processing the information from all of the devices. A good option that the whole group is familiar with is the Propeller chip. The chip has 40 pins, which will be plenty for our device. We would like a cog for each function: one cog devoted to sending signals to all the vibrators, another cog to read the signal coming in from the microphones, and one more cog to send and receive information through Bluetooth to the phone app. Using a parallel-processing chip with multiple cogs will allow extremely low processing time and virtually no lag while the multiple features communicate with each other. Another option is to use a small version of an Arduino or a similar controller that would process everything together. The microcontroller will need ports for connecting the vibrators and microphones. The processor will also need a Bluetooth connection, which will allow the device to receive controls from the user. We plan to use ten vibrating mini motor discs. The discs must be smaller than a dime, about 10 mm. Their voltage can vary between 3 and 5 volts, with a current draw anywhere from 40 to 100 mA. They will be powered and controlled by the microcontroller.
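The current figures above allow a rough battery-life estimate. The sketch below is a back-of-the-envelope calculation only: the 70 mA typical motor draw is the midpoint of the 40 to 100 mA range quoted above, while the controller draw and the 20% vibration duty cycle are assumptions not taken from this document.

```python
def battery_runtime_hours(capacity_mah, motor_count=10, motor_ma=70.0,
                          controller_ma=100.0, duty_cycle=0.2):
    """Rough battery-life estimate for the buckle electronics.

    Motor current is scaled by an assumed vibration duty cycle, since
    the discs only buzz when a loud sound is detected; the controller
    draw is a placeholder assumption.
    """
    load_ma = controller_ma + motor_count * motor_ma * duty_cycle
    return capacity_mah / load_ma
```

Under these assumptions, a 2400 mAh battery would run the buckle for about ten hours, which is the kind of estimate needed to size the battery that must fit inside the 3D-printed buckle.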


Fig. 8. Vibrating Mini Motor Disc

3) Phone App: Our device will need to connect to a smartphone app. The app must allow the user to adjust different variables and sensitivity levels. It needs to be available on most smartphones and easy for any end user to use.

The phone app will require a Bluetooth connection to transmit and receive information. A Bluetooth module for the microcontroller will also be needed that is compatible with the processor we decide to use. Like everything else, it must be lightweight and small; being small will be one of the most important aspects of the Bluetooth device we use.

V. WORK BREAKDOWN STRUCTURE

The aforementioned features of our project (the headband, the waistband, and the user interface) have been broken down further into individual tasks. These individual tasks were then assigned to a team member.

A. Headband

The headband portion of the wearable is primarily responsible for capturing sound from the environment. It is essential for the headband to be comfortable and easy to put on or take off. The headband has a number of microphones for detecting sound from the wearer's surroundings. On the software side, certain sounds need to be classified and saved as reference signals to compare with incoming signals. Also, with a difference in the positions of the microphones, it is possible to determine the direction a sound comes from.

B. Microphone Selection and Placement

In order for the wearable to be comfortable and appealing, the microphones should not be too large. Depending on the algorithms used to translate the direction of a sound into tactile sensation, the microphones could be either uni-directional or omni-directional. With uni-directional microphones, the intensity of sound could be the variable that determines whether a vibration occurs on the waist. Regardless of the type of microphone used, another feature of the device is classifying commonly heard sounds like cars, trains, and sirens. The classification process involves determining the characteristics of these sounds and a way to compare any given sound in the environment to the predetermined sounds stored on the wearable device.
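The comparison of an incoming sound against the stored references could be sketched as a normalized-correlation match. This is an illustrative baseline, not the Artificial Neural Network the paper proposes; the 0.7 similarity floor and the toy reference frames are assumptions.

```python
def classify(sound, references):
    """Match an incoming frame against stored reference signals.

    `references` maps a label ('siren', 'car', ...) to an equal-length
    sample list. Returns the best-matching label, or None if nothing
    exceeds a fixed similarity floor.
    """
    def norm_corr(a, b):
        # Cosine similarity between two sample frames.
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        if na == 0 or nb == 0:
            return 0.0
        return sum(x * y for x, y in zip(a, b)) / (na * nb)

    best_label, best_score = None, 0.7  # similarity floor (assumed)
    for label, ref in references.items():
        score = norm_corr(sound, ref)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A real implementation would compare spectral features rather than raw samples, but the lookup structure is the same: a dictionary of predetermined sounds stored on the device.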

C. Determining Sound Direction

With at least two microphones on the headband, it is possible to perform a mathematical operation to determine the phase difference between two similar signals.

θ = cos⁻¹( (x · y) / (‖x‖ ‖y‖) )    (1)

This phase difference is significant because it is a function of the angle that a sound source makes with the microphones. In equation 1, θ is an angle that results from the formula relating the inner product of two vectors to the cosine of the angle between them, but it does not necessarily have a physical meaning like the angles in Figure 10. The picture represents the two-dimensional angle formed between the sound source and two microphones. In this particular configuration, the right angles that are formed allow us to determine how the angles φL and φR change when the point P is translated vertically.
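Equation 1 is straightforward to compute on sampled signal frames; the sketch below is a direct transcription, with the example vectors made up for illustration.

```python
import math

def vector_angle_deg(x, y):
    """Angle between two equal-length signal frames via equation (1):
    theta = arccos( (x . y) / (||x|| ||y||) ), in degrees.
    """
    dot = sum(a * b for a, b in zip(x, y))
    nx = sum(a * a for a in x) ** 0.5
    ny = sum(b * b for b in y) ** 0.5
    # Clamp against floating-point drift before taking arccos.
    cos_t = max(-1.0, min(1.0, dot / (nx * ny)))
    return math.degrees(math.acos(cos_t))
```

Identical frames give θ near 0, and orthogonal frames give θ = 90 degrees; as the text notes, θ is an abstract similarity angle, not directly the physical angles φL and φR.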

φL = φR = tan⁻¹( dS / (dM/2) )

In the limit as dS goes to infinity, both angles converge to 90 degrees because the distance between the microphones, dM, does not change.
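The formula and its limit can be checked numerically; the distances below (a 0.2 m microphone spacing) are arbitrary example values, not dimensions from this design.

```python
import math

def mic_angle_deg(d_s, d_m):
    """phi_L = phi_R = arctan( d_S / (d_M / 2) ), in degrees, for a
    source centred between two microphones (distances in metres).
    """
    return math.degrees(math.atan(d_s / (d_m / 2.0)))
```

With the source as far from the array as the half-spacing (d_S = d_M/2), both angles are 45 degrees; as d_S grows large relative to d_M, they approach 90 degrees, matching the limit stated above.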

However, when dS stays fixed but the source moves around the arc shown, the angles become unequal. The difference between the angles should be zero when the sound source is positioned as in Figure 10. If the sound source is directly to the left or directly to the right, the difference between the angles will have a magnitude of 180 degrees, with a sign depending on which angle is subtracted from the other.

This geometric diagram represents how the angles formed between the microphones and a sound source result from the distance from the source to each microphone. When the distances to each microphone are identical, there is no phase difference between the signals. Also, left and right positions along a circle traced about the center of the microphones should result in phase differences of equal magnitude and opposite sign. Both of these characteristics should be consistent with the angle θ from equation 1. The angle θ does not need to be equal to the difference of the angles in the physical picture, but there does need to be a consistent correspondence between the two.

If this correspondence exists, it allows us to measure values of θ corresponding to the phase difference ∆φ, which is the difference between φL and φR. A list can be created of values of θ that correspond to certain locations of a sound source. When a signal is read by the microphones, the phase difference between the signals in each microphone can be checked against the stored list to output the angle of incidence of the sound source.
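The stored list described above amounts to a nearest-neighbour lookup table. The sketch below shows the shape of that table; the calibration pairs are invented placeholder values, not measurements from this project.

```python
def build_lookup(calibration):
    """Store (theta, bearing) pairs measured during calibration.

    `calibration` is a list of (theta_deg, bearing_deg) tuples; sorting
    by theta keeps the table tidy for inspection.
    """
    return sorted(calibration)

def bearing_from_theta(table, theta):
    """Return the calibrated bearing whose stored theta is closest to
    the measured one (nearest-neighbour lookup over the table)."""
    return min(table, key=lambda pair: abs(pair[0] - theta))[1]
```

At run time, each measured θ is matched to the closest calibrated entry, and the corresponding bearing selects which vibration disc on the waistband to drive.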

D. Making Measurements

In order to make a table of values that can be compared to an incoming sound from the environment, it is necessary to make precise measurements of how the phase difference between similar signals changes as the sound changes its location.

This can be accomplished with a setup like that shown in figure 9. This setup allows us to determine the measured phase difference resulting from equation 1 and map a range of values to the relative location, which is only known during the measurement process. If consistent measurements are taken, this method determines the location of a sound in a two-dimensional plane by creating a one-to-one correspondence between the phase delay calculated with equation 1 and the difference in angles measured between the source and the two microphones.

E. Belt Buckle

The belt buckle will need to house most of the electronics. The microcontroller is one of the main components located inside the belt buckle. In order for it to control the various functions of the device, it will need to be connected to the Bluetooth module, microphones, headphones, and a battery supply. The belt buckle will be designed in a 3D CAD modeling program and then 3D printed.

Figure 11 shows the design of the buckle, made using SolidWorks. 3D printing the belt buckle will yield significant cost savings as well as allow for total customization of the device. Before building the belt buckle

Fig. 9. Geometry for Measuring Microphone Phase Delay

Fig. 10. Microphone Angles

Fig. 11. CAD Design for Buckle

it is important to know the exact components that will be going inside the device. We need to build the model around the parts, considering how we plan to mount and secure everything.

F. Micro-controller

One important aspect of the microcontroller is its ability to communicate with every part of the project. It will also need to be programmed and have its own software; this programming will be the largest and most time-consuming task of the project. The overall function of the microcontroller is to read inputs, process the information, and then send the appropriate signal to the output. Research on an appropriate microcontroller will be vital. The best microcontroller should do two things: process information as quickly as possible while remaining lightweight and compact. Price is also important, since microcontrollers vary widely in cost, and finding one at a reasonable price is a top priority. Even so, this should be one of the items we invest more money into, to ensure that a good product is at the heart of the device.
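The read–process–output cycle described above can be sketched as follows. The function names (`read_samples`, `estimate_direction`, `drive_motor`) are hypothetical placeholders for the real microphone, signal-processing, and motor-driver routines, not the project's actual API.

```python
# Minimal sketch of the microcontroller's read -> process -> output loop.
# All three stages are placeholder stand-ins for the real routines.

def read_samples():
    # Placeholder: would return (left, right) audio buffers from the mics.
    return [0.0] * 256, [0.0] * 256

def estimate_direction(left, right):
    # Placeholder: would compare the two channels and return an angle
    # in degrees (0 = directly behind, negative = left, positive = right).
    return 0

def drive_motor(angle):
    # Map the estimated angle to one of three motors on the waistband.
    if angle < -30:
        return "left"
    elif angle > 30:
        return "right"
    return "back"

def loop_once():
    left, right = read_samples()
    return drive_motor(estimate_direction(left, right))
```

On the actual device this loop would run continuously, with `drive_motor` sending a PWM signal rather than returning a label.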

G. Battery

Along with the microcontroller, another important aspect of the belt buckle is the battery. It will need to power all of the electronics, which include the processor, the motors for the vibrators, the Bluetooth module, and an amplifier for the microphones. The battery will need to be relatively small, rechargeable, and able to last throughout the day. In order to find a battery that provides all of those things, research is something we need to focus on. We need to search for one that is lightweight and rechargeable, with a high capacity such as 2,500 mAh. We will also need to do a power analysis of all the electronics, which will allow us to determine the appropriate battery size to last a day. The battery will also need to be fairly inexpensive, to leave money for more important aspects of the project.
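As an illustration of the kind of power analysis described, the following back-of-the-envelope budget divides an assumed 2,500 mAh capacity by the summed component currents. The individual draw figures are invented example values, not measurements of the actual components.

```python
# Illustrative power budget; current draws are made-up example values.
draws_mA = {
    "processor": 150,
    "vibration motors (avg)": 60,
    "Bluetooth module": 30,
    "mic amplifier": 10,
}

battery_mAh = 2500
total_mA = sum(draws_mA.values())   # 250 mA combined draw
runtime_h = battery_mAh / total_mA  # 10.0 hours of runtime

print(f"Total draw: {total_mA} mA, runtime: {runtime_h:.1f} h")
```

If the measured total draw were higher, the same arithmetic would show whether a 2,500 mAh cell still lasts a full day or a larger battery is needed.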

H. Connection Ports

To connect everything within our device, connection points will need to be established. We will need ways to plug the microphones into the belt buckle as well as to connect the microphones to the processor. These connections should be small in order for all the parts to fit inside

the belt buckle. The connectors should also be made of plastic, which will keep the belt buckle lightweight. Although connectors are cheap, ordering them as early as possible will help us develop the device faster. The belt buckle will be built around the connectors and must accommodate them.

I. Waistband

The waistband should be comfortable and lightweight, as it will be worn all day. Finding one that is lightweight and made of an elastic material would be ideal; this should allow users to feel comfortable and unconstricted while wearing it. The strap should be adjustable to accommodate any body type. The waistband not only needs to be comfortable, it should also be wide enough for the vibrating motor discs.

J. Vibrating Motor Disc

Since the whole concept of the device is to alert the user of incoming danger through vibrations, the vibrating motor disc is a vital feature. The vibrating motor discs will need to be small and efficient. Tests will be performed to find their exact power consumption so that there is enough battery for all aspects of the device. The level of vibration should also be taken into consideration: it needs to warn the user without being too alarming. The buzzers will need to be able to operate at variable speeds. This is important because the varying vibration speeds will indicate what kind of approaching danger there is. We need to research small, efficient vibrating motor discs that are cheap in cost. Since these devices are quite simple, the price will depend on exactly how small we want the vibrating motor discs to be.
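One possible way to realize variable vibration levels is to map an alert level onto a PWM duty cycle for the motor. The duty-cycle limits below are illustrative assumptions (a floor below which the motor barely spins, and a ceiling to keep the vibration from being too alarming), not measured values.

```python
# Sketch: map an alert level in [0.0, 1.0] to a PWM duty cycle (percent).
MIN_DUTY = 20  # assumed floor: below this the motor barely spins
MAX_DUTY = 90  # assumed ceiling: keeps the vibration from being alarming

def alert_to_duty(level):
    """Clamp level into [0, 1] and scale linearly between the duty limits."""
    level = max(0.0, min(1.0, level))
    return MIN_DUTY + level * (MAX_DUTY - MIN_DUTY)

print(alert_to_duty(0.5))  # 55.0
```

Distinct vibration patterns (e.g. pulsing vs. steady) could then be layered on top of this intensity mapping to signal different kinds of danger.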

K. User Interface

The user interface will be responsible for communication between the user and the microcontroller. This is an important feature of the Sonic-Vibe; however, work on it cannot begin until later, when we know exactly how the other features will be implemented. For example, we will need to determine the best microcontroller to use before we can pick its Bluetooth module and start interfacing with the smartphone. The user interface will allow the user to control the intensity of the vibrating motor discs as well as calibrate the microphones to their current environment. This will be accomplished through a smartphone application that communicates with the microcontroller via Bluetooth.

L. Smart Phone Application

The smartphone application will be built using PhoneGap. First, the smartphone will connect via its Bluetooth to a Bluetooth module on the microcontroller, and the user will be able to open an application on the phone. In this application, the user will be able to adjust the intensity of the vibrating motor discs, and there will be a button that calibrates the microphones to the current environment. Calibration will be done by sampling the current noise level to determine a threshold above which the vibrating motor discs will alert the user of sound.
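The calibration step might be sketched as follows: sample the ambient sound level and set the alert threshold a fixed margin above it. The dB units and the 6 dB default margin are assumptions for illustration, not values specified by the design.

```python
import statistics

def calibrate_threshold(samples, margin_db=6.0):
    """Set the alert threshold a fixed margin above the ambient level.

    samples: recent sound-level readings in dB (assumed units);
    margin_db: how far above ambient a sound must be to trigger an alert.
    """
    ambient = statistics.mean(samples)
    return ambient + margin_db

# Example: a quiet room around 40 dB -> alert on sounds above 46 dB.
print(calibrate_threshold([39, 41, 40, 40]))  # 46.0
```

Pressing the calibration button in the app would send this recomputed threshold to the microcontroller over Bluetooth.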

Next semester, we would like to add a GPS feature to the logs portion of the phone application. We would also like to add functionality for the user to delete incorrect logs, which will help improve the machine learning portion of the project. Additionally, we would like to add the ability to toggle between automatic and manual calibration of the sound threshold; after the first semester, only manual calibration will have been implemented.

M. Bluetooth

Bluetooth will be used for communication between the microcontroller and the phone application because of its many advantages. The main advantage of Bluetooth is that it is wireless. Since the microphones will already have a wired connection to the microcontroller, avoiding any additional wires will make the Sonic-Vibe more user friendly and convenient. Bluetooth is also inexpensive and comes standard with most smartphones. Other advantages include being simple to use with minimal compatibility issues. Bluetooth also has impressive data transfer speeds, and since the phone and the microcontroller will always be within a small distance of each other, we do not have to worry about Bluetooth performing poorly over long distances. One consideration is battery power: despite Bluetooth's low processing and power consumption, the battery can still be drained if it is left activated.

N. Artificial Neural Network

There are different approaches to classification. One approach is machine learning, which uses algorithms. According to Rob Schapire, machine learning is the study of computer algorithms that use previous data to identify new data and make decisions without human intervention [14]. Wikipedia recognizes 14 different types of classification for machine learning [16]. Of these 14 types, the project

Fig. 12. Two Inputs, One Layer ANN

will focus mainly on artificial neural networks using the perceptron, the Widrow-Hoff method, and backpropagation. These methods are heavily relied on in machine vision (computer vision); thus, the theory is that they should be applicable to machine hearing for the Sonic-Vibe.

The idea of a machine hearing sounds or noises and being able to identify what it hears is similar to machine vision, where the idea is that a computer can identify what it is seeing. Machine hearing is a relatively new field of study, as mentioned in an IEEE article written by Lyon [15]. There is not much information or guidance from the IEEE community on machine hearing, but the article suggests taking algorithms from machine vision and applying them to data gathered from listening-based sensors. The Sonic-Vibe will apply two such methods, Widrow-Hoff and backpropagation, to machine hearing, and the outcome should be interesting, unique, and original. Over the next 8 months, the outcome produced by this project will be sound waves that the machine can identify, instead of the original plan, which was based on activated sensors and compared data values provided by the microphone sensors.

This section provides a brief introduction to artificial neural networks (ANNs). The main goal of an ANN is to create an algorithm modeled on the human brain's computation [17]. Briefly, an ANN works by manipulating input data, which is then combined with new data points; this process repeats until the machine finally produces the desired output that the programmer has set. An example ANN is illustrated in Figure 12, from M. T. Hagan [19]. The figure shows a network with two inputs and one layer; this is the foundation of all the other ANN methods.
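A minimal version of the two-input, single-layer network of Figure 12 can be sketched with the classic perceptron learning rule. The training data below, feature vectors of (time delay, level difference) labeled 1 for "sound from the left", is invented for illustration, not recorded from the device.

```python
import numpy as np

# Toy two-input, single-neuron perceptron in the spirit of Figure 12.
# Invented data: (time delay, level difference) -> 1 if sound is from left.
X = np.array([[-0.8, -0.5], [-0.6, -0.9], [0.7, 0.6], [0.9, 0.4]])
y = np.array([1, 1, 0, 0])

w = np.zeros(2)
b = 0.0
for _ in range(20):                  # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = target - pred          # perceptron learning rule
        w += 0.1 * err * xi
        b += 0.1 * err

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [1, 1, 0, 0] -- matches y on this separable toy set
```

The perceptron rule converges on linearly separable data like this; the Widrow-Hoff and backpropagation methods named above generalize it to continuous errors and multiple layers.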

An ANN is beneficial for the Sonic-Vibe because the desired output is the direction of sound, and the inputs are nonlinear data that the ANN algorithm can compute with. The expected features will take at least 2 months to research and implement into the Sonic-Vibe. The cost of an ANN is relatively low unless books or hired consultants are used to write the algorithm.

After implementing the ANN in the Sonic-Vibe, the next step is to improve the decision-making algorithm, either by improving the ANN itself or by adding a second source of reference to help the machine make a decision. By doing this, the machine will be more accurate in pinpointing the direction of the sound. The second method of improving classification is features. There are two feature-based approaches, the Naive Bayesian Classifier (NBC) and Voting Feature Intervals (VFI) [18]. Both use the same concept of features, but VFI is more efficient than NBC. The conceptual idea is to record particular features for the original objects. When a new object is detected, the machine compares the features of the new object to those of the old objects and starts a voting process: if a feature of the new object matches one of an original object's features, that object receives one vote. This repeats until all of the new object's features have been compared to the original objects' features. Once the process is finished, the new object is classified as the object with the most votes. For the Sonic-Vibe, the objects are directions, each with a set of features used to help detect the direction of a new signal received from the sensors. This improvement is expected to take at least 3 months, at a cost similar to that of the ANN.
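The voting process described above can be sketched in a much-reduced form. The feature labels assigned to each direction below are invented for illustration and are far simpler than the numeric feature intervals real VFI uses.

```python
# Simplified sketch of the feature-voting idea (a much reduced VFI).
# Each stored direction has a set of discrete features; every matching
# feature of a new observation casts one vote for that direction.
stored = {
    "left":  {"delay:negative", "level:left-louder"},
    "right": {"delay:positive", "level:right-louder"},
    "back":  {"delay:zero", "level:equal"},
}

def classify(features):
    """Return the stored direction that collects the most feature votes."""
    votes = {d: len(stored[d] & set(features)) for d in stored}
    return max(votes, key=votes.get)

print(classify(["delay:negative", "level:left-louder"]))  # "left"
```

In the real system the features would be intervals over the measured time delay and level difference rather than exact string matches.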

VI. RISK ASSESSMENT AND MITIGATION

An essential part of every project is risk assessment. In addition to a problem statement, work breakdown structure, and timeline, risk assessment is crucial to ensuring successful project completion. Not only is there risk associated with projects in general, but each individual project task carries a unique set of risks. These risks can involve financial, technical, or timing issues, among other things. It is important to identify these risks at an early stage of the project so that they may be assessed in terms of likelihood and impact. Then, each risk can be categorized using the risk matrix and an appropriate mitigation plan can be chosen. Risk mitigation is often a choice between eliminating the cause of the risk, transferring the risk to another team member, or assuming the risk and continuing with the original plan. After determining the likelihood and impact, each team member will choose the best mitigation plan for the risks associated with the parts of the project they are responsible for.

A. Microphones

The microphones are the first stage of the device, so they must be evaluated in great detail so that the device works as intended. The risks associated with the design of the microphone system include environmental noise picked up by low-quality microphones. To mitigate the damage these risks could cause, one option is to buy expensive parts that will be more likely to work well. Another option is to have multiple microphones available for testing. The last option is to have another team member help.

B. Waistband

The waistband is low risk, but it will have a high impact on the overall project. The highest impact related to the waistband will be comfort. Our goal is to make the waistband as comfortable and lightweight as possible while still being fashionable. If necessary, we will have to make a trade-off between comfort and size at an affordable price.

The battery within the belt buckle will take up the most space and will need to be small. The mitigation for the battery is simply to spend more money on a better, more efficient battery; otherwise, the entire device will need to be recharged more often. The battery used will depend on the microcontroller and the buzzers.

The buzzers will need to be as small and efficient as possible. They will need to produce a noticeable buzz using little current. If we continue to have problems powering the buzzers, we will either look for more efficient buzzers or use a battery solely for the buzzers.

The design of the belt buckle shouldn't cause any large problems. If we have trouble creating something that can house everything, we will consult with a mechanical engineer to design a more compact buckle that utilizes all free space. The waistband itself will need to be made with comfort as the main priority, and will ideally be adjustable to fit people of all sizes and shapes. If we are unable to create a comfortable waistband, we will have to hire a tailor to sew together a strap that can house the buzzers. Also, if the waistband cannot be adjusted, we will have to make multiple sizes to accommodate different users. With the waistband having a high impact on the project, the need for alternatives is essential. These alternatives include everything from an extra battery to hiring external professionals.

C. Phone Application

The User Interface is an essential piece of our project. It enables the user to interact with the hardware of our device by customizing settings and viewing logs of the recorded data. Without a UI, our device would be practically useless to a user. We have chosen to create the User Interface in the form of a phone application. This application will consist of a GUI, outbound communication to the microcontroller to calibrate the microphones and control the buzzer intensity, and inbound communication from the microcontroller to log noises detected throughout the day. In implementing this, we will likely face risks involving technical and timing issues.

One technical risk involved in writing any code is human error, which can result in bugs. The likelihood of bugs occurring is low to medium, and they can carry a medium to high impact depending on their severity. The appropriate course of mitigation is avoidance through elimination of the consequence; extensive debugging and testing will be important in mitigating this risk.

Another technical risk involved in the phone application is the inability of the application to communicate with the microcontroller. The method of communication chosen is Bluetooth. Bluetooth is extremely compatible and easy to use, so the likelihood of the application being unable to communicate with the microcontroller is low. However, communication between the two is crucial to our project, so the risk has a high impact. One possible mitigation would be to avoid this issue by eliminating the cause and using wired communication instead. However, we have opted to assume the risk and continue with the plan.

The obvious risk associated with timing is the inability to complete the phone application in the allotted time. Since we have carefully planned our project schedule, the likelihood of this happening is very low. However, as mentioned before, the User Interface is crucial to the project, so not completing it would impact the project tremendously. One option for mitigation is to avoid this risk by sticking to the project schedule and completing the phone application on time. However, events may knock the responsible team member off the schedule; in that case, another option is to transfer other responsibilities to another team member to free up time to complete the phone app.

D. Artificial Neural Network

There are many different implementations of ANNs, such as feed-forward, feedback, unsupervised, supervised, and Hopfield networks. Finding the most effective ANN algorithm is one of the associated risks. Another risk is combining two separate algorithms into one: by integrating the ANN into the developed algorithm, the chance of conflict between the two codes and of slow computation time is high. The last projected risk is implementing the ANN into the project; the ANN must have at least two distinct features that help with classification of the desired outputs. The probability of these risks occurring is frequent; however, the ANN is an optional feature of the project, so if the ANN stops the progression of the project, it is acceptable to remove the feature for the final presentation. Thus, the ANN is considered a medium risk.

E. Microcontroller

The microcontroller is responsible for hosting the microphone connections, processing incoming sound with the algorithm discussed in the previous section, and sending PWM signals to the vibration motors. If the microcontroller is too slow, this will be noticeable in its inability to produce good results, or to produce them at a fast enough rate. This introduces a risk into the project, which can be mitigated by having multiple microcontroller options available. Three available options are the Arduino, the Raspberry Pi, and the Propeller chip. The first choice is the Raspberry Pi because it has built-in USB connections, which can host two microphones and a Bluetooth connection simultaneously. Also, programming can be done in Python, which is an easier language to use than C or C++. If the Raspberry Pi fails to perform all of the necessary tasks, the Arduino or Propeller chip must be ready to use. One drawback of the alternatives is that they do not have native USB ports, which means that audio must be processed by analog electronics before being sent to the microcontroller. Also, a dedicated Bluetooth module would be necessary for communication with a mobile phone.

F. User Discretion

This device is meant to assist the hearing impaired in identifying the direction of sound to avoid dangerous accidents in the surrounding environment. The project is not meant to identify the type of sound, and it will not physically move the user to avoid collisions with oncoming objects. The project is also not designed to aid the user in hearing or to improve the user's hearing capability. The user uses the device at their own risk.

VII. TASK ASSIGNMENTS

The work will be divided evenly among the group members, and everyone has been assigned a task. To divide the tasks, we took into account everyone's strengths and experience. The responsibilities of each team member are listed below.

A. Eric and the Microphones

With Eric's knowledge of audio circuits and love of processing music, we gave him the task of creating the headband. Creating the headband will require finding the right microphones and head strap. Eric will also be responsible for writing the code that reads the volume level of each microphone and processes that information. He will also need to figure out how to connect the headband to the waistband. We estimate his features will need around 115 hours to implement.

B. Jason and the Waistband

Jason has experience building CAD models and will be responsible for 3D printing the buckle. He will also be in charge of creating the waist strap that will include the vibrating mini motor discs. Within the strap, Jason will be in charge of connecting all of the vibrators to the processor. Finally, he is in charge of writing the code that powers the corresponding vibrator depending on which microphone receives the loudest signal. The estimated time to create the waistband and buckle and program everything is about 120 hours.

C. Yakira and the User Interface

Yakira brings a wide variety of vital skills to this project. Working at CamelBak gave her valuable experience in project management, teamwork, and sticking to project schedules. In addition to her exceptional written communication skills, Yakira also has excellent verbal communication skills from her experience as a tour guide at North Bay Brewery Tours. These attributes will be useful for the papers and presentations required throughout this project. Coursework in Computer Engineering and a software developer internship at POS Portal have shaped Yakira's knowledge of hardware with an emphasis on software and programming. These skills make her more than qualified to design and implement a phone application for this project, as well as to perform her duty as team leader for a portion of the semester. Yakira will be responsible for creating the GUI for multiple smartphones. She will also need to find a Bluetooth device that is compatible with the processor we decide to use. Finally, she is in charge of writing the code for the processor to connect to the phone using Bluetooth. The estimated time for Yakira's portion is around 115 hours.

D. Ou and the Artificial Neural Network

Ou has taken computer science, digital signal processing, and advanced mathematics courses; thus, Ou is responsible for the Artificial Neural Network portion of this project. Ou will research the different types of ANN and choose the best method for this project, determine what is needed for classification, and decide how to implement the ANN into the project algorithm. Ou is also designated for any future usage of the ANN for this project and is responsible for writing any code related to the ANN. The estimated time for Ou's portion is around 120 hours.

VIII. CONCLUSION

Although our project was designed and implemented successfully, a few adjustments will be made in the Spring semester. The most important change will be making the device wireless; this means the headband must be able to communicate with the waistband and the phone simultaneously. Additionally, it will be important to reduce the size of the device so that it is easily worn and comfortable. Since we do not need all of the features that the Raspberry Pi offers, we can use a smaller microcontroller, which will also enable us to use a smaller power supply. The phone app will be improved by adding real-time logging of the data it receives, including location, time, and the type of sound that was recognized. Another feature will allow the user to delete incorrect logs; this functionality will also improve the machine learning portion of the project.

REFERENCES

[1] American Hearing Research Foundation. (2012). "Noise Induced Hearing Loss" [Online]. Available: http://american-hearing.org/disorders/noise-induced-hearing-loss/intensity

[2] Healthwise. (2014). "Medicines that Cause Hearing Loss" [Online]. Available: http://www.webmd.com/a-to-zguides/medicines-that-cause-hearing-loss-topic-overview

[3] T. Chong. (2014). "Futuristic Firefighter Suit Has Sensors, Heads-up Display" [Online]. Available: http://spectrum.ieee.org/consumer-electronics/portable-devices/futuristic-firefighter-suit-has-sensors-headup-display

[4] R. F. Fahy, P. R. LeBlanc, and J. L. Molis. (2015). "Firefighter Fatalities in the United States" [Online]. Available: http://www.nfpa.org/research/reports-and-statistics/the-fire-service/fatalities-and-injuries/firefighter-fatalities-in-the-united-states

[5] G. Derugin. (2014). "Sony ECM-CS3 Stereo Lavalier Microphone" [Online]. Available: http://www.filmbrute.com/ecmcs3/

[6] The Center for Construction Research and Training. (2012). "Noise-Induced Hearing Loss in Construction and Other Industries" [Online]. Available: http://www.cpwr.com/sites/default/files/publications/CB

[7] Hearing Health Foundation. (2015). "Hearing Loss and Tinnitus Statistics" [Online]. Available: http://hearinghealthfoundation.org

[8] Institute of Physics. "Alarms and Sirens" [Online]. Available: http://www.physics.org/featuredetail.asp?id=75

[9] National Institute on Deafness and Other Communication Disorders. "Quick Statistics" [Online]. Available: http://www.nidcd.nih.gov/health/statistics/pages/quick.aspx

[10] R. Lichenstein, D. C. Smith, J. L. Ambrose, and L. A. Moody, "Headphone use and pedestrian injury and death in the United States: 2004-2011," Injury Prevention, vol. 18, no. 5, pp. 287-290, 2012.

[11] F. A. Saunders, "An Electrotactile Sound Detector for the Deaf," IEEE Transactions on Audio and Electroacoustics, vol. AU-21, no. 3, June 1973.

[12] R. I. Damper and M. D. Evans, "A Multifunction Domestic Alert System for the Deaf-Blind," IEEE Transactions on Rehabilitation Engineering, vol. 3, no. 4, December 1995.

[13] B. Uvacek and G. S. Moschytz, "Sound Alerting Aids for the Profoundly Deaf," Swiss Federal Institute of Technology Zurich, Institute for Signal and Information Processing, 1987.

[14] R. Schapire. (2008). "COS 511: Theoretical Machine Learning" [Online]. Available: http://www.cs.princeton.edu/courses/archive/spr08/cos511/scribe notes/0204.pdf

[15] R. F. Lyon. (2010). "Machine Hearing: An Emerging Field" [Online]. Available: http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36608.pdf

[16] Wikipedia. (2015). "Machine learning" [Online]. Available: https://en.wikipedia.org/wiki/Machine learning

[17] University of Toronto. (2015). "Artificial Neural Networks Technology" [Online]. Available: http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/neural2.html

[18] G. Demiröz and H. A. Güvenir. (2005). "Classification by Voting Feature Intervals" [Online]. Available: http://link.springer.com/chapter/10.1007/3-540-62858-4 74#page-1

[19] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesús, Neural Network Design, 2nd ed. Oklahoma State University: Martin Hagan, 2014.

[20] J. Mrovlje and D. Vrančič, "Distance measuring based on stereoscopic pictures," Izola, Slovenia, October 2008.

APPENDIX A
USER MANUAL

A. How to Wear

The device will strap around the waist and have a strap adjuster that can be tightened for a comfortable fit. The band is elastic and will not need to be overtightened.

Figure 13 shows the device on the user and the appropriate locations for the waistband and headband. The user will need to keep in mind the position of the buzzers and the buckle. The buckle will need to go in

Fig. 13. Front of Device

Fig. 14. Back of Device

Fig. 15. Homepage for the Phone Application

front, on the stomach of the user. The buzzers will need to be located on each side of the body, with one located on the back. Figure 14 shows the microphones and how the back side of the device looks and should be worn. The microphones need to face outward, opposite each other.

B. How to Use the Phone Application

First, download the Sonic-Vibe app onto your smartphone from the App Store or Google Play. Next, power on the Sonic-Vibe device. Make sure Bluetooth is turned on on your phone, then discover and pair your phone with the Sonic-Vibe. Once paired, open the phone app. There will be two buttons, 'Settings' and 'Logs', as shown in Figure 15.

1) Settings: Figure 16 shows the Settings window, in which you have the option to adjust the threshold for the microphones and the intensity of the buzzers. For the changes to take effect, you must press the 'Apply' button. The 'Cancel' button, as well as the 'Back' button, will take you back to the homepage.

2) Logs: Figure 17 shows the Logs window, in which you have the option of viewing today's logs or the

Fig. 16. Settings Page for the Phone Application

logs for the last 7 days. Clicking on either button leads you to a table of logs. The 'Back' button takes you back to the Logs page, and the 'Back' button on the Logs page takes you back to the homepage.

C. Tips

Remember to power off the Sonic-Vibe device when not in use to save battery. Also, remember to disable Bluetooth on your phone when not using the device, as leaving it on can quickly drain your phone's battery.

APPENDIX B
HARDWARE AND SOFTWARE DOCUMENTATION

The hardware portion of this project consists of the microphones, the microcontroller, and the buzzers. The phone application and the code for the microcontroller make up the software portion.

A. Determining the Direction of Sound

The two microphones on the headband are placed similarly to the way our ears are situated. This spatial difference between the two microphones causes a

Fig. 17. Logs Page for the Phone Application

measurable time difference between the signals captured by each microphone, and there will also be a slight difference in pressure levels. For example, if a sound is made to the left of the user, the sound will reach the left microphone first, and the pressure level will be slightly higher than that detected by the right microphone. If a sound is made directly behind the user, the sound reaches both microphones at the same time with the same pressure level.

In order to measure the time delay between two signals, we implemented cross-correlation via the Fast Fourier Transform (FFT). Cross-correlation in the time domain between two similar signals will produce a maximum amplitude at the time index where the signals best match each other. If one signal is delayed relative to the other, the time index reveals the amount of delay between the two signals, which we can use to determine where the sound came from.
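The idea can be sketched in a few lines of NumPy (Python 3 here, unlike the Python 2 program listed later in this appendix): take the inverse FFT of the conjugate product of the two spectra, and read the lag off the location of the peak.

```python
import numpy as np

def lag_via_fft(x, y):
    """Estimate the delay of y relative to x by FFT cross-correlation.

    A positive result means y is a delayed copy of x.
    """
    n = len(x)
    # ifft(conj(FFT(x)) * FFT(y)) is the circular cross-correlation
    z = np.abs(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))
    lag = int(np.argmax(z))
    if lag > n // 2:  # peaks in the second half of the buffer are negative delays
        lag -= n
    return lag

# A noise burst and a copy delayed by 25 samples
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
print(lag_via_fft(x, np.roll(x, 25)))  # 25
```

This is the same conj(D1)*D2 trick used by the xcorr function in the full listing; the mapping of large positive indices to negative lags corresponds to the Lag - len(data1) adjustment there.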

It is also useful to take into account the difference in the sound level between the two microphones to determine the direction of sound.

Fig. 18. Determining Time Delay Between Similar Sounds

In the Python program that performs the function of the block diagram shown in Figure 18, some conditional statements must be made in order to output the direction of a sound to the motors on the waistband. If we want to consider the difference in sound pressure as well as the time delay, the number of conditional statements doubles. By combining the two measurements (time delay and magnitude difference) into an artificial neural network (ANN), it is possible to reduce the number of conditional statements that must be checked before outputting a signal to the motors.

B. ANN Makes Everything Better

It is possible to determine the direction of sound from the time delay alone, but there will occasionally be errors. One way to reduce the error is to use the difference in pressure level between the microphones along with the time delay, and create a learning algorithm that uses these two features to better determine the direction of sound. The assumption for an ANN in an offline learning environment (one that is not constantly updated with new information) is that the differences are always consistent.


When the sound is coming from the left, the differences should have certain values that distinguish it from back or right. The new differences from the left, when used online, should not interfere with preexisting back or right data points.

Once the data are consistent, the correct weights and bias can be calculated using the Widrow-Hoff rule. The main difference between Widrow-Hoff and the Perceptron rule is that Widrow-Hoff uses a learning rate. The following algorithm calculates the weights and bias for offline Widrow-Hoff learning, where X holds the input difference pairs classified as left, right, or back, and Y holds the target outputs that encode the direction of the sound.

X = P;
Y = T;
[m1,m2] = size(X);
[n1,n2] = size(Y);
% Initialize W and b to zero
W = [0 0];
b = 0;

it = 10;        % number of iterations
i = 0; j = 0;
lr = 0.2;       % learning rate

for i = 1:it
    for j = 1:m2
        a = W*X(:,j) + b;
        e = Y(:,j) - a;
        W = W + lr*e*X(:,j)';
        b = b + lr*e;
    end
end

The next step is to use the calculated weights and bias to determine the direction of the sound from the incoming data. The incoming data should be the two differences between the microphones. Below is the equation used for the calculation, where α is the computed output and γ is the input vector of the calculated differences between the two microphones.

α = Weights ∗ γ + bias

If the computed output is in close proximity to a target output in Y, then the sound is coming from that target's direction. This allows the ANN to better determine the direction of the sound.
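To make the procedure concrete, the following Python 3/NumPy sketch trains a Widrow-Hoff unit and then classifies a new observation by the nearest target. The delay and level values here are made up for illustration; they are not Sonic-Vibe measurements.

```python
import numpy as np

# Made-up feature observations (columns): [time delay; level difference]
X = np.array([[ 8.0,  7.5, -8.0, -7.0,  0.5, -0.5],
              [ 6.0,  5.0, -6.0, -5.5,  0.0,  0.5]])
Y = np.array([[ 1.0,  1.0, -1.0, -1.0,  0.0,  0.0]])  # 1=left, -1=right, 0=back

W = np.zeros((1, 2))
b = 0.0
lr = 0.005  # the learning rate is what distinguishes Widrow-Hoff from Perceptron

for _ in range(500):                  # offline training passes
    for j in range(X.shape[1]):
        a = W @ X[:, j:j+1] + b       # current output
        e = Y[:, j:j+1] - a           # error against the target
        W = W + lr * e @ X[:, j:j+1].T
        b = b + lr * e.item()

TARGETS = {'left': 1.0, 'right': -1.0, 'back': 0.0}

def classify(delay, delta):
    """Pick the direction whose target output is nearest to alpha = W*gamma + b."""
    alpha = (W @ np.array([[delay], [delta]]) + b).item()
    return min(TARGETS, key=lambda d: abs(alpha - TARGETS[d]))

print(classify(7.0, 5.5))  # 'left'
```

Note the learning rate is chosen small enough for the update to remain stable for inputs of this magnitude; too large a rate makes the Widrow-Hoff iteration diverge.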

'''Calculate the time delay between two microphones.
Determine if sound is coming from Left, Right, or Middle.
Output direction to a specific motor (or LED) with GPIO.
'''

from numpy import fromstring, log10
from pylab import fft, ifft, conj  # for cross correlation
from operator import itemgetter
import time
import pyaudio

# Initialize buzzers
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.OUT)
GPIO.setup(18, GPIO.OUT)
GPIO.setup(16, GPIO.OUT)

# PWM frequency and intensity settings
Freq = 50
Intensity = 40
p1 = GPIO.PWM(7, Freq)   # RIGHT
p2 = GPIO.PWM(18, Freq)  # MIDDLE
p3 = GPIO.PWM(16, Freq)  # LEFT
# end of initialization of buzzers

p = pyaudio.PyAudio()
FS = 48000        # sample rate
CHUNK = 4800*5    # chunk size of data to read
threshold = 60.0

Lag = 0
Center = 0

# Initialize streams for the two mics with pyaudio
Mic1 = p.open(format=pyaudio.paInt16,
              channels=1,
              rate=FS,
              input=True,
              input_device_index=2,  # this depends on the computer
              frames_per_buffer=CHUNK)

Mic2 = p.open(format=pyaudio.paInt16,
              channels=1,
              rate=FS,
              input=True,
              input_device_index=1,
              frames_per_buffer=CHUNK)

'''
class RingBuffer:
    def __init__(self, size_max):
        self.max = size_max
        self.data = []
    def append(self, x):
        """append an element at the end of the buffer"""
        self.data.append(x)
        if len(self.data) == self.max:
            self.cur = 0
            self.__class__ = RingBufferFull
    def get(self):
        """return a list of elements from the oldest to the newest"""
        return self.data

class RingBufferFull:
    def __init__(self, n):
        raise "you should use RingBuffer"
    def append(self, x):
        self.data[self.cur] = x
        self.cur = (self.cur+1) % self.max
    def get(self):
        return self.data[self.cur:] + self.data[:self.cur]
'''
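Incidentally, the commented-out ring buffer above has a couple of problems (raising a bare string is an error in modern Python, and RingBufferFull never initializes its own data). If a ring buffer is needed, the standard library's collections.deque gives the same behavior in a few lines; a sketch:

```python
from collections import deque

class RingBuffer:
    """Fixed-size buffer that silently drops the oldest element when full."""
    def __init__(self, size_max):
        self.data = deque(maxlen=size_max)

    def append(self, x):
        self.data.append(x)

    def get(self):
        """Return the elements from oldest to newest."""
        return list(self.data)

rb = RingBuffer(3)
for v in [1, 2, 3, 4, 5]:
    rb.append(v)
print(rb.get())  # [3, 4, 5]
```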

# Cross correlation measures how similar two signals are.
# It can be calculated in the frequency domain, which should
# be faster. The maximum value of the inverse FFT of the
# cross-correlated signals should be located at the time
# index that corresponds to the time lag.

def xcorr(data1, data2):
    global Center
    timeL = timeR = timeM = 0.0  # so the turn-off comparisons below are defined

    D1 = fft(data1)
    D2 = fft(data2)
    M1 = 20*log10(max(data1))
    M2 = 20*log10(max(data2))
    delta = M1 - M2
    print "Mic 1:", M1, "dB"
    print "Mic 2:", M2, "dB"

    Z = abs(ifft(conj(D1)*D2))
    Lag, value = max(enumerate(Z), key=itemgetter(1))

    # Map lags in the second half of the buffer to negative delays
    if (abs(Lag - len(data1)) > abs(Lag)):
        pass
    else:
        Lag = Lag - len(data1)

    if ((M1 < threshold) & (M2 < threshold)):
        Center = Lag
        # print "Current center is", Center

    if ((M1 > threshold) | (M2 > threshold)):
        # if (M1 > 90):
        #     Intensity = 90
        # elif (M1 < 60):
        #     Intensity = 60
        # elif ((M1 <= 90) & (M1 >= 60)):
        #     Intensity = 5/3*M1 - 50

        if ((Lag > Center + 6) & (delta > 5)):
            p3.start(Intensity)  # turn Left buzzer on
            timeL = time.clock()
            # return "Left\n"
        elif ((Lag < Center - 6) & (delta < -5)):
            p1.start(Intensity)  # turn Right buzzer on
            timeR = time.clock()
            # return "Right\n"
        elif ((Center - 6 < Lag < Center + 6) & (-4 < delta < 4)):
            p2.start(Intensity)  # turn Middle buzzer on
            timeM = time.clock()
            # return "Middle\n"

    # Turn a buzzer off once it has been on for more than a second
    if ((time.clock() - timeL) > 1):
        p3.start(0)
    if ((time.clock() - timeR) > 1):
        p1.start(0)
    if ((time.clock() - timeM) > 1):
        p2.start(0)

    # return index  # this is what we are looking for

try:
    while True:
        data1 = fromstring(Mic1.read(CHUNK), 'int16')
        data2 = fromstring(Mic2.read(CHUNK), 'int16')
        # Calculate the cross correlation
        lag = xcorr(data1, data2)
        # print lag

except KeyboardInterrupt:
    Mic1.stop_stream()
    Mic1.close()
    Mic2.stop_stream()
    Mic2.close()
    p.terminate()
    p1.stop()
    p2.stop()
    p3.stop()
    GPIO.cleanup()

C. The Phone Application

The phone application was written using PhoneGap Desktop, which allows for quick deployment to both iPhones and Androids. It is written primarily in HTML5, JavaScript, and Cascading Style Sheets (CSS) for the GUI, and Java for the button and Bluetooth logic. Figure 19 shows the high-level block diagram of the Sonic-Vibe, which gives a light overview of how the phone application interacts with the rest of the project. The phone application sends output through Bluetooth to the microcontroller. This output is a number that represents calibration settings for the buzzer intensity, the microphone threshold, or both. When the microphones pick up sound, it gets sent to and handled by the microcontroller, and the microcontroller then sends data through Bluetooth to the phone app. This data includes a time, the type of sound, and the direction from which the sound came, and it is stored in the Logs section of the phone application.
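The exact Bluetooth message format for log entries is not specified in this document. As one illustrative possibility only (the field layout and function below are hypothetical, not the team's protocol), an entry carrying a time, sound type, and direction could be serialized as a comma-separated line before being written to the serial link:

```python
import time

def format_log_entry(sound_type, direction, t=None):
    """Serialize one log entry as 'HH:MM:SS,type,direction'.

    The field layout here is hypothetical; the actual Sonic-Vibe
    Bluetooth message format is not specified in this document.
    """
    t = time.localtime() if t is None else t
    return "%s,%s,%s" % (time.strftime("%H:%M:%S", t), sound_type, direction)

entry = format_log_entry("loud", "left", time.strptime("14:30:05", "%H:%M:%S"))
print(entry)  # 14:30:05,loud,left
```

A line-oriented text format like this is easy to split back apart on the phone side when populating the Logs table.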

Fig. 19. High-level Block Diagram of the Sonic-Vibe

Figure 20 shows a component diagram of the phone application. Again, the input from the microcontroller goes to the Logs part of the phone application via Bluetooth, and the user interacts with the phone application to retrieve the log data. In the other direction, the user interacts with the phone application to view and change the settings, which are then sent over Bluetooth to the microcontroller.

D. Test Plans

Placeholder for next semester

APPENDIX C
RESUMES


Fig. 20. Phone App Component Diagram