
  • Music Processing in Deaf Adults

    with Cochlear Implants

    by

    Mathieu R. Saindon

    A thesis submitted in conformity with the requirements for the degree of Master of Arts

    Graduate Department of Psychology University of Toronto

    © Copyright by Mathieu R. Saindon 2010


    Music Processing in Deaf Adults with Cochlear Implants

    Mathieu R. Saindon

    Master of Arts

    Graduate Department of Psychology University of Toronto

    2010

    Abstract

    Cochlear implants (CIs) provide coarse representations of pitch, which are adequate for speech

    but not for music. Despite increasing interest in music processing by CI users, the available

    information is fragmentary. The present experiment attempted to fill this void by conducting a

    comprehensive assessment of music processing in adult CI users. CI users (n = 6) and normally

    hearing (NH) controls (n = 12) were tested on several tasks involving melody and rhythm

    perception, recognition of familiar music, and recognition of emotion in speech and music. CI

    performance was substantially poorer than NH performance and at chance levels on pitch

    processing tasks. Performance was highly variable, however, with one individual achieving NH

    performance levels on some tasks, probably because of low-frequency residual hearing in his

    unimplanted ear. Future research with a larger sample of CI users can shed light on factors

    associated with good and poor music processing in this population.


    Acknowledgments

    This thesis would not have been possible without the constant efforts, guidance, and dedication of

    my supervisors Dr. Sandra Trehub and Dr. Glenn Schellenberg. I am very grateful for all of their

    help with this research project.

    I would also like to thank my parents and sister for their long-distance support, and my lovely

    wife Lauren for baking all of those muffins.

    Lastly, I would like to thank the health professionals and patients of the Sunnybrook Cochlear

    Implant Program. Without them, this project would not have been possible.


    Table of Contents

    Acknowledgments .......................................................................................................................... iii

    Table of Contents ........................................................................................................................... iv

    List of Tables ................................................................................................................................. vi

    List of Figures ............................................................................................................................... vii

    List of Appendices ....................................................................................................................... viii

    1 Introduction ................................................................................................................................ 1

    2 Method ....................................................................................................................................... 4

    2.1 Participants .......................................................................................................................... 4

    2.2 Apparatus ............................................................................................................................ 6

    2.3 Test Battery ......................................................................................................................... 6

    2.3.1 Metric task .............................................................................................................. 7

    2.3.2 Rhythm task ............................................................................................................ 7

    2.3.3 Distorted Tunes Test ............................................................................................... 8

    2.3.4 Musical emotion test ............................................................................................... 8

    2.3.5 Diagnostic Analysis of Nonverbal Accuracy 2 ....................................................... 9

    2.3.6 Open-set word recognition ...................................................................................... 9

    2.3.7 CAMP test ............................................................................................................... 9

    2.3.8 Familiar music task ............................................................................................... 10

    2.3.9 Pitch- and interval matching ................................................................................. 11

    2.4 Procedure .......................................................................................................................... 11

    3 Results and Discussion ............................................................................................................. 12

    3.1 Open-Set Word Recognition ............................................................................................. 12

    3.2 CAMP ............................................................................................................................... 12


    3.3 Distorted Tunes Test ......................................................................................................... 15

    3.4 Familiar Music Task ......................................................................................................... 17

    3.5 Metric Task & Modified MBEA Rhythm Task ................................................................ 20

    3.6 Music Emotion & DANVA2 ............................................................................................ 22

    3.7 Pitch- and Interval-Matching Task ................................................................................... 25

    3.8 Conclusion ........................................................................................................................ 27

    References ..................................................................................................................................... 29

    Appendix ....................................................................................................................................... 36


    List of Tables

    Table 1. Participant Characteristics……………………………………………………………….5

    Table 2. List of CVC Words………………………………………………………………………9

    Table 3. Music Emotion Arousal Scores……………………………………………………...…24


    List of Figures

    Figure 1. Syllable (CVC) Recognition…………………………………………………………...13

    Figure 2. CAMP Pitch Threshold (semitones)………………………………….....……………..13

    Figure 3. CAMP Melody Recognition (Percent Correct)….…………………………………….14

    Figure 4. CAMP Timbre Recognition (Percent Correct)..……………………………………….14

    Figure 5. Distorted Tunes Test…………………………….……………………………………..16

    Figure 6. Familiar Music – No-Rhythm Condition (Percent Correct)…………………………...17

    Figure 7. Familiar Music – Melody Condition (Percent Correct)………………………………..18

    Figure 8. Familiar Music – Instrumental Condition (Percent Correct)…………………………..19

    Figure 9. Metric Task…………………………………...………………………………………..21

    Figure 10. Modified MBEA Rhythm Task………………………………………………………22

    Figure 11. Music Emotion Task.…………………………………………………………………24

    Figure 12. DANVA 2: Adult Vocal Emotion Task…...…………………………………………25

    Figure 13. Average Deviations in Pitch (semitones)…….………………………………………26

    Figure 14. Deviations in Interval Matching (semitones)……………...…………………………26


    List of Appendices

    Appendix A. Music and Cochlear Implants Questionnaire…………………………………...…36

    Appendix B. Music Background Information Questionnaire (Adults)…………………………..42

    Appendix C. Music and Cochlear Implants Interview…………………………………………...44

    Appendix D. Semi-Structured Interview……………………………………………………..….45


    1 Introduction

    A cochlear implant (CI) is a prosthetic device designed to provide hearing sensations to

    deaf individuals. Unquestionably, it is the most successful neural prosthesis to date, as evidenced by

    the number of individuals who have received it worldwide and derived great benefit from it

    (Wilson, 2004). Its external microphone and signal processor receive incoming sound, transform

    it into an electrical signal, and extract features that are important for speech perception. This

    information is then transmitted to electrodes implanted in the cochlea, and, in turn, to the

    auditory nerve.

    Modern devices provide relatively coarse representations of spectral information, which

    are adequate for perceiving speech in ideal listening conditions (Shannon, Zeng, Kamath,

    Wygonski & Ekelid, 1995; Wilson, 2000), but they are inadequate for perceiving speech in noise

    (Fetterman & Domico, 2002; Firszt et al., 2004), identifying emotion from speech prosody

    (Hopyan-Misakyan, Gordon, Dennis & Papsin, 2009; Meister, Landwehr, Pyschny, Walger &

    Von Wedel, 2009), differentiating one speaker from another (Meister et al., 2009), identifying

    musical timbres or instruments (McDermott & Looi, 2004), and recognizing melodies from pitch

    cues alone (Kang et al., 2009; Kong, Cruz, Ackland-Jones & Zeng, 2004).

    Music perception is especially challenging for CI users. Coding strategies in implant

    processors extract the temporal envelope, discarding the temporal fine structure that is critical for

    music perception (Galvin, Fu & Shannon, 2009). Consequently, the music perceived by CI users

    is considerably degraded in sound quality and detail, especially as it pertains to pitch patterning.

    In fact, implant users often describe music as unpleasant, mechanical, and difficult to follow

    (Gfeller, Christ, Knutson, Woodworth, Witt & DeBus, 1998; Gfeller, Witt, Stordahl, Mehr &

    Woodworth, 2000; Lassaletta et al., 2007). It comes as no surprise, then, that postlingually

    deafened adult CI users, who had access to rich auditory representations of music before their

    hearing loss, are often disappointed with music heard via their implant (Gfeller, Christ, Knutson,

    Witt, Murray & Tyler, 2000; Lassaletta et al., 2007; Looi & She, 2010; Veekmans, Ressel,

    Mueller, Vischer & Brockmeier, 2009). This is unfortunate because music is an important source

    of pleasure for many, if not most, hearing individuals (Laukka, 2006). Even for postlingually


    deafened implant users, quality of music perception is associated positively with quality-of-life

    ratings (Lassaletta et al., 2007).

    As noted, limited temporal fine structure or spectral detail provides limited access to

    pitch patterning. Cooper, Tobey, and Loizou (2008) used a test battery designed for the diagnosis

    of amusia, or tone deafness, in individuals with normal audiological profiles. They found that CI

    users failed to discriminate two melodies that differed in pitch patterning even when the

    difference involved a change in pitch contour or key. In that sense, CI users performed much like

    amusic individuals who are typically deficient in the perception of pitch patterns but not

    temporal patterns (Foxton, Nandy & Griffiths, 2006). In other research, CI users have exhibited

    difficulty determining which of two sounds is higher in pitch (also referred to as pitch ranking –

    see Kang et al., 2009; Looi, McDermott, McKay & Hickson, 2004, 2008), detecting the direction

    (higher or lower) of a pitch change in a melody (Gfeller et al., 2007; Leal et al., 2003), and

    differentiating melodies in the absence of rhythmic cues (Kang et al., 2009; Kong et al., 2004).

    In the context of these pitch perception difficulties, it is not surprising to find deficient pitch

    production as well. For example, child CI users preserve the rhythms but not the pitch contours

    (i.e., patterns of rising and falling pitches) when they sing familiar songs (Nakata, Trehub,

    Mitani & Kanda, 2006; Xu et al., 2009). Although this pattern is mirrored, to some extent, in the

    song production of amusic individuals, some individuals with severe pitch perception deficits

    manage to produce accurate contours and intervals when singing familiar songs with words,

    which reveals an unexpected dissociation between perception and action (Dalla Bella, Giguère &

    Peretz, 2009).

    By contrast, tempo and rhythm perception in CI users are reportedly comparable to those of

    normally hearing (NH) listeners except when the stimuli or tasks are complex (Cooper et al.,

    2008; Gfeller, Woodworth, Robin, Witt & Knutson, 1997; Kong et al., 2004). Although we have

    learned much in recent years about the music perception skills of CI users, much remains to be

    learned. For example, the perceptual demands of differentiating simple rhythm or pitch patterns

    differ drastically from the demands of perceiving conventional music on the radio, on iPods, or

    in concert halls. Rhythm, pitch, and timbre are typically blended into a coherent whole.

    Discriminating two rhythms in isolation does not mean that a CI user would be able to hear a

    guitar solo when it is accompanied by a drum kit, bass, guitar, and vocals. He or she might also

    be unable to pick out the recurring cello melody in a Beethoven symphony. In short, there is little


    understanding of CI users’ ability to perceive music as they might hear it on a recording or at a

    concert.

    In addition to providing pleasure and contributing to quality of life, music perception

    skills underlie the perception of emotion in speech as well as music (Juslin & Laukka, 2003).

    Emotion in speech is conveyed primarily by musically relevant cues such as loudness, tempo or

    rate, rhythm, pitch height, pitch range, and pitch contours. For example, expressions of anger in

    speech and music typically involve rapid tempo and increased amplitude or loudness in contrast

    to expressions of sadness, which typically involve slow tempo, low pitch, and decreased

    loudness. Although word recognition is obviously crucial for successful verbal communication,

    it is difficult to discern a speaker’s true emotions and communicative intentions without access to

    paralinguistic and prosodic cues. To date, however, there has been little research on CI users’

    perception of emotion in speech and none on their perception of emotion in music.

    The goal of the present study was to provide a comprehensive assessment of the music

    perception skills of adult CI users who became deaf postlingually. Rhythm and pitch perception

    were assessed, as were the perception of emotion conveyed through speech and music and pitch

    production. Rhythm perception was assessed in the context of

    simple rhythmic patterns as well as melodies with accompaniment. Adding accompaniment to a

    simple rhythm test used previously with adult CI users (Cooper et al., 2008) made it possible to

    determine whether “normal” rhythm perception skills remained evident in ecologically valid

    musical contexts. Melody perception was assessed by means of tasks that required comparisons

    of the musical input with long-term representations of music. The perception of emotion in

    speech was assessed with a task that has been used with child CI users (Hopyan-Misakyan et al.,

    2009). Although child CI users were unsuccessful at differentiating vocal emotions, it is possible

    that adult CI users, by virtue of their previous access to acoustic information and their greater

    understanding of communicative conventions, might be more successful than children at this

    task. Finally, we tested open-set word recognition, using monosyllabic consonant-vowel-consonant

    words, as a check on CI users’ use of bottom-up cues in speech.

    Large individual differences are pervasive in CI outcomes. Factors influencing outcomes

    among postlingually deafened adults include duration of near-total deafness (i.e., little or no

    benefit from hearing aids) before implantation, with shorter durations predicting more favorable


    outcomes (Van Dijk, Van Olphen, Langereis, Mens, Brokx & Smoorenburg, 1999); cognitive

    abilities (Pisoni & Cleary, 2004); integrity of the auditory nerve and central auditory system

    (Hartman & Kral, 2004; Leake & Rebscher, 2004); and relevant experience or training (Fu,

    Nogaki, & Galvin, 2005; Galvin, Fu & Nogaki, 2007). Adults with residual hearing immediately

    prior to implantation perform better on subsequent recognition of speech and environmental

    sounds than those without usable residual hearing even though implantation destroys the residual

    hearing (Van Dijk et al., 1999). Moreover, CI users with music training in high school, college,

    or later exhibit better music perception (Gfeller et al., 2008). Based on these findings and our

    own specific goals, we designed a questionnaire that could potentially shed light on individual

    differences in performance. Information was solicited about education, history of hearing loss

    and implantation, implant characteristics, music listening and music-making habits, and music

    training. We expected CI users to perform poorly compared to NH listeners except on the test of

    simple rhythm discrimination. We also expected performance to be affected by duration of

    deafness before implantation, musical exposure and training, and residual hearing, if any, in the

    unimplanted ear. Finally, we expected CI users to perform better on musical materials that were

    highly familiar to them than on those that were less familiar or unfamiliar.

    2 Method

    2.1 Participants

    The target participants were adult CI users (n = 6) 46-76 years of age (M = 62.2, SD = 13.0; see

    Table 1) who were recruited from the Cochlear Implant Program of Sunnybrook Hospital in

    Toronto. All of them were postlingually deafened, they communicated solely by auditory-oral

    means, and they expressed some interest in music. Additionally, they all reported progressive

    hearing losses that were gradual, except for one participant. Although she experienced

    substantial hearing loss when she was very young, her bilateral hearing aids were very helpful

    until 6 years ago when she experienced a precipitous loss of most of her residual hearing. One

    participant used a hearing aid in his unimplanted ear to amplify his residual hearing selectively at

    500 and 250 Hz (90 and 70 dB thresholds, respectively). With respect to musical background,

    three CI users had taken music lessons in the past, but only two were still playing music.

  • Table 1. Participant Characteristics

    Participant | M/F | Age | Device(s) | Type of CI | Hearing loss onset (age) | Progressive loss | Hearing aid use (years) | Implant use (years) | Music lessons (years) | Current instrument | Weekly music listening (hours)

    CI-1 | F | 47 | 2 CIs | Advanced Bionics | 1 | yes (sudden) | 40 | 6 | 0 | No | 7–10

    CI-2 | M | 46 | CI + HA | Cochlear | 5 | yes (gradual) | 35 | 5 | 5 | Yes | 10 or more

    CI-3 | F | 67 | CI | Advanced Bionics | 57 | yes (sudden) | 5 | 5 | 7 | No | 4–7

    CI-4 | F | 74 | CI | Advanced Bionics | 35 | yes (gradual) | 30 | 4 | 23 | Yes | 1–4

    CI-5 | M | 76 | CI + HA | Med-El | 58 | yes (gradual) | 12 | 2 | 0 | No | 1–4

    CI-6 | F | 63 | 2 CIs | Cochlear | 10 | yes (gradual) | 35 | 17 | 0 | No | 1–4


    The control group consisted of normally hearing (NH) listeners (n = 12) 19-58 years of

    age (M = 29.0, SD = 13.8) with no personal or family history of hearing problems. A few

    participants in the control group had received music lessons as children, but only two had

    substantial musical training. One of these was a professional musician.

    2.2 Apparatus

    Testing was conducted in a double-wall sound-attenuating chamber (Industrial Acoustics Co.,

    Bronx, NY). A computer workstation and amplifier (Harman Kardon 3380, Stamford, CT)

    outside of the booth interfaced with a 17-in touch-screen monitor (Elo LCD TouchSystems,

    Berwyn, PA) and two wall-mounted loudspeakers (Electro-Medical Instrument Co., Mississauga,

    ON) inside the booth. The touch-screen monitor was used for presenting instructions for all tasks

    and for recording participants’ responses. The loudspeakers were mounted at the corners of the

    sound booth, each located at 45 degrees azimuth to the participant, and the touch-screen monitor

    was placed at the midpoint. Sound files were presented between 60 and 65 dB, according to the

    preferences of each participant. One CI user (CI-2) requested sound levels up to 75 dB. CI

    participants were free to alter the settings on their processor in the course of the test session.

    2.3 Test Battery

    Trials for the Metric Task (from Hébert & Cuddy, 2002), the Rhythmic subtest of the Montreal

    Battery for Evaluation of Amusia (MBEA; Peretz, Champod & Hyde, 2003), the Distorted Tunes

    Test (DTT; Drayna, Manichaikul, de Lange, Snieder & Spector, 2001), the Music Emotion Task

    (Vieillard, Peretz, Gosselin, Khalfa, Gagnon & Bouchard, 2007), the Diagnostic Analysis of

    Nonverbal Accuracy Scale 2 (DANVA2; Nowicki & Duke, 1994; Baum & Nowicki, 1998), and

    the individualized Familiar Music Task were presented via a customized program created with

    Affect 4.0 (Hermans, Clarysse, Baeyens & Spruyt, 2005; Spruyt, Clarysse, Vansteenwegen,

    Baeyens & Hermans, 2010). FLXLab 2.3 software (Haskell, 2009) was used to arrange the

    presentation of the Word Recognition, Pitch-Matching, and Interval-Matching tasks. The entire

    Clinical Assessment of Music Perception (CAMP) test, which was designed for cochlear implant

    users (Kang et al., 2009), was also administered.


    2.3.1 Metric task

    The rhythms comprising this task were the strong-meter rhythms from Hébert and Cuddy (2002).

    These rhythms were created with SoundEdit 16, version 2.0. A temporal interval was defined as

    the onset-to-onset time (inter-onset interval, IOI) between successive events, with all events consisting of the sound of a

    snare drum. The basic IOI was 200 ms, and IOIs varied in a 1:2:3:4 ratio, with IOIs of 200, 400,

    600, and 800 ms. Each standard rhythm consisted of a different permutation of nine IOIs (five

    IOIs of 200 ms, two of 400 ms, one of 600 ms, and one of 800 ms). All tones were of equal intensity

    (i.e., no amplitude accents) and duration (100 ms). To create strong metric patterns, longer IOIs

    occurred on the beat. There were 4 practice trials (2 same, 2 different) with visual feedback

    (correct, incorrect) provided on the monitor followed by 20 test trials (10 same, 10 different)

    presented in random order with no feedback. On each trial, participants received a standard and

    comparison drum pattern, and they were required to judge whether they were the same or

    different. On “same” trials, the standard and comparison patterns were identical. On “different”

    trials, one 400-ms IOI from the standard pattern was replaced by an 800-ms IOI. Participants

    responded by touching “same” or “different” on the touch-sensitive monitor. They also touched

    the monitor to proceed to the following trial, at their own pace.
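The construction of the standard and "different" comparison patterns can be sketched as follows (an illustrative reconstruction in Python, not the actual Hébert and Cuddy stimulus-generation code; the function names are invented for this example):

```python
import random

# Build a standard rhythm as a permutation of nine IOIs in a 1:2:3:4 ratio
# (five 200-ms, two 400-ms, one 600-ms, one 800-ms), then derive a
# "different" comparison by replacing one 400-ms IOI with an 800-ms IOI.

def make_standard(rng=random):
    iois = [200] * 5 + [400] * 2 + [600, 800]
    rng.shuffle(iois)
    return iois

def make_different(standard):
    comparison = list(standard)
    comparison[comparison.index(400)] = 800  # lengthen one 400-ms IOI
    return comparison

standard = make_standard(random.Random(1))
different = make_different(standard)
print(sum(standard), sum(different))  # total durations: 3200 3600 (ms)
```

With a different seed the permutation changes, but every standard pattern sums to 3,200 ms and every comparison to 3,600 ms, with the number of events unchanged.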

    2.3.2 Rhythm task

    The principal modification to the Rhythmic subtest of the MBEA (Peretz et al., 2003) was the

    addition of accompaniment, as described below. The test consisted of 31 trials without feedback

    preceded by training trials consisting of two examples with feedback. Participants listened to

    two tonal melodies and judged whether they were the same or different. Differences consisted of

    alterations in the duration of two adjacent tones, which changed the rhythmic grouping but not

    the meter or number of tones. Rhythmical patterns varied across melodies. The melodies spanned

    a total frequency range of 247 (B3) to 988 Hz (B5), with the smallest range being 247 to 311

    (E-flat 4) Hz, and the largest range 247 to 784 (G5) Hz. Melodies had 7 to 21 notes and were 3.8 to

    6.4 s in duration (M = 5.1 s), depending on the tempo (100, 120, 150, 180, and 200 bpm). Tone

    durations varied from 150 to 1800 ms depending on the rhythm and tempo of each melody.

    Synthesized piano versions of the melodies were used.


    For the present purposes, accompaniment consisting of sampled bass, guitar (strummed

    chords), and drum kit sounds created by means of Cakewalk Music Creator (Version 5.0.4.23;

    Roland, Hamamatsu, Japan) was added to all of the melodies. Amplitude was standardized for

    each instrumental track across all melodies. Participants were told that accompaniment had been

    added to increase the difficulty of the task. They were asked to base their judgments of similarity

    or difference entirely on the piano melody. Participants called for trials by touching the monitor

    and entered their responses (same or different) on the monitor.

    2.3.3 Distorted Tunes Test

    This test (Drayna et al., 2001) required participants to judge whether synthesized piano

    performances of 26 short melodies (12-26 notes) that are well-known in the U.K. and North

    America were correct (no pitch errors) or distorted (one or more pitch errors). Of the 26 tunes, 9

    were played correctly, and 17 were distorted by pitch changes (i.e., errors) in 2-9 notes, within

    one or two semitones of the correct note but maintaining the melodic contour (rise and fall) of

    the normal melody. The errors in the melodies resulted in out-of-key notes in all but one melody

    (stimulus no. 13). All melodies in the DTT were unaltered in rhythm. The majority of tunes (17

    out of 26) were played incorrectly, but there is no indication of performance differences on intact

    or distorted versions (Drayna et al., 2001).

    2.3.4 Musical emotion test

    This task, from Vieillard et al. (2007), required participants to identify the predominant emotion

    conveyed by short musical excerpts as happy, sad, peaceful, or scary. The excerpts, representing

    five of the most readily identified excerpts from each emotion category, as determined in a

    preliminary study (Hunter, Schellenberg, & Stalinski, submitted), were MIDI files set to piano

    timbre. The happy excerpts were in the major mode with a mean tempo of 137 beats per minute

    (bpm) and the melodic line in a medium-to-high pitch range. The sad excerpts were in the minor

    mode, with a mean tempo of 44 bpm, medium pitch range, and sustain pedal. The peaceful

    excerpts were in the major mode, with an intermediate tempo of 69 bpm, a medium pitch range,

    and also the sustain pedal. The scary excerpts had minor chords on the third and sixth degree, a

    mean tempo of 95 bpm, and a low-medium pitch range. Mean stimulus duration was 13.3 s for

    all emotional categories.


    2.3.5 Diagnostic Analysis of Nonverbal Accuracy 2

    The Adult Paralanguage subtest of the DANVA2 (Baum & Nowicki, 1998) assessed the ability

    to perceive emotion through non-verbal speech cues. In this test, a semantically neutral sentence

    (“I’m going out of the room now, but I’ll be back later”) was spoken with happy, sad, angry, or

    fearful intentions at two levels of emotional intensity by a male and female actor.

    2.3.6 Open-set word recognition

    As a check on basic speech perception skills, CI users and NH listeners were required to repeat

    20 isolated consonant-vowel-consonant (CVC) words (see Table 2) produced by a female

    speaker. This task, like others in the battery, was self-administered and self-paced. Each stimulus

    word was preceded by a visual warning signal on the monitor (+), and participants’ responses

    were recorded.

    Table 2. List of CVC words

    back beach chain cup doll

    fan food gum jar leg

    love map meat nut pen

    pig run sit sun talk

    2.3.7 CAMP test

    This music perception test (Kang et al., 2009) had subtests of pitch direction discrimination,

    melody recognition, and timbre recognition. The pitch subtest used an adaptive procedure (1-up

    1-down) to determine the threshold for pitch direction discrimination within the range of 1 to 12

    semitones. On each trial, listeners indicated whether the first or second of two tones was higher

    in pitch. The melody subtest assessed recognition of widely known melodies presented without

    rhythmic cues (i.e., all tones of equal duration). On each trial, listeners identified the melody


    from a set of 12 alternatives. In the timbre subtest, listeners heard a five-note sequence (the same

    one on all trials) and were required to identify the instrument from a set of eight alternatives.

    Stimuli for the pitch direction and melody subtests consisted of synthesized, complex tones with

    uniform spectral envelopes to preclude temporal envelope cues. Stimuli for the timbre subtest

    consisted of recordings of professional musicians playing real instruments. The pitch subtest was

    preceded by four practice trials, and the melody and timbre subtests were preceded by training

    sessions, in which participants were required to listen to each stimulus twice before beginning

    the test phase.
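The 1-up 1-down rule in the pitch subtest can be illustrated with a generic staircase sketch (the step size, stopping rule, and simulated listener below are assumptions for illustration only; the actual CAMP implementation details are not specified here):

```python
# Generic 1-up 1-down adaptive staircase over a 1-12 semitone range.

def staircase(answers_correctly, start=12, step=1, max_reversals=6):
    """Estimate a pitch-direction threshold in semitones (clamped to 1-12)."""
    interval = start
    direction = -step  # assume an initial downward run
    reversals = []
    while len(reversals) < max_reversals:
        # 1-up 1-down: shrink the interval after a correct response,
        # grow it after an error.
        new_direction = -step if answers_correctly(interval) else step
        if new_direction != direction:  # direction changed: a reversal
            reversals.append(interval)
            direction = new_direction
        interval = min(12, max(1, interval + new_direction))
    return sum(reversals) / len(reversals)

# Hypothetical listener who reliably hears differences of 3+ semitones:
threshold = staircase(lambda semitones: semitones >= 3)
print(threshold)  # 2.5 with this deterministic listener
```

Averaging the intervals at the reversal points is one common way to summarize a staircase; the actual CAMP scoring may differ.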

    2.3.8 Familiar music task

    Stimuli for this task were personalized for CI users and NH listeners based on music that was

    most familiar to them. Prior to their laboratory visit, participants provided a list of up to 10

    musical selections (title, album and recording artist) that they heard regularly. Five selections

    were included in the test, along with five unfamiliar selections from the same genre (as listed on

    iTunes) and with similar tempi. The familiar music task had three conditions: (1) no rhythm, (2)

    melody only, and (3) original instrumental. The original instrumental versions consisted of 10-s

    excerpts with salient melodic content from each selection, which were extracted with Audacity

    software (Version 1.3.11 Beta). If the musical selection did not have 10 s without vocal content,

    the vocals were removed with Vocal Remover Version 2 plugin from Audacity. Melodic content

    from all selections was transcribed to produce two monophonic WAV files per selection – a

    melody version and a no-rhythm version. These excerpts were produced with a synthesized flute

    timbre from Cakewalk Music Creator. In contrast to the melody version, which maintained the

    rhythm, the no-rhythm version was isochronous (i.e., all tones of equal duration). The original

    relative durations were preserved in the no-rhythm version by means of repeated tones at the

    pitches in question. On each trial, participants listened to the selection and identified it from a set

    of six alternatives, which consisted of the five familiar musical pieces and “none of the above.”

    The conditions were administered in fixed order from most to least difficult: (1) no rhythm; (2)

    melody; and (3) original instrumental.


    2.3.9 Pitch- and interval matching

    The stimuli for this task consisted of eight pitches 1-2 s in duration sung by a man and woman

    and eight ascending intervals sung by the same individuals in a legato (continuous or

    uninterrupted) manner. The male stimuli ranged from B3 (246.942 Hz) to B4 (493.883 Hz), and

    the female stimuli ranged from B4 to B5 (987.767 Hz). Each pitch and interval stimulus was

    presented twice in a predetermined order, with the pitch-matching task presented first.

    Participants were required to sing back what they heard, and their responses were recorded by

    means of FLXLab. The intervals always began on the first degree of the scale (B3 for male

    stimuli and B4 for female stimuli). Only pitches from the key of B major were used, which

    resulted in the following intervals: unison, octave, major 2nd, 3rd, 6th and 7th, and perfect 4th

    and 5th. Pitches and intervals of the imitations were calculated by means of Praat software

    (Version 5.1.43; Boersma & Weenink, 2010).
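Praat returns raw fundamental frequencies, so deviation scores require converting frequency ratios to semitones. A minimal sketch of that conversion (the function name is illustrative, not code from the thesis):

```python
import math

def semitone_deviation(f_produced: float, f_target: float) -> float:
    """Signed deviation in semitones between a sung frequency and its target.

    In equal temperament one semitone is a frequency ratio of 2**(1/12),
    so the deviation is 12 * log2(f_produced / f_target).
    """
    return 12 * math.log2(f_produced / f_target)

# Sanity check against the stimulus frequencies given above:
# B4 (493.883 Hz) lies one octave (12 semitones) above B3 (246.942 Hz).
print(round(semitone_deviation(493.883, 246.942), 2))  # → 12.0
```

Interval-matching error can then be scored as the difference between the produced and target interval sizes, each computed with the same formula.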

    2.4 Procedure

    Prior to their laboratory visit, implant users completed a questionnaire (see Appendix A) that

    included information about demographic background (e.g., history of hearing loss, implant

    experience, education, languages spoken), musical background (e.g., musical training, music

    listening habits before and after their hearing loss, music enjoyment) and familiar musical

    selections. NH participants completed a questionnaire about their musical background and

    subjective experience of music (see Appendices B and C) just before the test session.

    Test sessions with CI users began with a semi-structured interview designed to elicit

    information about their subjective experience of music (see Appendix D). All interviews were

    recorded with a Sony Net MD Walkman (MZ-N707 model) and a Sony electret condenser

    microphone (ECM-DS70P model). Once the interview was completed, participants were

    escorted to the sound-attenuating booth for administration of the test battery. The experimenter

    provided instructions before each component of the battery. These instructions were repeated on

    the touch-screen monitor prior to each task. Participants were also told that the sounds could be

    made louder or softer, according to their preference. Tasks were presented in fixed order.

Participants were told that the pitch- and interval-matching tasks, which were the last tasks in the

test battery, were strictly optional.


    3 Results and Discussion

    Due to the small sample size and large individual differences among CI users, we examined

    performance individually for each task, noting the CI users who performed within one SD of the

    mean for NH listeners, those who performed within two SDs, and so on. On the basis of previous

    research, CI users were expected to perform much better on tests of rhythm and meter and on

    other tasks based on timing cues than on those based on pitch cues (Cooper et al., 2008; Kang et

    al., 2009; Kong et al., 2004).

    3.1 Open-Set Word Recognition

    As one would expect, performance of the NH group was at ceiling (see Figure 1) such that there

was no variance in the data. Performance of the CI users was more variable, with scores ranging

from a high of 100% correct (CI-2) to a low of 20% correct (CI-4). Performance on these isolated monosyllabic

    words sheds light on how well each CI user was using bottom-up cues that are relevant to speech

    perception. It should be emphasized, however, that the CI participants were uniformly excellent

    at perceiving speech in quiet backgrounds when contextual cues were available, as confirmed in

    lengthy, individual interviews. Variability on open-set recognition tasks has been reported in a

number of other studies (Loizou, Poroy, & Dorman, 2000; Vandali, Whitford, Plant, & Clark,

    2000). CI-4, who had the poorest performance on this task, had the longest delay from the time

    her hearing aids became ineffectual (i.e., no usable residual hearing) until implant surgery. The

    top performer, CI-2, had a number of advantages, including professional knowledge of hearing

    and assistive technologies as well as residual low-frequency hearing (at 250 and 500 Hz) in his

    unimplanted ear, which was selectively amplified. Zhang, Dorman, and Spahr (2010) have

    documented the contribution of low-frequency acoustic hearing to the recognition of

    monosyllabic words.

    3.2 CAMP

Thresholds for pitch-direction identification are illustrated in Figure 2, whereas melody recognition and timbre

    recognition are illustrated in Figures 3 and 4, respectively. The mean threshold for pitch-

    direction identification for NH listeners was 1.3 semitones (SD = 0.8), whereas their average on

    the melody-recognition task was 88.0% (SD = 10.0%) and their average result on the timbre-

recognition task was 85.6% (SD = 16.0%). The means for the CI group were 4.6 semitones


    Figure 1. Mean score and standard deviation on the Speech Perception Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.

    Figure 2. Mean score and standard deviation on the CAMP Pitch Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.


    Figure 3. Mean score and standard deviation on the CAMP Melody Recognition Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

    Figure 4. Mean score and standard deviation on the CAMP Timbre Recognition Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.


    (SD = 2.7) for the pitch-ranking task, 25.7% correct for the melody task (SD = 25.5), and 45.8%

    for the timbre task (SD = 19.5). Results for both groups were very similar to those reported

    previously by the developers of the test (Kang et al., 2009). Because the CAMP tests examine

    the ability to perceive pitch and timbre cues, it is not surprising that most of the CI users did not

    do well on these tasks.

    Two CI users (CI-1 and CI-2) managed to perform particularly well on pitch-direction

    identification, falling within one SD of the mean of the NH group. The Melody task, which

    excluded all timing cues, proved to be more difficult. In fact, two CI users (CI-5 and CI-6) opted

    to discontinue the task because of its extreme difficulty. Moreover, no CI user was able to obtain

    a score within two SDs of the NH mean, although CI-2’s performance was substantially better

    than that of the other CI users. His score was 63.9%, whereas the average of the scores of the

    three other CI users was 13.0%. CI-2 also scored much higher than other CI users on the timbre

    identification task, obtaining a score of 83.3% correct, which was near the NH mean. None of the

    other CI users had a score within two SDs of the NH mean, although CI-3 came close. The

    amplified residual hearing of CI-2 undoubtedly accounts for his success and for his ability to

    play in a musical ensemble. The contribution of hearing aids in the unimplanted ear to music

    perception has been noted previously (Looi, McDermott, McKay, & Hickson, 2007; Turner,

    Reiss & Gantz, 2008).

    3.3 Distorted Tunes Test

    The DTT comprised 26 questions and two response options on each trial, such that chance

    performance was a score of 13. As in the original study that used the DTT (Drayna et al., 2001)

    and an additional study by the same research team (Jones et al., 2009), the scores of NH listeners

    were near ceiling (M = 24.7, SD = 1.3; see Figure 5). Because the DTT comprises traditional

North American folk melodies, our CI users, who were on average much older than our control

group (mean age of 62.7 vs. 29.0 years), would have been more familiar with these melodies before

    they became deaf. Nonetheless, CI users had extreme difficulty on this task. Their mean score

    was 11.8 correct (SD = 2.8), and scores for all of the CI users were more than two SDs below the

    NH mean and near chance levels. In fact, the highest score was 16 correct (CI-2). Because

    mistunings on the DTT (except for one) are created by using pitches outside the key of each

    melody, the findings indicate that CI users are unable to use tonality-related cues when


    perceiving music. This interpretation is consistent with previous performance of CI users on the

    Scale subtest of the MBEA, which was at chance levels (Cooper et al., 2008).

    It is notable that CI-2, despite his good pitch resolution and reasonable performance on

    other tasks, was unable to do this task, which involved comparing current melodies with long-

    term representations of those melodies or making judgments based on tonality. As noted, the

    pitch errors in this test are relatively small (one or two semitones). Considering the mean score

    of CI users on the CAMP pitch-ranking task (threshold of 4.6 semitones), it is not surprising that

    CI users were unable to perceive mistunings in the DTT melodies. The authors of the DTT

    created these errors to be salient by virtue of their violations of tonality. Thus, it is not surprising

    that these violations are not salient to CI users.

    Figure 5. Mean score and standard deviation on the Distorted Tunes Test for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.


    3.4 Familiar Music Task

    Scores for this task were the number of correct answers out of 10, converted into percent correct

    scores. It was possible to generate individualized materials for only five participants from the

NH group. Mean scores were 74.0% (SD = 15.2%) in the No-Rhythm condition (Figure 6),

    92.0% (SD = 4.5%) in the Melody (with timing cues) condition (Figure 7), and 94.0% (SD =

    5.5%) in the Instrumental condition (Figure 8), which featured all or most cues from the original

    recordings, except for the lyrics for selections involving songs. CI users’ scores were

    exceedingly low. Moreover, they were lowest in the No-Rhythm condition (M = 26.7%, SD =

    25.2%), slightly higher in the Melody condition (M = 35.0%, SD = 19.1%), and highest in the

    Instrumental condition (M = 70.0%, SD = 18.3%). Two CI users were excluded from

    consideration because they provided artists with whom they were familiar (e.g., Louis

    Armstrong, Frank Sinatra) but no specific musical selections. Of the four remaining CI users,

one discontinued the No-Rhythm condition because of its difficulty. CI-1 and CI-2 scored more

than two SDs below the NH mean for this condition. Although CI-4 managed to score within two SDs of

    Figure 6. Mean score and standard deviation on the No-Rhythm Condition of the Familiar Music Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.


    the NH mean, she did so only by responding “none of the above” for all of the trials. Obviously,

    she was unable to recognize any melodies without rhythmic cues. Although CI users fared better

    in the Melody condition than in the No-Rhythm condition, all four failed to score within two SDs

    of the mean for NH listeners. In the Instrumental condition, CI-2 obtained a score similar to the

    NH mean (90.0% versus 94.0%, respectively). Although CI-4 and CI-1 obtained higher scores in

    the Instrumental condition than in the other two conditions, they were still more than two SDs

    below the NH mean.

    Figure 7. Mean score and standard deviation on the Melody Condition of the Familiar Music Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.


    The Familiar Music Task was created specifically for this study. The expectation was that

    the use of highly familiar music would generate better performance than one would predict based

    on the available literature. In fact, CI children have shown some success in the recognition of

specific recordings that they hear regularly (Vongpaisal, Trehub, & Schellenberg, 2006, 2009)

    even though such children are generally unsuccessful at recognizing generic versions of

culturally familiar tunes (Olszewski, Gfeller, Froman, Stordahl, & Tomblin, 2005; Stordahl,

    2002). However, this was not the case for the current group of adult CI users. Of the six CI users

    in the present study, three (CI-1, CI-5 and CI-6) reported in their interview that the lyrics were

    the most salient part of their music listening experiences. However, lyrics were excluded from

    the test materials, even when recordings with vocals were selected as familiar music, because

    they provided obvious cues to the identity of the music.

    CI-4, who listened to classical music and attended concerts frequently, was unable to

    recognize the original recordings (same music and performers) that she heard regularly. CI-6 was

    also unable to recognize the four instrumental pieces that were among her 10 selections, which

    suggests that her at-home listening experiences are guided and enriched by knowledge of what

    Figure 8. Mean score and standard deviation on the Instrumental Condition of the Familiar Music Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.


    she is playing. CI-2, the “star” performer in the present study, indicated that he listens especially

    closely to the bass line in music. This follows, perhaps, from programming his hearing aid to

    capitalize on his residual low-frequency hearing. CI-2 is also a bass player who performs with an

    amateur blues/rock group. It is nevertheless impressive that this participant was as proficient as

    NH listeners at identifying the familiar instrumental excerpts.

    During her interview, participant CI-6 shed light on factors contributing to her musical

    preferences. She stated that, in order to enjoy music, it had to have meaning, such as a narrative.

    For example, she very much enjoyed the lyrics in a number of the selections she submitted.

    Although she also selected instrumental pieces, some of them were orchestral works with

    underlying narratives. For example, Symphony No. 11 by Dmitri Shostakovich, entitled “In The

    Year 1905,” depicts the Russian revolution. Another of her selections, the orchestral work

    “Finlandia” by Jean Sibelius, depicts the Finnish struggle to break free from the Russian empire.

    Because CI users do not have access to the acoustic details available to NH listeners, they may

    find other ways of enjoying music. The enjoyment of CI-6 was enriched by a narrative linked to

    the overall structure of the musical work rather than its melodies or harmonies. CI-6 described

    hearing the Cossacks charging on their horses in the work by Shostakovich, and the struggles and

    the triumph of the Finnish people in the Sibelius piece.

    Another factor that may have contributed to CI users’ difficulty at identifying the

    material was the 10-second duration of the excerpts, which posed no problem for NH listeners. It

is possible that CI users would be somewhat more successful with longer samples of the music.

    3.5 Metric Task & Modified MBEA Rhythm Task

    Because the Metric task comprised 20 questions and each trial had two response options,

    chance responding was a score of 10. The mean of the NH group was 17.1 (SD = 3.4; see Figure

    9), which is similar to the mean of a NH group tested on the same task in a previous study

    (Hopyan, Schellenberg & Dennis, 2009). CI-1 received a perfect score on this task. CI-2 and CI-

3 scored within one SD below the NH mean, and CI-4 and CI-5 scored within two SDs. CI-6 scored

more than two SDs below the NH mean and also below chance for this task. In short, the majority of CI users (5 of

    6) were within two SDs of the mean for NH listeners, which is in line with CI users’ previous

    success in discriminating simple rhythmic patterns (Gfeller et al., 1997; Kong et al., 2004).


    The modified subtest of the MBEA had 31 trials and two response options on each trial,

    such that chance performance was a score of 15.5. The NH group mean was 26.9 (SD = 3.6, see

Figure 10), which is virtually identical to the mean of a sample of individuals with normal music

perception skills reported by Peretz et al. (2003), who were tested on a similar task without accompaniment

    (M = 27.0, SD = 2.1). Similar performance across studies indicates that the additional

    instrumentation created for the purpose of this study did not impair the performance of NH

    listeners. By contrast, the average performance of CI users on our modified task was only 17.0

    (SD = 2.1), which is substantially lower than the mean obtained by CI users tested by Cooper et

    al. (2008; approximately 24 correct) on the original MBEA rhythm task. Although CI-2 was

    slightly less than two SDs below the NH mean, the other CI users were near or below chance

    levels, which confirms that the additional instrumentation impeded their ability to perceive the

    rhythm of the melodic line.

    Figure 9. Mean score and standard deviation on the Metric Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.


    Although CI users fared as well as NH listeners on the original version of this rhythm

    discrimination task, which involved monophonic piano melodies (Cooper et al., 2008), their

    rhythm discrimination was impaired when there were multiple streams of auditory information.

    In fact, almost all of the CI users performed near chance level. This finding suggests that CI

    users would have difficulty discerning the rhythms encountered in their everyday experience of

    music.

    3.6 Music Emotion & DANVA2

    The Music Emotion task comprised 20 questions and 4 response options on each trial, such that a

    score of five correct responses corresponded to chance responding. Once again, NH listeners

    were near ceiling on this task (M = 19.2, SD = 1.4; see Figure 11), which is slightly higher than

    the results reported by Hunter et al. (submitted) for adult listeners, who had an average of 16.7

    correct. All CI users were more than two SDs below the NH mean, with a mean of 12.7 (SD =

    3.6).

    Figure 10. Mean score and standard deviation on the Modified MBEA Rhythm Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.


    Because CI users are better able to perceive timing cues than pitch cues, we examined the

    possibility that CI users could interpret arousal, which is based largely on tempo cues, better than

    valence, which is based on mode (major/minor) and consonance/dissonance cues. Thus, we

    combined the response options based on arousal: happy or scary vs. sad or peaceful (see Table 3

for arousal scores). For three of the CI users (CI-2, CI-4, and CI-6), a majority of the errors (over

    50%) on this task involved confusions between stimuli that contrasted in valence but were

    similar in arousal. These findings suggest that tempo cues play a substantially greater role than

    mode cues in CI users’ perception of emotion in music. This interpretation is consistent with

    reports of adequate tempo perception in CI users (Kong et al., 2004). Tempo cues are also more

    important than mode cues for young children (Dalla Bella, Peretz, Rousseau, & Gosselin, 2001),

    not because of pitch resolution difficulties but because they have not yet learned Western

    musical conventions about mode.
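A sketch of the arousal-based rescoring described above (the label groupings follow the happy/scary vs. sad/peaceful split in the text; all names are illustrative):

```python
# Arousal groupings from the text: happy and scary are high-arousal;
# sad and peaceful are low-arousal.
HIGH_AROUSAL = {"happy", "scary"}
LOW_AROUSAL = {"sad", "peaceful"}

def arousal_match(target: str, response: str) -> bool:
    """True when the response falls in the same arousal category as the
    target, even if the two differ in valence."""
    if target in HIGH_AROUSAL:
        return response in HIGH_AROUSAL
    return response in LOW_AROUSAL

def modified_arousal_score(trials: list[tuple[str, str]]) -> int:
    """Number of (target, response) trials correct under the collapsed
    arousal criterion."""
    return sum(arousal_match(t, r) for t, r in trials)

# Responding "scary" to a "happy" excerpt is a valence-only error: wrong
# under the original scoring, correct under the modified arousal scoring.
```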

    The DANVA2 comprised 24 trials and four response options on each trial (happy, angry,

    sad, and fearful) such that a score of six correct corresponded to chance responding. The mean

    for the NH listeners was 19.3 (SD = 2.3; see Figure 12), which is similar to the mean reported by

    Nowicki (2006; M = 18.0, SD = 2.9). The average score for the CI users was only 10.8 (SD =

    3.3). Only CI-2 and CI-6 performed within two SDs of the NH mean, with the remaining CI

    users having lower scores and three performing at close to chance levels (CI-3, CI-4, CI-5).

Performance on the DANVA2 by child CI users in the study by Hopyan-Misakyan et al.

    (2009) was similar to adult CI users in the present study in that both groups were unsuccessful in

    differentiating the vocal emotions.

    The DANVA2, which has been used widely (Nowicki, 2006), is intended to be a

    challenging test, with average NH scores ranging from 14 to 18.5 out of 24. Among its

advantages is that it allows especially gifted individuals to achieve higher scores than the

    population mean. However, this test may not be the most appropriate means of assessing CI

    users’ access to emotion cues in speech. A test involving a greater range of emotional

    expressiveness would enable us to learn more about this skill in CI users.


Table 3. Music Emotion Arousal Scores

Participant   Test score (/20)   Modified arousal score (/20)   Valence errors
CI-1                17                     18                        33%
CI-2                13                     18                        71%
CI-3                15                     17                        40%
CI-4                14                     18                        67%
CI-5                 7                     12                        38%
CI-6                10                     16                        60%

    Figure 11. Mean score and standard deviation on the Music Emotion Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.


    3.7 Pitch- and Interval-Matching Task

    Only CI users were asked to complete the pitch- and interval-matching tasks, which were

    described as strictly optional. With the exception of one CI user, who was short of time, all

    agreed to complete the matching tasks. The overwhelming majority of NH individuals can match

    pitches within one semitone (Moore, Estis, Gordon-Hickey, & Watts, 2008). For CI users, the

    mean error in pitch matching (Figure 13) was 3.9 semitones (SD = 3.1). Only CI-2 performed

within the expected range of NH listeners, with a mean pitch error of 1.1 semitones. Errors on

interval matching (Figure 14) were comparable to those on pitch matching (M = 3.1 semitones,

SD = 2.0). Again, CI-2

    performed surprisingly well, with a mean error of 1.0 semitones on interval matching, which is in

    line with his low pitch-ranking threshold on the CAMP test (1.6 semitones).

    Figure 12. Mean score and standard deviation on the DANVA2 Adult Vocal Emotion Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.


    Figure 13. Individual average pitch deviations in semitones on the Pitch-Matching Task for cochlear implant (CI) users.

    Figure 14. Individual average pitch deviations in semitones on the Interval-Matching Task for cochlear implant (CI) users.


    3.8 Conclusion

    In sum, postlingually deafened adult CI users performed well below the level of the NH control

    group on most tasks in the present study. Their performance was especially poor on tasks that

    relied strongly on pitch cues, such as the DTT, isochronous melody tasks, familiar melody task,

    pitch ranking, and pitch matching. They had more success on the simple rhythm discrimination

    task but not on the more complex rhythm discrimination task. They also had poor results on the

    emotion discrimination tasks, which required the joint use of pitch and timing cues.

    As in most studies of CI users, there were large individual differences in performance.

    CI-2 performed considerably better than other CI users, especially on the pitch-ranking and

    pitch-matching tasks. Although his musical background may have played some role, it is likely

    that amplified residual hearing in his unimplanted ear made the most important contributions to

    his success on the tasks involving pitch. Along with musical training and residual hearing, CI-2

    had the further advantage of formal training in audiology and familiarity with hearing aid

    technology. As he put it, he programmed his own hearing aid to “act like a subwoofer,” which

    enables him to maximize his perception of music and speech. In his interview, CI-2 indicated

    that neither his implant nor his hearing aid alone provided a satisfactory representation of sound

    but together they provided a credible and highly enjoyable rendition of music. In short, the whole

    was a lot better than the sum of its parts. Plans for re-testing CI-2 with his implant alone will

    provide a clearer picture of the independent contributions of implant and hearing aid.

    CI-4 had extensive musical training (piano) and even considered a career as a musician

    when she was a young woman with normal hearing. Her progressive hearing loss over the years

    and a long period of very poor auditory reception with hearing aids seemed to erase any potential

    benefit from her training and knowledge of music. For CI-2, by contrast, gradual hearing loss

    began at about 5 years of age and his hearing aids functioned effectively for music listening until

    approximately five years before receiving his implant.

    Plans to enlarge the sample will make it possible to identify links between various

    background variables and performance on music processing tasks such as these. It would be of

    interest to determine whether limited training enhances music processing in CI users and their


    ability to derive pleasure from music. Such training may also have favorable consequences for

    other auditory but non-musical tasks. These questions can be addressed in future research.


    References

    Baum, K., & Nowicki, S., Jr. (1998). Perception of emotion: Measuring decoding accuracy of

    adult prosodic cues varying in intensity. Journal of Nonverbal Behavior, 22, 89 – 109.

    Boersma, P., & Weenink, D. (2010). Praat: Doing phonetics by computer [computer software].

    Retrieved August 17, 2010, from http://www.fon.hum.uva.nl/praat/download_win.html.

    University of Amsterdam, The Netherlands.

    Cooper, W.B., Tobey, E., & Loizou, P.C. (2008). Music perception by cochlear implant and

    normal hearing listeners as measured by the Montreal Battery for Evaluation of Amusia.

    Ear & Hearing, 29, 618 – 626.

    Dalla Bella, S., Giguère, J.-F., & Peretz, I. (2009). Singing in congenital amusia. Journal of the

    Acoustical Society of America, 126, 414 – 424.

    Dalla Bella, S., Peretz, I., Rousseau, L. & Gosselin, N. (2001). A developmental study of the

    affective value of tempo and mode in music. Cognition, 80, B1 – B10.

    Drayna, D., Manichaikul, A., de Lange, M., Snieder, H., & Spector, T. (2001). Genetic

    correlates of musical pitch recognition in humans. Science, 291, 1969 – 1972.

    Fetterman, B.L., & Domico, E.H. (2002). Speech recognition in background noise of cochlear

    implant patients. Otolaryngology – Head and Neck Surgery, 126, 257 – 263.

    Firszt, J.B., Holden, L.K., Skinner, M.W., Tobey, E.A., Peterson, A., Gaggl, W., et al. (2004).

    Recognition of speech presented at soft to loud levels by adult cochlear implant recipients

    of three cochlear implant systems. Ear & Hearing, 25, 375 – 387.

    Foxton, J.M., Nandy, R.K., & Griffiths, T.D. (2006). Rhythm deficits in ‘tone deafness.’ Brain

    and Cognition, 62, 24 – 29.

    Fu, Q.-J., Nogaki, G., & Galvin, J.J., III (2005). Auditory training with spectrally shifted speech:

Implications for cochlear implant patient auditory rehabilitation. Journal of the

Association for Research in Otolaryngology, 6, 180 – 189.


    Galvin, J.J., III, Fu, Q.-J., & Nogaki, G. (2007). Melodic contour identification by cochlear

    implant listeners. Ear & Hearing, 28, 302 – 319.

    Galvin, J.J., III, Fu, Q.-J., & Shannon, R.V. (2009). Melodic contour identification and music

    perception by cochlear implant users. Annals of the New York Academy of Sciences,

    1169, 518 – 533.

    Gfeller, K., Christ, A., Knutson, J.F., Witt, S., Murray, K.T., & Tyler, R.S. (2000). Musical

    backgrounds, listening habits and aesthetic enjoyment of adult cochlear implant

recipients. Journal of the American Academy of Audiology, 11, 390 – 406.

    Gfeller, K., Christ, A., Knutson, J.F., Woodworth, G., Witt, S. & DeBus, B. (1998). Timbral

    recognition and appraisal by adult cochlear implant users and normal-hearing adults.

    Journal of the American Academy of Audiology, 9, 1 – 19.

    Gfeller, K., Turner, C., Oleson, J., Zhang, X., Gantz, B., Froman, R., et al. (2007). Accuracy of

    cochlear implant recipients on pitch perception, melody recognition, and speech reception

    in noise. Ear & Hearing, 28, 412 – 423.

    Gfeller, K., Witt, S., Stordahl, J., Merh, M., & Woodworth, G. (2000). The effects of training on

    melody recognition and appraisal by adult cochlear implant recipients. Journal of the

    Academy of Rehabilitative Audiology, 33, 115 – 138.

Gfeller, K., Woodworth, G., Robin, D.A., Witt, S., & Knutson, J.F. (1997). Perception of rhythmic

    and sequential pitch patterns by normally hearing adults and adult cochlear implant

    users. Ear & Hearing, 18, 252 – 260.

    Hartmann, R., & Kral, A. (2004). Central Responses to Electrical Stimulation. In F.-G. Zeng,

    A.N. Popper, & R.R. Fay (Eds.), Cochlear Implants: Auditory Prostheses and Electric

    Hearing (pp. 213 – 285). New York: Springer.

    Haskell, T.R. (2009). FLXLab 2.3 [computer software and manual]. Retrieved February 28,

    2010, from http://xlab.sourceforge.net


    Hébert, S., & Cuddy, L.L. (2002). Detection of metric structure in auditory figural patterns.

    Perception & Psychophysics, 64, 909 – 918.

    Hermans, D., Clarysse, J., Baeyens, F., & Spruyt, A. (2005). Affect Version 4.0 [Computer

    software]. Retrieved February 4, 2010, from http://www.psy.kuleuven.ac.be/leerpsy/

    affect4. University of Leuven, Belgium.

    Hopyan, T., Schellenberg, E.G., & Dennis, M. (2009). Perception of strong-meter and weak-

    meter rhythms in children with spina bifida meningomyelocele. Journal of the

    International Neuropsychological Society, 15, 521 – 528.

    Hopyan-Misakyan, T. M., Gordon, K. A., Dennis, M., & Papsin, B.C. (2009). Recognition

    of affective speech prosody and facial affect in deaf children with unilateral right

    cochlear implants. Child Neuropsychology, 15, 136 – 146.

    Hunter, P.G., Schellenberg, E.G., & Stalinski, S.M. (submitted). Emotions expressed by music:

Age and gender differences in identification accuracy and preferences.

    Jones, J.L., Lucker, J., Zalewski, C., Brewer, C., & Drayna, D. (2009). Phonological processing

    in adults with deficits in musical pitch recognition. Journal of Communication Disorders,

    42, 226 – 234.

    Juslin, P.N., & Laukka, P. (2003). Communication of emotions in vocal expression and music

    performance: Different channels, same code? Psychological Bulletin, 129, 770 –814.

    Kang, R., Nimmons, G.L., Drennan, W., Lognion, J., Ruffin, C., Nie, K., et al. (2009).

    Development and validation of the University of Washington Clinical Assessment

    of Music Perception test. Ear & Hearing, 30, 411 – 418.

    Kong, Y.-Y., Cruz, R., Ackland-Jones, J., & Zeng, F.-G. (2004). Music perception with temporal

    cues in acoustic and electric hearing. Ear & Hearing, 25, 173 – 185.

    Lassaletta, L., Castro, A., Bastarrica, M., Pérez-Mora, R., Madero, R., De Sarria, J., et al.
    (2007). Does music perception have an impact on quality of life following cochlear
    implantation? Acta Oto-Laryngologica, 127, 682 – 686.

    Laukka, P. (2006). Uses of music and psychological well-being among the elderly. Journal of
    Happiness Studies, 8, 215 – 241.

    Leake, P.A., & Rebscher, S.J. (2004). Anatomical Considerations and Long-Term Effects of
    Electrical Stimulation. In F.-G. Zeng, A.N. Popper, & R.R. Fay (Eds.), Cochlear
    Implants: Auditory Prostheses and Electric Hearing (pp. 101 – 148). New York:
    Springer.

    Leal, M.C., Shin, Y.J., Laborde, M.-L., Calmels, M.-N., Verges, S., Lugardon, S., et al. (2003).
    Music Perception in Adult Cochlear Implant Recipients. Acta Oto-Laryngologica, 123,
    826 – 835.

    Loizou, P.C., Poroy, O., & Dorman, M. (2000). The effect of parametric variations of cochlear
    implant processors on speech understanding. Journal of the Acoustical Society of
    America, 108, 790 – 802.

    Looi, V., McDermott, H., McKay, C., & Hickson, L. (2004). Pitch discrimination and melody
    recognition by cochlear implant users. International Congress Series, 1273, 197 – 200.

    Looi, V., McDermott, H., McKay, C., & Hickson, L. (2007). Comparisons of quality ratings for
    music by cochlear implant and hearing aid users. Ear & Hearing, 28, 59S – 61S.

    Looi, V., McDermott, H., McKay, C., & Hickson, L. (2008). The effect of cochlear implantation
    on music perception by adults with usable pre-operative acoustic hearing. International
    Journal of Audiology, 47, 257 – 268.

    Looi, V., & She, J. (2010). Music perception of cochlear implant users: A questionnaire, and its
    implications for a music training program. International Journal of Audiology, 49,
    116 – 128.

    McDermott, H.J., & Looi, V. (2004). Perception of complex signals, including musical sounds,
    with cochlear implants. International Congress Series, 1273, 201 – 204.


    Meister, H., Landwehr, M., Pyschny, V., Walger, M., & Von Wedel, H. (2009). The perception

    of prosody and speaker gender in normal-hearing listeners and cochlear implant

    recipients. International Journal of Audiology, 48, 38 – 48.

    Moore, R.E., Estis, J., Gordon-Hickey, S., & Watts, C. (2008). Pitch discrimination and pitch

    matching abilities with vocal and nonvocal stimuli. Journal of Voice, 22, 399 – 407.

    Nakata, T., Trehub, S.E., Mitani, C., & Kanda, Y. (2006). Pitch and timing in the songs of deaf

    children with cochlear implants. Music Perception, 24, 147– 154.

    Nowicki, S., Jr. (2006). Manual for the receptive tests of the Diagnostic Analysis of Nonverbal

    Accuracy 2 (DANVA2). Retrieved on March 28, 2010 from

    http://psychology.emory.edu/clinical/interpersonal/DANVAmanual03.doc

    Nowicki, S., Jr., & Duke, M. P. (1994). Individual differences in the nonverbal communication

    of affect: The Diagnostic Analysis of Nonverbal Accuracy Scale. Journal of Nonverbal

    Behavior, 18, 9 – 35.

    Olszewski, C., Gfeller, K., Froman, R., Stordahl, J., & Tomblin, B. (2005). Familiar melody

    recognition by children and adults using cochlear implants and normal hearing children.

    Cochlear Implants International, 6, 123 – 140.

    Peretz, I., Champod, A.S., & Hyde, K. (2003). Varieties of Musical Disorders: The Montreal

    Battery of Evaluation of Amusia. Annals of the New York Academy of Sciences, 999, 58 – 75.

    Pisoni, D.B., & Cleary, M. (2004). Learning, Memory, and Cognitive Processes in Deaf Children

    Following Cochlear Implantation. In F.-G. Zeng, A.N. Popper, & R.R. Fay (Eds.),

    Cochlear Implants: Auditory Prostheses and Electric Hearing (pp. 334 – 376). New

    York: Springer.

    Shannon, R.V., Zeng, F.-G., Wygonski, J., Kamath, V., & Ekelid, M. (1995). Speech recognition

    with primarily temporal cues. Science, 270, 303 – 304.


    Spruyt, A., Clarysse, J., Vansteenwegen, D., Baeyens, F., & Hermans, D. (2010). Affect 4.0: A

    free software package for implementing psychological and psychophysiological

    experiments. Experimental Psychology, 57, 36 – 45.

    Stordahl, J. (2002). Song recognition and appraisal: A comparison of children who use cochlear

    implants and normally hearing children. Journal of Music Therapy, 39, 2 – 19.

    Turner, C.W., Reiss, L. & Gantz, B.J. (2008). Combined acoustic and electric hearing:

    Preserving residual acoustic hearing. Hearing Research, 242, 164 – 171.

    Van Dijk, J.E., Van Olphen, A.F., Langereis, M.C., Mens, L.H., Brokx, J.P., & Smoorenburg,

    G.F. (1999). Predictors of cochlear implant performance. Audiology, 38, 109 – 116.

    Vandali, A. E., Whitford, L. A., Plant, K. L., & Clark, G. M. (2000). Speech perception as a

    function of electrical stimulation rate: Using the Nucleus 24 cochlear implant system. Ear

    & Hearing, 21, 608 – 624.

    Veekmans, K., Ressel, L., Mueller, J., Vischer, M., & Brockmeier, S.J. (2009). Comparison of

    music perception in bilateral and unilateral cochlear implant users and normal-hearing

    subjects. Audiology and Neurotology, 14, 315 – 326.

    Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2007). Happy, sad,

    scary and peaceful musical excerpts for research on emotions. Cognition and Emotion,

    22, 720 – 752.

    Vongpaisal, T., Trehub, S.E., & Schellenberg, E.G. (2006). Song recognition by children and

    adolescents with cochlear implants. Journal of Speech, Language, and Hearing Research,

    49, 1091–1103.

    Vongpaisal, T., Trehub, S.E., & Schellenberg, E.G. (2009). Identification of TV tunes by

    children with cochlear implants. Music Perception, 27, 17–24.

    Wilson, B. S. (2000). Cochlear Implant Technology. In J.K. Niparko, K.I. Kirk, N.K. Mellon,

    A.M. Robbins, D.L. Tucci, & B.S. Wilson (Eds.), Cochlear implants: Principles &

    Practices (pp. 109 – 118). New York: Lippincott, Williams & Wilkins.

  • 35

    Wilson, B.S. (2004). Engineering Design of Cochlear Implants. In F.-G. Zeng, A.N. Popper, &

    R.R. Fay (Eds.), Cochlear Implants: Auditory Prostheses and Electric Hearing (pp. 14 –

    52). New York: Springer.

    Xu, L., Zhou, N., Chen, X., Li, Y., Schultz, H.M., Zhao, X., et al. (2009). Vocal singing by

    prelingually-deafened children with cochlear implants. Hearing Research, 255, 129 – 134.

    Zhang, T., Dorman, M.F., & Spahr, A.J. (2010). Information from the voice fundamental

    frequency (F0) region accounts for the majority of the benefit when acoustic stimulation

    is added to electric stimulation. Ear & Hearing, 31, 63 – 69.


    Appendices

    Appendix A

    MUSIC & COCHLEAR IMPLANTS QUESTIONNAIRE

    NAME:
    DATE:
    GENDER (M/F):
    AGE:
    E-MAIL:
    TEL:
    COUNTRY OF BIRTH:
    FIRST LANGUAGE:
    HIGHEST EDUCATION ACHIEVED:
    CAUSE OF HEARING LOSS:
    AGE AT DIAGNOSIS:
    DID YOU USE HEARING AIDS PRIOR TO IMPLANT SURGERY?
    IF SO, FOR HOW LONG? (YEARS):
    PROGRESSIVE LOSS? (YES/NO):
    TYPE OF COCHLEAR IMPLANT:
    PROCESSING STRATEGY:
    AGE AT SURGERY:
    ONE IMPLANT OR TWO:
    IF ONE, HEARING AID IN OPPOSITE EAR? (YES/NO):


    MUSIC & COCHLEAR IMPLANTS

    A. HOURS OF MUSIC LISTENING PER WEEK PRIOR TO HEARING LOSS:
    (1) 0 hours; (2) 1 – 4 hours; (3) 4 – 7 hours; (4) 7 – 10 hours; (5) 10 hours or more.
    ANSWER:

    B. HOURS OF MUSIC LISTENING PER WEEK AFTER HEARING LOSS BUT PRIOR TO IMPLANT SURGERY:
    (1) 0 hours; (2) 1 – 4 hours; (3) 4 – 7 hours; (4) 7 – 10 hours; (5) 10 hours or more.
    ANSWER:

    C. HOURS OF MUSIC LISTENING PER WEEK SINCE IMPLANT SURGERY:
    (1) 0 hours; (2) 1 – 4 hours; (3) 4 – 7 hours; (4) 7 – 10 hours; (5) 10 hours or more.
    ANSWER:

    D1. HAVE YOU EVER PLAYED A MUSICAL INSTRUMENT AND/OR SUNG REGULARLY? (YES/NO):

    D2. IF YES, WHICH ONE(S), AND FOR HOW MANY YEARS DID YOU PLAY AND/OR SING REGULARLY? (e.g., piano, 3 years; voice, 5 years)

    Instrument Years

    D3. IF YOU ARE NO LONGER PLAYING AN INSTRUMENT AND/OR SINGING, HOW LONG HAS IT BEEN SINCE YOU LAST PLAYED AN INSTRUMENT AND/OR SANG REGULARLY? (YEARS):


    E1. HAVE YOU EVER TAKEN MUSIC LESSONS? (YES/NO):

    E2. IF YES, FOR HOW MANY YEARS? (e.g., piano, 3 years; guitar, 5 years):

    Instrument Years

    E3. IF YOU ARE NO LONGER TAKING LESSONS, HOW LONG HAS IT BEEN SINCE YOUR LAST LESSON? (YEARS):

    F1. IF YOU PLAYED AN INSTRUMENT AND/OR SANG IN THE PAST, WHAT WAS THE AVERAGE NUMBER OF HOURS A WEEK THAT YOU PLAYED AND/OR SANG PRIOR TO HEARING LOSS?
    (1) 1 – 3 hours; (2) 3 – 6 hours; (3) 6 – 9 hours; (4) 9 – 12 hours; (5) 12 hours or more.
    ANSWER:

    F2. HOW LONG DID YOU MAINTAIN THIS INTENSITY OF PLAYING/SINGING? (YEARS):

    G1. IF YOU PLAYED AN INSTRUMENT AND/OR SANG IN THE PAST, ON AVERAGE, HOW MANY HOURS A WEEK DID YOU PLAY AND/OR SING AFTER HEARING LOSS BUT PRIOR TO IMPLANT SURGERY?
    (1) 1 – 3 hours; (2) 3 – 6 hours; (3) 6 – 9 hours; (4) 9 – 12 hours; (5) 12 hours or more.
    ANSWER:

    G2. HOW LONG DID YOU MAINTAIN THIS RATE OF PLAYING/SINGING? (YEARS):

    H1. IF YOU CURRENTLY PLAY AN INSTRUMENT AND/OR SING, HOW MANY HOURS A WEEK DO YOU PLAY OR SING?
    (1) 1 – 3 hours; (2) 3 – 6 hours; (3) 6 – 9 hours; (4) 9 – 12 hours; (5) 12 hours or more.
    ANSWER:

    H2. HOW LONG HAVE YOU MAINTAINED THIS RATE? (YEARS):


    I. DID YOU PARTICIPATE IN MUSIC ENSEMBLES (CHOIR, BAND, INFORMAL GROUP, ETC.) PRIOR TO YOUR HEARING LOSS? (YES/NO):

    J. DID YOU PARTICIPATE IN MUSIC ENSEMBLES (CHOIR, BAND, ETC.) AFTER HEARING LOSS BUT PRIOR TO IMPLANT SURGERY? (YES/NO):

    K. HAVE YOU PARTICIPATED IN MUSIC ENSEMBLES (CHOIR, BAND, ETC.) SINCE IMPLANT SURGERY? (YES/NO):

    L. TYPES OF MUSIC ENJOYED PRIOR TO HEARING LOSS: (e.g., classical, pop, country)

    M. TYPES OF MUSIC ENJOYED AFTER HEARING LOSS BUT PRIOR TO IMPLANT SURGERY: (e.g., classical, pop, country)

    N. TYPES OF MUSIC ENJOYED SINCE IMPLANT SURGERY: (e.g., classical, pop, country)


    FOR THE FOLLOWING QUESTIONS, PLEASE INDICATE HOW MUCH EACH STATEMENT APPLIES TO YOU.

    O. I ENJOYED LISTENING TO MUSIC PRIOR TO MY HEARING LOSS.
    1. Strongly disagree  2. Disagree  3. Neither agree nor disagree  4. Agree  5. Strongly agree
    ANSWER:

    P. I ENJOYED LISTENING TO MUSIC WITH MY HEARING AID(S).
    1. Strongly disagree  2. Disagree  3. Neither agree nor disagree  4. Agree  5. Strongly agree
    ANSWER:

    Q. I ENJOY LISTENING TO MUSIC WITH MY COCHLEAR IMPLANT(S).
    1. Strongly disagree  2. Disagree  3. Neither agree nor disagree  4. Agree  5. Strongly agree
    ANSWER:

    R. THE SOUND OF MUSIC AS HEARD WITH MY COCHLEAR IMPLANT(S) IS PLEASANT.
    1. Strongly disagree  2. Disagree  3. Neither agree nor disagree  4. Agree  5. Strongly agree
    ANSWER:


    MUSIC & COCHLEAR IMPLANTS

    List of Familiar Musical Excerpts

    • Please indicate in the table below a list of 10 musical selections that you listen to regularly. If possible, select only one track from each album and/or each artist.

    • If you listen to fewer than 10 musical tracks from different artists or different albums, provide whatever information you can in the table below.

    TRACK NAME | ARTIST/PERFORMER | ALBUM | HOW LONG HAVE YOU BEEN FAMILIAR WITH THIS SELECTION?

    EXAMPLE: Clair de lune (from Suite bergamasque) | Claude Debussy / François-Joel Thiollier | Debussy: Piano Works Vol. 1 | 3 years

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.


    Appendix B

    MUSIC BACKGROUND INFORMATION QUESTIONNAIRE (ADULTS)

    Name:
    Gender: F M (Circle one)
    Student #:
    Date of Birth:
    Age:
    Phone #:
    Country of Birth:
    First language:

    Musical Background

    Have you ever taken private music lessons (instrumental or voice)? Yes No (Circle one)

    Instrument | Years of lessons | Years since last lessons | Years playing regularly | Years since last played regularly

    Have you ever taken music in elementary or high school? Yes No (Circle one)

    Instrument | Years of lessons | Years since last lessons | Years playing regularly | Years since last played regularly

    Have you ever studied music theory? If so, how extensively? (i.e., levels achieved or courses taken)

    How much ear training have you had?

    Would you describe yourself as being "tone deaf"? Yes No (Circle one)

    Do you have absolute pitch (perfect pitch)? Yes No (Circle one)


    Did you grow up listening to music primarily from Canada, the United States, or England?

    Yes No (Circle one)

    If not, which country's music did you grow up listening to?

    Please describe your musical activities (singing, playing, dancing, listening, etc.)

    Regardless of whether you've ever taken music lessons, please rate how "musical" you think you are in relation to the average person.

    extremely unmusical          average          extremely musical
          1      2      3      4      5      6      7

    Please describe your music listening:

    Type of Music | Hours per week listening


    Appendix C

    Music and Cochlear Implants Interview

    Please answer these questions with a few sentences or a short paragraph. You can answer in prose or in point-form. You are not obligated to fill the entire space, but you can do so if you wish.

    1. How would you describe the role of music in your life (i.e., is it important, not important, why, etc.)?

    2. Please describe to us the way you enjoy music on a day-to-day basis (i.e., when and where you listen to and/or play music, how it makes you feel, etc.).

    3. What do you generally like (or not like) about music? Please specify whether you are referring to playing or listening to music. Feel free to refer to both of these activities in your answer.


    Appendix D

    SEMI-STRUCTURED INTERVIEW: OPEN-ENDED QUESTIONS (PROMPTS)

    1. Perhaps you can tell me about the role of music in your life before your hearing loss.

    2. Can you describe your experience with music during the period when you used hearing aids?

    3. How has your experience of music changed since getting your cochlear implant?

    4. Are you still able to enjoy music?

    5. How would you describe the sound of music as heard through your implant? How would you compare it to what you think is heard by someone with normal hearing?

    6. Are you still able to enjoy the specific musical pieces that you liked before your hearing loss?

    7. Have your musical preferences changed since receiving your implant?

    8. When you listen to music, which part do you primarily listen to? Was this different before receiving your implant?