
Brain processing of consonance/dissonance in musicians and controls: a hemispheric asymmetry revisited

Alice Mado Proverbio, Andrea Orlandi and Francesca Pisanu
Milan-Mi Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, piazza dell'Ateneo Nuovo 1, U6 Building, Milan, Italy

Keywords: auditory, emotions, event-related potentials, music, perception

Edited by John Foxe

Received 11 January 2016, revised 28 June 2016, accepted 1 July 2016

Abstract

It was investigated to what extent musical expertise influences the auditory processing of harmonicity by recording event-related potentials. Thirty-four participants (18 musicians and 16 controls) were asked to listen to hundreds of chords differing in their degree of consonance, their complexity (from two to six composing sounds) and their range (distance between component pitches, from quartertones to more than 18 semitone steps). The task consisted of detecting rare targets. An early auditory N1 was observed that was modulated by chord dissonance in both groups. According to the swLORETA reconstruction performed, this response was generated in the right medial temporal gyrus (MTG) for consonant chords but in the left MTG for dissonant chords. An anterior negativity (N2) was enhanced only in musicians in response to chords featuring quartertones, thus suggesting a greater pitch sensitivity for simultaneous pure tones in the skilled brain. The P300 was affected by the frequency range only in musicians, who also showed a greater sensitivity to sound complexity. At the N2 level (250–350 ms), a strong specialization of the left temporal cortex for processing quartertones was observed in musicians, whereas this activity was right-sided in controls. Additionally, in controls, widespread activity of the right limbic area was associated with listening to close frequencies causing disturbing beats, possibly suggesting a negative aesthetic appreciation of these stimuli. Overall, the data show a finer and more tuned neural representation of pitch intervals in musicians, linked to a marked specialization of their left temporal cortex (BA21/38).

Introduction

Previous studies showed greater brain responses to consonant than dissonant tones regardless of a listener's musical knowledge (Bidelman & Krishnan, 2009; Bidelman & Heinz, 2011). Here, the extent to which musical expertise influences the auditory processing of pure tones, with reference to their frequency ratios, was investigated.

It is well known that the musician's brain can detect subtle differences in the pitch and temporal structure of acoustic information more accurately than that of naive controls (Burns & Houtsma, 1999; Trainor et al., 1999; Drake et al., 2000; van Zuijen et al., 2005). These abilities depend on the functional specialization of the primary and secondary auditory cortex (besides motor and other areas; e.g. Schlaug et al., 1995; Gaser & Schlaug, 2003) due to prolonged and intense musical training. In this regard, Aydin et al. (2005) performed quantitative proton MR spectroscopy of the left planum temporale in 10 musicians and 10 age- and sex-matched control subjects who had no musical training. The difference in N-acetylaspartate (NAA) concentrations between the musicians and the non-musician controls was statistically significant and correlated with the total duration of musical training and activity. Those findings suggest that professional musical activity can cause significant changes in neurometabolite concentrations that might reflect the physiological mechanism of use-dependent adaptation in the brains of musicians, especially in the areas devoted to auditory processing. Structural changes in the auditory cortex of musicians were observed by Schlaug et al. (1995) via magnetic resonance scans in musicians and non-musicians. The imaging data revealed that the planum temporale was more lateralized to the left side in musicians than in non-musicians. Similarly, Pantev et al. (1998) conducted a magnetoencephalography (MEG) study revealing that the neuronal populations in the left auditory cortex that processed piano tones were approximately 25% larger in musicians than in control subjects who had never played an instrument. This enlargement was correlated with the age at which the musicians began practicing, and it did not differ between musicians with absolute or relative pitch.

With regard to the processing of simultaneous tones (i.e. chords), which reflects only spectral (frequency-based and harmonic) information and not temporal or rhythmic cues, Kuriki et al. (2006) found an enhancement of neural activity in the N1 MEG response of auditory evoked field potentials in long-term trained musicians, thus reflecting neuroplastic modification of auditory cortex representations. Virtala et al. (2014) presented minor chords and inverted major chords in the context of major chords to musician and non-musician participants in a passive listening task and an active discrimination task; musicians outperformed non-musicians in the discrimination task.

Correspondence: Alice Mado Proverbio, as above.

E-mail: [email protected]

© 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd

European Journal of Neuroscience, pp. 1–17, 2016 doi:10.1111/ejn.13330


Change-related mismatch negativity (MMN) was evoked by minor and inverted major chords in musicians only, while both groups showed a decreased N1 in response to minor compared to major chords. Likewise, Kung et al. (2014) found much larger frontal anterior negativities (and smaller P2s) in response to tritone (disharmonic) intervals than to perfect fifth (consonant) intervals in musicians only, and not in controls. In that study, which involved an active discrimination task, musicians averaged 95% accuracy in the consonant (perfect fifth) discrimination and 94% accuracy in the dissonant (tritone) discrimination, whereas non-musicians showed lower accuracies when judging the consonance of perfect fifths (49%) and the dissonance of tritones (53%). These data suggest direct associations between the amplitude of the anterior negativity, dissonance and accuracy in explicit discrimination. Indeed, the data reported in the literature indicate a much greater ability to distinguish dissonant from consonant chords in musicians than in controls (Schön & Besson, 2005; Minati et al., 2009).

Several event-related potential (ERP) studies have demonstrated a modulation in the amplitude of the auditory N2 component as a function of chord consonance as opposed to dissonance (Regnault et al., 2001; Itoh et al., 2003; Schön et al., 2005; Kung et al., 2014). For example, Minati et al. (2009) recorded ERPs to four-note chords and found a much larger N2 to dissonant than to consonant chords. The authors interpreted the N2 behaviour as indexing stimulus categorization processes and rule violation detection. Indeed, the auditory N2, typically evoked within the 180–325 ms time window, is thought to result from a deviation in the form or context of a prevailing stimulus (Patel & Azzam, 2005). More recently, Bailes et al. (2015) compared the neural processing of two-note chords comprising twelve-tone equal-tempered (12-TET) intervals (consonant and dissonant) or microtonal (quartertone) intervals in musicians and naïve listeners. They found that, for musicians, the P2-N2 complex in the 204–300 ms window showed less positivity (i.e. larger N2s) for microtonal intervals than for the 12-TET intervals, while dissonant 12-TET intervals were not discriminated from consonant ones. Although the electrophysiological literature is somewhat conflicting, as a whole it shows an increase in negativity at the N2 level for less consonant chords, and a reverse pattern of results for the P3 response.

In general, perceptual dissonance in the auditory modality has been ascribed to the fact that dissonant chords contain frequency components that are too closely spaced to be resolved by the cochlea (Plomp & Levelt, 1965; Hutchinson & Knopoff, 1978). Two harmonics close in frequency (e.g. a minor second interval) shift in and out of phase over time, producing an interaction that oscillates, so that the amplitude of the combined physical waveform alternately waxes and wanes (Cousineau et al., 2012). These amplitude modulations are called 'beats' and result in an unpleasant sensation defined as 'roughness'. Furthermore, if the partials are close enough to excite the same set of auditory fibres, the amplitude modulations are directly observable in the response of the auditory nerve: the neural signal is noisy, and the information delivered is not sufficiently clear for frequency recognition. Indeed, each neuron is sensitive to a frequency bandwidth, and although the neuron responds with its maximum spike rate to a preferred frequency, it still responds to similar frequencies within the preferred bandwidth. Pitches too close in frequency therefore stimulate the same neural fibres within the cochlea and along the neural auditory processing pathway.
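As a rough illustration of the beating mechanism described above, the following sketch (a minimal NumPy example; the 440 Hz reference and quartertone spacing are illustrative choices, not the study's stimuli) synthesizes two pure tones 50 cents apart: their sum behaves as a carrier at the mean frequency whose envelope waxes and wanes at the difference frequency.

```python
import numpy as np

fs = 44100                        # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)     # 1 s of signal

f1 = 440.0                        # reference pure tone (A4)
f2 = f1 * 2 ** (50 / 1200)        # a quartertone (50 cents) higher, ~452.9 Hz

# sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2): the sum is a tone at the
# mean frequency, amplitude-modulated at the beat rate |f2 - f1|.
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

beat_rate = abs(f2 - f1)          # ~12.9 Hz amplitude fluctuation ('roughness')
print(f"beat rate: {beat_rate:.1f} Hz")
```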

A second acoustic property also differentiates consonance and dissonance. The component frequencies of the notes of consonant chords (which share upper harmonics) combine to produce an aggregate spectrum that is typically harmonic, resembling the spectrum of a single sound that is recognized as a unitary object by the auditory cortex. This leads to the positive sensation linked to listening to harmonic vs. disharmonic chords. Finally, the importance of harmonics in tone perception is supported by auditory neurobiology (Bowling & Purves, 2015). Neurophysiological recordings in monkeys show that some neurons in the primary auditory cortex are driven by tones with fundamentals at the frequency to which an auditory neuron is most sensitive, as well as by integer multiples and ratios of the same frequency (Kadia & Wang, 2003). Indeed, in regions bordering the primary auditory cortex, some neurons respond to both isolated fundamental frequencies and their associated harmonic series (Bendor & Wang, 2005).

Along with the discussion on the nature of auditory consonance and dissonance, many researchers have addressed the issue of whether the neural processing of the harmonic properties of sound depends on musical experience (either musical training or simply cultural exposure to music). Similarly, researchers have investigated whether the typical human preference for consonant sounds depends on the specific neurobiological hardware devoted to sound processing. According to Bowling & Purves (2015), sensitivity to harmonic stimuli is an organizational principle of the auditory cortex in primates, in which the connections of at least some auditory neurons are determined by the harmonics of the frequency they best respond to, which is the frequency spectrum of their vocalizations. Interestingly, converging studies carried out in human infants (Zentner & Kagan, 1998; Perani et al., 2010; Virtala et al., 2013) as well as in primates (Izumi, 2000; Fishman et al., 2001; Sugimoto et al., 2010), rodents and birds (Hulse et al., 1995; Crespo-Bojorque & Toro, 2014) show the existence of a genetically inherited preference for consonant chords in most species and in all human cultures (Butler & Daston, 1968).

As for humans, Bidelman & Krishnan (2011) provided direct evidence that harmonic preference exists at subcortical stages of audition (acoustic nerve and brainstem). In this study, brainstem frequency-following responses (FFRs) were measured in response to four prototypical musical triads. Pitch salience computed from the FFRs correctly predicted the ordering of triadic harmony stipulated by music theory, from the most consonant to the most dissonant (i.e. major > minor ≫ diminished > augmented) (Johnson-Laird et al., 2012).

This study aimed at investigating the time course and the neurophysiological bases of the sensation of perceptual consonance and dissonance, particularly in the skilled and the naive brain. For this purpose, 200 different pure tone chords (half consonant and half dissonant according to harmony rules) were synthesized. Half of the stimuli were composed of two to three tones, whereas the remaining half were composed of up to six tones. To test the role of frequency proximity (and beats) in the perception of dissonance, the proximity of the composing frequencies was manipulated so that, in half of the stimuli, the maximal distance between two pitches was a minor third, that is, three semitones (the minimum being a quartertone), whereas in the other half the maximal distance was one octave plus a tritone, that is, 18 semitone steps. We assumed that, due to beats, tones closer than three semitone steps would stimulate overlapping neural fibres, thus creating a confused neural signal and a sensation of perceptual roughness. It was also expected that the ability to discriminate frequency ratios would be more sophisticated in professional musicians because of their lower just noticeable difference (JND) threshold for pitch discrimination, leading to a finer and more tuned discriminative response to sound numerosity, harmonicity and frequency closeness. In addition, we also expected the difference between neural responses to consonant and dissonant sounds to be larger in musicians than in controls.

© 2016 Federation of European Neuroscience Societies and John Wiley & Sons LtdEuropean Journal of Neuroscience, 1–17

2 A. M. Proverbio et al.

Page 3: Brain processing of consonance/dissonance in musicians and ... · The imaging data revealed that the planum temporalis was more lateralized to the left side in ... (frequency-based

The minimal frequency distance used in this study (50 cents, equal to a quartertone) was largely above the human discrimination threshold, which was true for controls as well. Indeed, for frequencies below 500 Hz, the JND ranges from approximately 3 Hz (6 cents) for pure tones to 1 Hz (2 cents) for complex sounds, and it is about 5 Hz (10 cents) for pure tones above 1000 Hz (Kollmeier et al., 2008). However, it should be pointed out that the above thresholds refer to the discrimination of sequential pure tones, whereas they might be higher for simultaneous pure tones because of their tendency to fuse perceptually (Moore & Gockel, 2011). This was empirically demonstrated in the validation test described later in the text.
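For concreteness, an interval of c cents corresponds to a frequency ratio of 2^(c/1200), so the quartertone spacing can be expressed in Hz for any reference frequency. A minimal helper (illustrative only, not from the paper; the 440 Hz reference is an arbitrary example):

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio corresponding to a musical interval given in cents."""
    return 2 ** (cents / 1200)

# Frequency offset of a quartertone (50 cents) above A4 = 440 Hz:
f_ref = 440.0
offset_hz = f_ref * (cents_to_ratio(50) - 1)
print(f"{offset_hz:.1f} Hz")  # ~12.9 Hz, an order of magnitude above the JNDs cited above
```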

On the basis of the available literature, it was hypothesized that anterior N2 responses to dissonant vs. consonant chords would be enhanced, in interaction with musical expertise, and that the musicians' brain would show a finer sensitivity to chord harmonicity (as reflected by significant amplitude differences between consonant and dissonant chords, as a function of their proximity and complexity, at both the perceptual (N1, N2) and cognitive (P3) levels).

Material and methods

Participants

Thirty-four right-handed young adults (18 musicians and 16 non-musicians) participated in this study. All participants had academic degrees of the first or second level. The musicians included nine women and nine men aged between 19 and 28 years (M = 23.17), with an average of approximately 13 years of musical study (see Table 1 for additional information about the musicians' musical studies). The control group consisted of eight women and eight men aged between 18 and 27 years (M = 22.25). Controls had no musical education and no specific interest in music.

All participants had normal hearing and normal (or corrected-to-normal) vision, according to their self-reports. They also reported no history of neurological illness or drug abuse. Handedness was assessed with the Italian version of the Edinburgh Handedness Inventory, a laterality preference questionnaire, which indicated strong right-handedness and right ocular dominance in all participants. Data from all participants were included in the analyses. Experiments were conducted with the understanding and written consent of each participant according to the Declaration of Helsinki (BMJ 1991, 302:1194), with approval from the Ethical Committee of the University of Milano-Bicocca and in compliance with APA ethical standards for the treatment of human volunteers (1992, American Psychological Association). The university students obtained credits (CFU) for their participation.

Stimuli

Stimuli consisted of 200 chords made of pure (sinusoidal) tones: 100 dissonant and 100 consonant. For each stimulus class, half of the chords (50 consonant and 50 dissonant) were made of two to three pure tones (named 'chords with few sounds' or, briefly, 'few'), and the remaining chords (50 consonant and 50 dissonant) were made of five to six sounds (named 'chords with many sounds' or, briefly, 'many'). Within the dissonant chord class, two additional stimulus categories were used: chords made of tones near in frequency (the distance between the lowest and the highest notes being three semitone steps), named 'near', and chords made of tones far in frequency (the distance between the lowest and the highest pitch being at least 18 semitone steps), named 'far' (in the case of only two composing tones, the smallest interval was a minor ninth, i.e. 13 semitones).

These categories were created to test: (i) the role of the co-stimulation of overlapping or adjacent fibres in producing a dissonant negative sensation (assessed by contrasting 'near' vs. 'far' dissonant chords), as near chords included components differing by quartertones (50 cents), and (ii) the effect of musical expertise on frequency tuning capability, by investigating a possible interaction between group and tone proximity. The few-many dimension was manipulated to observe the effect of auditory complexity and sound intensity/energy.

All auditory stimuli were created using Logic Pro 2013 software for Apple. Sounds were pure (single sine wave) with a MIDI timbre (with no harmonics). Stimuli were balanced across classes for types of intervals and tonality. All minor and major tonalities were used (about twice each) to create chords, equally across the various stimulus categories. Dissonant chords (not reflecting a specific tonality) were balanced for the lowest tone of the chord, which acted as the bass from the 'harmonic point of view'. All tones of the chromatic scale were used (approximately twice each) to act as the bass of the dissonant chords, equally across the various stimulus categories (some examples of stimuli are provided in Fig. 1). The stimulus categories allowed the following contrasts:

1. 100 consonant vs. 100 dissonant chords (regardless of complexity)

2. 100 few vs. 100 many chords (regardless of dissonance)

3. 50 dissonant near vs. 50 dissonant far chords (regardless of complexity)
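A minimal sketch of how such pure-tone chords could be synthesized is given below (in NumPy; the actual stimuli were built in Logic Pro, and the bass note and interval choices are illustrative only, not the study's stimulus set): each chord is a sum of equal-amplitude sinusoids at given semitone distances above the bass.

```python
import numpy as np

FS = 44100  # sampling rate (Hz)

def pure_tone_chord(f_bass: float, steps: list[float], dur: float = 3.0) -> np.ndarray:
    """Sum of equal-amplitude sinusoids at `steps` semitones above the bass tone."""
    t = np.arange(0, dur, 1 / FS)
    freqs = [f_bass * 2 ** (s / 12) for s in (0.0, *steps)]
    # Individual tones share the same amplitude; the louder overall level of
    # many-tone chords is deliberately left in place, as in the study.
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Illustrative examples (not the actual stimulus set):
consonant = pure_tone_chord(261.6, [4, 7])      # major triad on C4 (4 and 7 semitones)
near_diss = pure_tone_chord(261.6, [0.5, 1.5])  # quartertone steps, span < 3 semitones
far_diss = pure_tone_chord(261.6, [13, 18])     # smallest interval a minor ninth
```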

Sound intensity was equalized across low and high pitches so that all sinusoids had the same (physical) intensity. The individual notes of a chord were balanced for intensity, while the difference in sound volume between chords with few and many notes was preserved. The overall playback level was 42.49 dB (min = 24.7 dB; max = 59.7 dB), according to a PCE-999 sound level meter (resolution = 0.1 dB). All sounds lasted 3 s and reached their maximum intensity in the first 100 ms. Sound intensity decreased gradually to silence over the last second (see Fig. 2, top row). Fifteen different three-note arpeggios created with a MIDI harp timbre were used as targets for the experimental session. All arpeggios were ascending, in a major tonality, and lasted 1 s. Their intensity was matched with that of the non-target stimuli.

Table 1. Age, sex, years of musical studies (YMS) and conservatory degrees of the musicians

Age YMS Academic degree Instrument Sex

28  20  2nd degree laureate  Violin          F
24  16  Diploma V.O.         Clarinet        F
28  12  Diploma V.O.         Viola           F
19  12  1st degree diploma   Cello           F
24   9  1st degree diploma   Cello           M
23  10  Diploma V.O.         Traverse flute  F
22  11  1st degree diploma   Percussion      F
19  14  1st degree diploma   Violin          F
21  11  1st degree diploma   Clarinet        M
22  11  Diploma V.O.         Clarinet        M
20  12  1st degree diploma   Violin          F
25  17  2nd degree laureate  Saxophone       F
24  18  2nd degree laureate  Violin          M
23  10  Diploma V.O.         Clarinet        M
27  10  1st degree diploma   Piano           M
20  10  1st degree diploma   Horn            M
24  12  1st degree diploma   Clarinet        M
24  16  1st degree diploma   Piano           M


Stimulus validation

Pure tones were used because their waveforms consist of a single frequency. However, because they tend to fuse perceptually when presented simultaneously, and because their pitch is processed with greater difficulty as they have no harmonics (Moore & Gockel, 2011), a validation test assessing the discriminability of the single tones in a chord was performed in an independent group of skilled musicians.

Eighteen musicians (nine men and nine women) ranging in age between 19 and 58 years (mean age = 31.4 years) participated in the stimulus validation. All had academic degrees of the first or second level and were professional musicians (by study or work activity). Their age at onset of musical training varied between 4 and 15 years (mean = 8 years), and they had been practicing for an average of 23.3 years.

Musicians were instructed to listen to each clip through a set of headphones a maximum of three times. After listening, they had to decide how many sounds they had perceived, knowing that each chord might be formed of two to six sounds. Pearson's correlations were computed between the averaged estimated number and the real number of partials of which the chords were made up. In both cases, the two measures correlated positively (P < 0.05): r = 0.52 for dissonant chords and r = 0.71 for consonant chords (see Fig. 3). The slightly lower correlation for dissonant sounds can be explained by the fact that dissonant chords close in frequency presented microtone intervals between the composing sounds (the shortest interval being a quartertone, i.e. 50 cents). Although there was a significant correlation between the estimated and the real number of partials composing the chords, the listeners tended to underestimate the number of partials, especially for very complex chords (4–6 units). The underestimation increased with the number of partials (from 0.5 to 1.5 units).
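The validation analysis thus reduces to correlating, across chords, the mean estimated number of tones with the true number. A minimal sketch with SciPy (the arrays below are toy values for illustration, not the study's data):

```python
from scipy.stats import pearsonr

# True number of pure tones per chord vs. mean estimate across raters
# (toy values for illustration only).
true_counts = [2, 2, 3, 3, 4, 5, 5, 6]
mean_estimates = [2.0, 2.1, 2.7, 3.0, 3.3, 4.1, 4.0, 4.6]

r, p = pearsonr(true_counts, mean_estimates)
print(f"r = {r:.2f}, P = {p:.4f}")  # the study reports r = 0.52 (dissonant) and 0.71 (consonant)
```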

Procedure

Participants sat comfortably on a chair inside an electrically shielded, anechoic chamber for electroencephalogram (EEG) testing. They faced (through a glass window) a PC screen located 120 cm from their eyes. A bright visual stimulus, the xylophone depicted in Fig. 2 (middle row), was permanently projected on the screen during the experimental sequences to prevent the generation of artifactual alpha EEG waves due to a lack of visual stimulation. Participants were asked to maintain fixation at the centre of the screen and to avoid any eye or body movements during EEG recording. Participants wore a set of headphones (Sennheiser 202 model) for auditory stimulation. Stimuli were presented in random order for 3000 ms with an ISI of 1500 ± 200 ms (see Fig. 2, bottom row). Stimuli of the four classes were equally represented in eight sequences of 25 stimuli, separated by pauses of a couple of minutes. The whole recording session lasted about 45 min. In every stimulus sequence, participants were presented with two or three targets (chords with a harp timbre).

The experimental session was preceded by a training phase to familiarize subjects with the experimental setting. The training phase included the presentation of chords not used in the experiment, intermixed with the target sounds. Participants were given written instructions on the task requirements, which consisted of pressing a joypad button (with the right or left index finger) as accurately and quickly as possible whenever they perceived the target stimulus. The response hand order was counterbalanced across sequences. Stimulus presentation was randomized and counterbalanced across subjects. Each sequence began with the presentation of the words 'Ready', 'Set' and then 'Go' on the screen, written in capital letters, and ended with the word 'Thanks' after each sequence.

Electroencephalogram recording and analysis

Electroencephalogram data were continuously recorded from 128 scalp sites according to the 10-5 International System (Oostenveld & Praamstra, 2001) at a sampling rate of 512 Hz. Horizontal and vertical eye movements were also recorded. Averaged ears served as the reference lead.

Fig. 1. Some examples of stimuli for each of the various categories. Cons, consonant chords; dis, dissonant chords.


The EEG and electrooculogram were amplified with a half-amplitude bandpass of 0.016–70 Hz. Electrode impedance was maintained below 5 kΩ. EEG epochs were synchronized with the onset of stimulus presentation. Computerized artefact rejection was performed prior to averaging to discard epochs in which eye movements, blinks, excessive muscle potentials or amplifier blocking occurred. The artefact rejection criterion was a peak-to-peak amplitude exceeding ±50 µV, which resulted in a rejection rate of ~5%. Event-related potentials (ERPs) from −100 ms to 3000 ms after stimulus onset were averaged offline. ERP components were identified and measured, with respect to the average baseline voltage over the interval from −100 to 0 ms, at the scalp sites and latencies at which they reached their maximum amplitude.
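A minimal NumPy sketch of the epoching, peak-to-peak artefact rejection and baseline referencing described above (the array layout, variable names and exact rejection convention are assumptions, not the authors' pipeline):

```python
import numpy as np

FS = 512               # sampling rate (Hz)
PRE, POST = 0.1, 3.0   # epoch from -100 ms to 3000 ms around stimulus onset
REJECT_UV = 50.0       # peak-to-peak rejection criterion (µV), as described above

def epoch_and_reject(eeg_uv: np.ndarray, onsets: list[int]) -> np.ndarray:
    """eeg_uv: (n_channels, n_samples) continuous EEG in µV; onsets: sample indices."""
    n_pre, n_post = int(PRE * FS), int(POST * FS)
    kept = []
    for s in onsets:
        ep = eeg_uv[:, s - n_pre : s + n_post].copy()
        # Discard the epoch if any channel's peak-to-peak amplitude exceeds the criterion
        if (ep.max(axis=1) - ep.min(axis=1)).max() > REJECT_UV:
            continue
        # Reference voltages to the mean of the -100 to 0 ms pre-stimulus baseline
        ep -= ep[:, :n_pre].mean(axis=1, keepdims=True)
        kept.append(ep)
    return np.stack(kept)  # (n_epochs, n_channels, n_samples), ready for averaging
```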

swLORETA source localization

For each group of participants, low-resolution electromagnetic tomographies (LORETA) were performed on the ERPs to consonant and dissonant chords in the N1 latency range (100–160 ms) and on difference waves obtained by subtracting the ERPs evoked by close frequencies from those evoked by far frequencies between 250 and 350 ms. LORETA (Pascual-Marqui et al., 1994) is a discrete linear solution to the inverse EEG problem and corresponds to the 3D distribution of neuronal electrical activity that has maximally similar (i.e. maximally synchronized) orientation and strength between neighbouring neuronal populations (represented by adjacent voxels). In this study, an improved version of the standardized weighted LORETA was used (Palmero-Soler et al., 2007).

Fig. 2. (Top) Temporal distribution of average sound energy/intensity for chords composed of few (top) vs. many (bottom) sounds. Note how intensity was attenuated at the beginning of the third second. (Middle) Coloured background stimulus used to provide continuous and pleasant visual stimulation during auditory recording, to prevent excessive alpha artefacts. For this aim, a xylophone (musical instrument) is depicted with bright colours, stripes of varying spatial frequencies and orientations, and varying shapes and sizes. (Bottom) Schematic description of the experimental procedure, stimulus duration and ISI.


This version, called swLORETA, incorporates a singular value decomposition-based lead field-weighting method. The source space properties included a grid spacing (the distance between two calculation points) of 5 mm and an estimated signal-to-noise ratio (which defines the regularization; a higher value indicates less regularization and therefore less blurred results) of 3. The use of a value of 3–4 for the computation of the SNR in Tikhonov's regularization produces superior accuracy of the solutions for the inverse problems assessed. swLORETA was performed on the group data (grand-averaged data) to identify statistically significant electromagnetic dipoles (P < 0.05), in which larger magnitudes correlate with more significant activation. The data were automatically re-referenced to the average reference as part of the LORETA analysis. A realistic boundary element model (BEM) was derived from a T1-weighted 3D MRI data set through segmentation of the brain tissue. This BEM model consisted of one homogeneous compartment comprising 3446 vertices and 6888 triangles. Advanced Source Analysis (ASA) employs a realistic head model of three layers (scalp, skull and brain) created using the BEM. This realistic head model comprises a set of irregularly shaped boundaries and the conductivity values for the compartments between them. Each boundary is approximated by a number of points, which are interconnected by plane triangles. The triangulation leads to a more or less evenly distributed mesh of triangles as a function of the chosen grid value; a smaller grid spacing results in finer meshes, and vice versa. With the aforementioned three-layer realistic head model, the segmentation is assumed to include the current generators of the brain volume, comprising both grey and white matter. Scalp, skull and brain conductivities were assumed to be 0.33, 0.0042 and 0.33, respectively (Zanow & Knösche, 2004). The source reconstruction solutions were projected onto the 3D MRI of the Collins brain, provided by the Montreal Neurological Institute. The probabilities of source activation based on Fisher's F-test were provided for each independent EEG source, the values of which are indicated on a so-called 'unit' scale (the larger, the more significant). Both the segmentation and the generation of the head model were performed using the ASA software program (Advanced Neuro Technology, ANT, Enschede, the Netherlands).

Data analysis

ERPs were separately averaged for Consonant vs. Dissonant chords (N = 100 vs. N = 100). Chords were made of Few (2–3, N = 100) or Many (5–6, N = 100) composing sounds. For dissonant sounds, chords were made of Near composing frequencies lying within 1.5 tones (N = 50), or of Far composing frequencies spanning at least 1.5 octaves (N = 50).

Event-related potential components elicited within the first second of stimulation were identified and measured where and when they reached their maximum amplitude at the scalp surface.

The peak amplitude of the central N1 (on Consonant/Dissonant ERP averages) was measured at central and frontocentral sites (C1, C2, FCC1h, FCC2h) within the 110–170 ms time window. The time window was centred around the latency experimentally found (i.e. ~140 ± 30 ms). The mean area amplitude of the anterior P300 (on Consonant/Dissonant ERP averages) was measured at frontocentral sites (FFC1h and FFC2h) within the 300–400 ms time window. This response, also called P3a, is thought to reflect automatic orienting towards no-go stimuli, with a frontal distribution (Borchard et al., 2015). It was expected to peak much earlier than the parietal P300 to targets (also known as P3b), which is thought to reflect voluntary attention and attentional selection. The mean area amplitude of the central P300 (on Few/Many ERP averages) was measured at central and frontocentral sites (C1-C2, FCC1h-FCC2h) within the 300–400 ms time window. The mean area amplitude of N2 (on Near/Far dissonant averages) was measured at the FPz site within the 250–350 ms time window. The mean area amplitude of the central P300 (on Near/Far dissonant averages) was measured at the CPz site within the 250–350 ms time window.

musicians and 12 controls were analysed with repeated multifactorialANOVAs, the remaining data were discarded for excessive EEG or

Fig. 3. Results of the validation test. Pearson's correlations between the real number of pure sounds composing the chords and their perceived numerosity, for dissonant (left) and consonant (right) chords.


The ANOVAs included, for all ERP components, the between-group factor of Expertise (musicians vs. controls) and two within-group factors, Electrode (depending on the ERP component of interest) and Hemisphere (left, right). For the N1 and anterior P300 components, the factor Consonance (consonant, dissonant) was included. For the central P300, the factors Consonance (consonant, dissonant) and Complexity (few, many composing sounds) were included. For N2 and the central P300, the factor of frequency Proximity (near, far) was included. Duncan post hoc comparisons among means were performed.

Results

Response times to rare targets (harp arpeggios, 10% probability) averaged 593.7 ms (musicians: 588.7 ms, SE = 32.4 ms; non-musicians: 598.8 ms, SE = 29 ms) and did not differ statistically (P = 0.82) across groups.

The ERP waveforms recorded to rare targets and to consonant chords in the two groups of participants are displayed in Fig. 4 for comparison. It is possible to appreciate how selective attention greatly enhanced both the perceptual and the cognitive components of the auditory ERPs. However, because of the huge difference in stimulus probability and the unequal number of averaged trials, the two conditions were not statistically compared. Highlighted are two late positive potentials: the anterior P300 to non-targets (P3a according to Borchard et al., 2015), larger in musicians than in controls, and the parietal P3b, indexing voluntary attention to targets (Polich, 2007). The latencies of the anterior P300 at FCz were as follows: musicians = 330 ms for consonant chords, 342 ms for dissonant chords; non-musicians = 318 ms for consonant chords, 320 ms for dissonant chords. The P3b to targets was much later in latency, as predicted by the current literature. The latency of P3b at the Pz site was about 484 ms (musicians: 443 ms, non-musicians: 488 ms).

Figure 5 shows the grand-averaged ERP waveforms recorded in the two groups as a function of chord consonance. Because stimuli were accurately balanced across classes for perceptual characteristics (intensity, number of composing sounds, pitch, intervals, tonal range, timbre), except for consonance/dissonance, ERP waveforms almost overlapped at posterior areas, but they showed two specific effects of consonance. These effects were observed at an early (N1) and a later (N2, P300) processing stage at anterior scalp areas. The P300 and later slow waves were also largely modulated by musical expertise.

Auditory N1 (138 ms): effect of consonance

The auditory N1 peaked at approximately 135 ms in musicians and at 141 ms in controls, with no significant difference in latency. The ANOVA performed on N1 peak amplitude values showed (for both groups) a significant Consonance × Hemisphere interaction (F(1,22) = 5.6; P < 0.03), indicating that N1 amplitudes were larger in response to consonant than to dissonant stimuli, with the difference reaching significance over the left hemisphere (Dissonant: RH = −2.21 µV, SE = 0.37; LH = −1.99 µV, SE = 0.35; P < 0.05; Consonant: LH = −2.50 µV, SE = 0.38; RH = −2.51 µV, SE = 0.4). Overall, N1 was larger over the right than the left hemisphere (P < 0.01), as visible in the topographical maps of Fig. 6.

To identify the neural sources of the N1 responses to chords, two swLORETA solutions were applied to the grand-averaged waveforms recorded in the two groups of participants as a function of chord consonance (or lack of it) in the 110–160 ms latency range.

Fig. 4. Grand-average ERPs recorded at anterior-frontal, central, centroparietal and parietal midline sites in controls (C) and musicians (M). ERPs to rare targets (harp arpeggios) are in red, while ERPs to consonant non-target chords are in blue (musicians) and black (controls).


Table 2 lists the electromagnetic dipoles that were the sources of the N1 surface responses to chords. In musicians, brain activity was much stronger to consonant than to dissonant chords. Activity in the right auditory cortex was involved with consonant chords, whereas the left auditory cortex was involved with dissonant chords (see Fig. 7), along with the thalamus and regions representing musical affective properties (limbic and cingulate cortex). In controls, the effect of chord consonance was much weaker, but a similar pattern of hemispheric asymmetry for sound processing was observed (a left vs. right temporal engagement).

Anterior P300 (300–400 ms): effect of consonance

The ANOVA performed on the P300 amplitude values recorded at frontocentral sites (also known as P3a) showed a significant effect of Expertise (F(1,22) = 6.0; P < 0.025), with larger P300 responses recorded in musicians (1.89 µV, SE = 0.48) than in controls (0.26 µV, SE = 0.46), which is evident in Fig. 8. The Consonance factor was also significant (F(1,22) = 5.6; P < 0.03), indicating larger P300 amplitudes in response to consonant chords (1.40 µV, SE = 0.33) than to dissonant chords (0.75 µV, SE = 0.39). A simple-effects analysis performed on the Consonance × Group interaction (F(1,11) = 10.7; P < 0.0075) showed a significant effect of Consonance in the musician group, with a larger P300 evoked in response to consonant chords (2.39 µV, SE = 0.48) than to dissonant chords (1.39 µV, SE = 0.56), but not in controls (consonant = 0.41 µV, SE = 0.48; dissonant = 0.11 µV, SE = 0.53).

Central P300 (300–400 ms): effect of sound complexity

The ANOVA performed on the P300 amplitude values recorded at central sites in response to chords made of few (2–3) or many (5–6) composing tones revealed a significant effect of Complexity (F(1,22) = 12.25; P < 0.002), with larger P300 amplitudes evoked in response to the louder 'Many' stimuli (2.03 µV, SE = 1) than to the softer 'Few' stimuli (0.85 µV, SE = 0.87) in both groups (see maps and waveforms in Fig. 9). The ANOVA also revealed a significant main effect of Group (F(1,22) = 4.83, P < 0.035), with larger responses recorded in musicians (2.07 µV, SE = 0.41) than in controls (0.81 µV, SE = 0.40). The Group × Complexity × Electrode interaction was also significant (F(1,22) = 4.58; P < 0.05). The relevant post hoc comparisons showed that the group differences were much larger and significant (P < 0.05) in response to many sounds (at frontocentral sites) than to few sounds. The mean and SE values for this interaction are presented in Fig. 10.

N2 (250–350 ms): effect of frequency proximity

For dissonant stimuli, an ANOVA was performed on the mean N2 values recorded at FPz in response to near tones (with a frequency distance between the highest and the lowest pitch of the chord smaller than 1.5 tones) vs. far tones (with a frequency distance of at least 1.5 octaves between the highest and the lowest chord pitch). Figure 11A and B show the grand-averaged ERP waveforms recorded in the two groups as a function of stimulus consonance and frequency proximity.

Fig. 5. Grand-average ERPs recorded at the left and right anterior, central and occipital scalp sites in both participant groups as a function of chord consonance/dissonance.


The ANOVA revealed a significant effect of the factor Frequency proximity (F(1,22) = 14.57; P < 0.011), showing larger N2 amplitudes in response to Near (−1.95 µV, SE = 0.49) than to Far (−0.44 µV, SE = 0.45) dissonant stimuli. In addition, a significant Proximity × Group interaction (F(1,22) = 4.20; P = 0.05) indicated a significant effect of Proximity (Near = −2.63 µV, SE = 0.69; Far = −0.3 µV, SE = 0.64), but only in the musician group (P < 0.002). N2 amplitude did not differ (P = 0.6) as a function of frequency proximity in the controls (Near = −1.28 µV, SE = 0.7; Far = −0.57 µV, SE = 0.64).

Central P300 (250–350 ms): effect of frequency proximity

The ANOVA performed on the P300 amplitude values recorded at the CPz site showed a significant effect of the factor Frequency proximity (F(1,22) = 32; P < 0.00001), with larger P300 amplitudes in response to Far (2.17 µV, SE = 0.32) than to Near stimuli (0.39 µV, SE = 0.34). The factor Expertise was also significant (F(1,22) = 4.12; P < 0.05), indicating larger P300s evoked in musicians (1.87 µV, SE = 0.41) than in controls (0.69 µV, SE = 0.41). Expertise also interacted significantly with Frequency proximity (F(1,22) = 4.4; P < 0.05). The relevant post hoc comparisons indicated a significant difference between the P300s evoked in response to Near and Far frequency stimuli in musicians (P < 0.0002; Far = 3.1 µV, SE = 0.45; Near = 0.65 µV, SE = 0.48), with only a tendency towards significance in controls (P = 0.089; Far = 1.248 µV, SE = 0.45; Near = 0.13 µV, SE = 0.48). This difference in sensitivity is evident in the waveforms presented in Fig. 11A and B.

To identify the neural sources of the N2 responses to chords whose composing frequencies included fractions of a tone, two difference waveforms were computed (one per group) by subtracting the ERPs evoked in response to near frequencies from the ERPs evoked in response to far frequencies. Two swLORETA solutions were then applied to the two waveforms in the 250–350 ms latency range, which corresponded to the earliest N2 time window (see Fig. 12). Table 3 lists the electromagnetic dipoles identifying the N2 voltage sources recorded in response to chords with quartertones. In musicians, brain activity was much stronger in magnitude than in controls, and the sources included the left auditory temporal cortex and affective brain regions possibly supporting the psychological and aesthetic appreciation of musical information, namely the right orbitofrontal cortex (BA11), the left anterior and posterior cingulate cortices, and the left uncus. In controls, the processing of quartertones was associated with activation of the temporal regions bilaterally and of both limbic and frontal areas (medial frontal, anterior cingulate, uncus) on the right side, possibly indicating a more negative psychological appreciation.

Discussion

The analysis of ERP amplitudes revealed important group differences that reflect the manner in which skilled vs. naïve brains process the sound frequency of pure tone chords. The numerosity of composing sounds (chord complexity) had a stronger effect in musicians than in controls, with the largest group difference associated with the processing of more complex chords, regardless of their consonance. This effect was indexed by the amplitude of the frontocentral P300 response, which was therefore sensitive to stimulus complexity and perceptual characteristics such as intensity (multitonal chords were louder than bichords). Since the processing of sound intensity is not known to be modulated by musical expertise, we hypothesize that the increased frontocentral P300 in response to complex relative to simple chords in musicians might reflect a more sophisticated tonal representation of chords in working memory (George & Coch, 2011; Seppänen et al., 2012).

The effect of the frequency proximity of composing tones in dissonant chords was significant only in musicians, for both the N2 and P300 components. This effect of musical expertise on musical chord processing fully agrees with the previous neuroscientific literature (e.g. Schön & Besson, 2005; Minati et al., 2009; Virtala et al., 2012, 2014; Kung et al., 2014).

In this study, the auditory N1, representing the first cortical response to sounds (Näätänen & Picton, 1987), was larger at frontocentral sites (Näätänen et al., 1988) and in response to consonant than to dissonant chords in both groups, thus suggesting that the universality of sensory consonance may be an emergent property of the nervous system. This difference was significantly larger over the left than the right hemisphere, possibly indicating a hemispheric specialization for processing the frequency of pure tone chords.

Fig. 6. (Top) ERPs recorded at the left central sites indicating the modulation of the auditory N1 component. (Bottom) Isocolor voltage topographic maps of the N1 recorded in musicians and controls within the 110–170 ms time window, as a function of chord harmonicity (consonance/dissonance).


Although a consonance effect was present in both groups, it was macroscopically greater in musicians than in controls, as shown by the ERP and source reconstruction data. The swLORETA analysis showed that the response in musicians was much stronger to consonant than to dissonant chords. This analysis also revealed that activity of the right auditory cortex was associated with consonant chords, whereas activity of the left auditory cortex was associated with dissonant chords, along with thalamic, limbic and cingulate regions representing musical affective properties (BA23/24) (Blood et al., 1999; Koelsch, 2005; Sel & Calvo-Merino, 2013). Interestingly, in both groups, the left uncus was activated during the perception of consonant but not dissonant chords at the N1 level, possibly suggesting a positive emotional connotation (Park et al., 2009) of the perceptual experience of consonance.

In our study, N1 peaked at approximately 135–140 ms.

Fig. 7. swLORETA inverse solution performed on the brain activity recorded during the 110–160 ms time window in response to consonant and dissonant chords in musicians. The different colours represent differences in the magnitude of the electromagnetic signal (nAm). The dipoles are shown as arrows and indicate the position, orientation and magnitude of the dipole modelling solution applied to the ERP waveform in the specific time window. L = left; R = right; numbers refer to the displayed brain slice in the MRI imaging plane.

Table 2. Active electromagnetic dipoles and Talairach coordinates (in mm) of the sources attributed to the surface voltages recorded for N1 in response to consonant and dissonant chords, regardless of the numerosity or closeness of composing tones, between 110 and 160 ms in musicians and controls

Magn. T-x T-y T-z Hem. Lobe Gyrus BA Function

MUSICIANS (Consonant)
6.38   1.5  −11.5    9.9  R  Sub-lobar  Thalamus            –      Sound analysis
5.71   1.5   −5.6   28.5  R  Limbic     Cingulate           23/24  Music/Emotions
4.79  −8.5   −0.6  −28.2  L  Limbic     Uncus               28     Music
4.07  70.5  −36.6   −1.3  R  Temporal   Middle Temporal     21     Sound analysis

MUSICIANS (Dissonant)
4.21  11.3  −11.5    9.9  R  Sub-lobar  Thalamus            –      Sound analysis
3.93   1.5  −20.3   26.8  R  Limbic     Cingulate           23     Music/Emotions
3.14 −58.5   −9.4  −14.0  L  Temporal   Inferior Temporal   21     Sound analysis

CONTROLS (Consonant)
7.61  11.3   −2.1  −13.3  R  Limbic     Parahippocampal     34     Emotional memory
7.43  −8.5   −0.6  −28.2  L  Limbic     Uncus               28     Music/Emotions
7.32  50.8   −0.6  −28.2  R  Temporal   Middle Temporal     21     Sound analysis
6.09   1.5    2.4   29.4  R  Limbic     Anterior Cingulate  24     No-Go inhibition

CONTROLS (Dissonant)
6.25 −38.5    9.1  −27.5  L  Temporal   Superior Temporal   38     Sound analysis
6.10  21.2    5.3    2.7  R  Sub-lobar  Putamen             –      Temporal regularity
5.92   1.5   38.2  −17.9  R  Frontal    Medial Frontal      11     No-Go inhibition
5.79  11.3   35.3    5.3  R  Limbic     Anterior Cingulate  24     No-Go inhibition
5.27 −58.5  −26.5   −0.6  L  Temporal   Middle Temporal     21     Sound analysis

Magn., magnitude (in nAm); Hem, hemisphere; BA, Brodmann area.

Fig. 8. (Top) ERPs recorded at the right frontocentral site indicating the modulation of the P300 response (if any) in both participant groups as a function of chord consonance/dissonance. (Bottom) Isocolor voltage topographic maps of the P300 recorded in musicians and controls within the 300–400 ms time window as a function of chord harmonicity (consonance/dissonance).


the lack of harmonics of pure tones along with the low playbacklevel (42.49 dB). The intensity was set as such to reduce the earpain produced by beats. According to the literature, sounds with har-monics are preferred to pure tones, in terms of neural computing.Electrophysiological evidence for such a preference come from bothERP studies in humans and neurophysiological recordings in cats.For example, Nikjeh et al. (2009) found later P1 latencies to purethan harmonic tones in musicians. Carrasco & Lomber (2011)recorded the neuronal activation times to simple, complex and natu-ral sounds in cat primary and non-primary auditory cortex. Theyfound that signals with a broadband spectrum (noise bursts) inducefaster cortical responses than signals with narrow bands (pure tones).It is also known that the threshold sensitivity for frequency discrimi-nation (the JND; just noticeable difference) is lower for complexthan pure tones (Kollmeier et al., 2008).In the next time window (250–350 ms) at frontocentral sites, only

in musicians the N2 was found to differ as a function of pitch prox-imity, being it much larger for near dissonant tones than for far dis-sonant tones. The presence of this perceptual response only inmusicians indicates that musical expertise can enhance the ability todiscriminate frequency, especially for quartertones that stimulate

overlapping neural fibres. A similar modulation of another anteriornegativity, namely a MMN, resulting from musical expertise hasbeen shown in many ERP studies (e.g. Vuust et al., 2012). This pat-tern of results, which included a lack of a N2 to harmonically devi-ant chords in non-musicians and an N1 modulation for consonantvs. dissonant chords in both groups, shared some similarity with thefindings reported by Virtala et al. (2012). In their study, Virtalaet al. investigated ERPs evoked in response to minor vs. majorchords and found N1 modulation in both musicians and controlsand an MMN effect in musicians but not controls. This frameworksuggests a difference between an automatic sensory response (N1)and a more complex processing that reflects cortical representationof pre-attentive sound (N€a€at€anen, 2008) that is heavily modulatedby musical education and exposure to music.In our study, a frontocentral P3 response (300–400 ms) was

In our study, a frontocentral P3 response (300–400 ms) was observed in musicians that was larger for consonant than for dissonant chords. However, this discriminative response was not observed in controls, as demonstrated by simple-effect analyses, further indicating a finer ability of musicians to perceive harmonic tonal and atonal relationships. This interpretation is supported by ERP evidence demonstrating the role of the P300 in indexing conscious perceptual processing and working memory (George & Coch, 2011; Seppänen et al., 2012): the larger the P300 deflection, the more accurate and detailed the representation. Indeed, P300 amplitude has been shown to reflect listeners' sensitivity to fine-grained differences in auditory processing (Toscano et al., 2010). One might argue that near dissonant stimuli (featuring microtones) also contained temporal fluctuations (due to beats) in addition to spectral information; in any case, musical expertise enhanced the ability to process this type of information. Overall, these findings indicate a finer and more tuned frequency analysis in the skilled musician's brain.
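As a minimal sketch of how such a component is conventionally quantified, the P300 can be measured as the mean voltage in the 300–400 ms window at a frontocentral site; the sampling rate, epoch shapes and channel index below are hypothetical placeholders, not the study's recording parameters.

```python
import numpy as np

fs = 512                               # assumed sampling rate (Hz)
onset = int(0.1 * fs)                  # assumed 100 ms pre-stimulus baseline
rng = np.random.default_rng(0)
# Placeholder epochs: (trials, channels, samples); real data would come
# from the recording pipeline.
epochs = rng.standard_normal((100, 32, int(0.9 * fs)))

def mean_amplitude(data: np.ndarray, ch: int, tmin: float, tmax: float) -> float:
    """Trial-averaged mean voltage at channel `ch`, tmin-tmax s post-stimulus."""
    lo, hi = onset + int(tmin * fs), onset + int(tmax * fs)
    return float(data[:, ch, lo:hi].mean())

# P300 quantified as the mean amplitude in the 300-400 ms window at a
# (hypothetical) frontocentral channel index.
p300 = mean_amplitude(epochs, ch=10, tmin=0.300, tmax=0.400)
print(f"mean P300 amplitude: {p300:.3f} (arbitrary units)")
```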

Fig. 9. (Top) ERPs recorded at the midline central site indicating modulation of the P300 response in the two groups of participants as a function of the numerosity of composing sounds. (Bottom) Isocolor voltage topographic map of the P300 recorded in musicians and controls within the 300–400 ms time window as a function of tone numerosity (few vs. many).

Fig. 10. Amplitude values of the P300 response recorded at central and frontocentral sites in the two groups as a function of the numerosity of chord composing sounds. Group differences increased when many composing sounds had to be processed simultaneously.


Consistent with these results is psychophysical evidence reported by Micheyl et al. (2006), who measured pitch discrimination thresholds for pure and complex tones in 30 classical musicians and in untrained listeners: the thresholds of the untrained listeners were more than six times larger than those of the classical musicians. Furthermore, our present results agree with the ERP findings reported by Nikjeh et al. (2009), who measured MMN and P3a responses to pure tones, harmonic complexes and speech syllables in trained musicians vs. non-musicians. In that study, musicians had shorter MMN latencies for all deviants (harmonic tones, pure tones and speech).

In general, a sensitivity to consonance can also be found in non-musicians, but with simpler stimuli (two-tone chords) and intervals > 1 tone. For example, Itoh et al. (2003) found larger P2 and smaller N2 responses to dissonant vs. consonant bichords in non-musicians, but observed no N1 effect. Interestingly, post hoc tests performed in that study revealed that, for both P2 and N2, the waveforms were most negative for the minor second (1 semitone) and least negative (or most positive) for the perfect fifth (7 semitones), which resembles our near vs. far distinction.

To investigate the extent to which the perception of dissonance depended on frequency proximity and beats (near dissonant chords) or on consonance per se (far dissonant chords), a swLORETA source reconstruction was applied to the N2 response to quartertones minus far chords. This contrast highlighted a massive activation of limbic and temporal brain areas that differed between the two groups. Overall, quartertone perception was associated with increased activation of the left cingulate cortex and left uncus in musicians, but with the right cingulate cortex and right uncus in controls. This right hemispheric asymmetry in controls might reflect an emotional response to negative tones, linked to tension, discomfort or annoyance, and was likely due to the presence of beats in the sound spectrum of this class of inharmonic chords. Indeed, it is generally agreed that the left hemisphere is dominant for processing positive emotions, whereas the right hemisphere is dominant for processing negative emotions (Sackeim et al., 1982; Beraha et al., 2012). The motivational approach-withdrawal model (Davidson, 1995; Demaree et al., 2005) proposes that happiness and pleasure (classified as approach emotions because they drive the individual towards environmental stimuli; in the present case, consonant chords) activate the left hemisphere more than the right hemisphere. In contrast, pain and disgust (in the present case, annoyance at the dissonant chords and their beats) are associated with withdrawal behaviours that tend to lead the individual away from environmental sources of aversive stimulation, and would activate the right hemisphere more than the left hemisphere.
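The logic of this contrast amounts to a difference wave restricted to the N2 window, which is then submitted to the inverse solution. The sketch below illustrates only the difference-wave step (array shapes and sampling rate are hypothetical; the swLORETA inverse computation itself is not reproduced).

```python
import numpy as np

fs = 512                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
# Placeholder condition-averaged ERPs: (channels, samples), time zero at onset.
erp_quartertone = rng.standard_normal((32, fs))
erp_far = rng.standard_normal((32, fs))

# 'Quartertones minus far chords' difference potentials...
diff_wave = erp_quartertone - erp_far

# ...restricted to the N2 window (250-350 ms) before source reconstruction.
lo, hi = int(0.250 * fs), int(0.350 * fs)
n2_contrast = diff_wave[:, lo:hi]
print(n2_contrast.shape)                  # (32, 51)
```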

Fig. 11. (A) MUSICIANS. Grand-average ERP waveforms recorded in skilled professional musicians as a function of stimulus harmonicity and frequency closeness. (B) CONTROLS. Grand-average ERP waveforms recorded in non-musicians as a function of stimulus harmonicity and frequency closeness.


This framework is highly compatible with the activation of the left uncus in both groups during the perception of consonant chords at the N1 level, as previously discussed.

The limbic activation in response to quartertones in musicians at the N2 level was bilateral and included the left uncus and the left cingulate cortex, possibly indicating a less negative emotional response to dissonance than in controls. This pattern of activation might be due to their cognitive and professional interests and an increased appreciation of dissonant material. In this regard, cross-cultural studies with atonal and non-Western music have shown that skilled listeners learn to comprehend and appreciate non-Western harmonic structures through repeated exposure and music education (see Meyers, 2012 for a review). Notwithstanding that, the main source of intracranial activity to quartertones in musicians was the right orbitofrontal cortex (BA 11), possibly indexing the response to beats and to unusual acoustic stimulation uncharacteristic of the Western tonal system, as has been demonstrated in the literature (Zentner et al., 2008; Sugimoto et al., 2010; Johnson-Laird et al., 2012). The relation between brain activation and aesthetic experience was not directly investigated in this study, but it could be verified in a follow-up study using aesthetic judgment experiments with the same stimuli.

One of the most relevant findings of this study is the right vs. left hemispheric specialization for the processing of consonant vs. dissonant chords that was evident at the sensory level (N1 generators in BA21/38) and reversed direction at the later N2 perceptual stage (250–350 ms) in musicians. Musicians exhibited a strong specialization of the left medial/superior temporal cortex for processing quartertones and near tonal frequencies. Overall, neurometabolic studies in the literature suggest that sounds with complex spectral structures activate associative auditory areas (BA21/38) in both hemispheres (e.g. Mirz et al., 1999; Specht & Reul, 2003). The left-sided lateralization for quartertone processing in musicians fits with previous literature suggesting a specialization of left auditory areas as a result of musical education. For example, the MEG study by Pantev et al. (1998) showed that micro populations in the left auditory cortex that processed piano tones were enlarged by approximately 25% in musicians compared with control subjects who had never played an instrument. Quite consistently, Tervaniemi et al. (2011) found a more pronounced left hemispheric specialization for chord processing in musicians than in controls. They recorded the magnetic counterpart of the electrical mismatch negativity (MMNm) in professional musicians and non-musicians during the perception of short sounds with frequency and duration deviants (an easy task) as opposed to C major chords with C minor chords as deviants (a more complex task). Non-musicians exhibited less pronounced left hemispheric activation during chord discrimination than did musicians.

Fig. 11. Continued.


Interestingly, there was no difference in the hemispheric lateralization of auditory cortex activity between the two groups in the easy condition (simplified spectral sound structure). Minati et al. (2009) found that musicians had larger hemodynamic responses in the inferior and middle frontal gyri, premotor cortex and inferior parietal lobule when comparing consonant with dissonant four-tone chords. These blood oxygen level dependent (BOLD) activation differences were right-lateralized in non-musicians and more symmetric in musicians. Similarly, Kung et al. (2014) found a right-sided lateralization in controls and a more bilateral pattern of activation for processing dissonant chords. Interestingly, Bidelman & Grall (2014) compared the neural processing of consonant and dissonant dyads (i.e. two-tone intervals) within the same octave (unison to octave, in semitone steps) in nine right-handed non-musicians and found that frequency coding emerged pre-attentively within approximately 150 ms of pitch onset and mainly involved the right superior temporal gyrus. A positron emission tomography (PET) study by Peretz et al. (2001) that included amusic patients and controls found that the parahippocampal and precuneus areas showed cerebral blood flow (CBF) increases as a function of increasing dissonance, whereas the bilateral orbitofrontal, medial subcallosal cingulate and right frontal polar cortices showed CBF decreases as a function of increasing dissonance. In contrast, activity in the superior temporal cortices was observed bilaterally and independently of dissonance level. The authors concluded that, in controls, dissonance might have been computed bilaterally in the superior temporal gyri.

The left hemispheric specialization for processing microtones (and, more generally, for dissonant chords) found in this study shares some similarities with the analogous hemispheric asymmetry for high vs. low spatial frequency components of visual information (e.g. Sergent & Hellige, 1986; Proverbio et al., 1995; see Proverbio et al., 1997 for a review). This hemispheric asymmetry in the sensory coding of stimulus properties is also thought to contribute, for example, to the specialization of the left fusiform gyrus for letters and of the right fusiform gyrus for face processing (Pinel et al., 2015).

Fig. 12. SwLORETA inverse solution performed on brain activity recorded during the 250–350 ms time window in response to inharmonic 'few minus many sounds' in both participant groups. The colours represent differences in the magnitude of the electromagnetic signal (nAm). The dipoles are shown as arrows and indicate the position, orientation and magnitude of the dipole modelling solution applied to the ERP waveform in the specific time window. L, left; R, right; numbers refer to the brain slice displayed in the MRI imaging plane. A macroscopic group difference was evident in the sensitivity to microtones, with a different hemispheric asymmetry in the involvement of auditory and limbic regions between the two groups (left-sided in musicians and right-sided in controls).

Table 3. Active electromagnetic dipoles and Talairach coordinates (in mm) of the sources attributed to the surface voltages recorded for the N2 in response to close frequencies featuring quartertones (highly dissonant) between 250 and 350 ms in musicians (top) and controls (bottom)

Magn.   T-x     T-y     T-z     Hem.  Lobe       Gyrus                                BA

MUSICIANS
10.11   1.5     38.2    −17.9   RH    Frontal    Orbitofrontal                        11
10.10   −8.5    7.2     −12.6   LH    Frontal    Medial Frontal/Anterior Cingulate    25
10.09   −8.5    −0.6    −28.2   LH    Limbic     Uncus                                28
10.05   −58.5   −16.8   −14.8   LH    Temporal   Medial/Superior                      21/38
8.09    1.5     −20.3   26.8    RH    Limbic     Cingulate                            23
7.12    −8.5    −81.1   30.6    LH    Occipital  Cuneus                               19
7.09    −18.5   −58.9   14.5    LH    Limbic     Posterior Cingulate                  20

CONTROLS
7.89    21.2    9.1     −27.5   RH    Limbic     Uncus                                38
7.84    50.8    −0.6    −28.2   RH    Temporal   Medial/Superior                      21/38
7.16    1.5     2.4     29.4    RH    Limbic     Anterior Cingulate                   24
6.84    −58.5   −45.8   −9.5    LH    Temporal   Medial Temporal                      37
6.63    50.8    22.4    31.1    RH    Frontal    Medial Frontal                       9

Magn., magnitude (in nAm); Hem, hemisphere; BA, Brodmann area.


A similar asymmetry between the left and right hemispheres has been described for the local vs. global processing of hierarchical structures (e.g. Delis et al., 1986; Proverbio et al., 1998; Weissman & Woldorff, 2005). Interestingly, the left-sided hemispheric asymmetry for the ability to analytically process complex auditory information with a similar frequency spectrum might underlie the left hemispheric specialization for phonetic processing in STS areas (e.g. Giraud & Price, 2001). Finally, recent observations in a split-brain patient demonstrated that his left hemisphere was better able to discriminate dissonant (unpleasant) from consonant (pleasant) chords (Prete et al., 2014), thus supporting the hypothesis of a hemispheric asymmetry for the fine-grained analysis of auditory frequencies. Alternatively, the hypothesis of a left vs. right hemispheric asymmetry for temporal vs. spectral processing of auditory information might also be advanced (e.g. Zatorre et al., 2002), given that stimuli containing small tone intervals might generate temporal information related to beats.

In conclusion, this study provides direct evidence of an increased sensitivity to the number, frequency ratio (pitch differences) and harmonicity of pure tone chords in professional musicians as opposed to age-matched controls. This auditory specialization was associated with an increased left-lateralization of auditory processing involving the left medial temporal area. The lack of such processing in controls was associated with right-lateralized activation of limbic and affective brain areas, possibly reflecting a negative aesthetic appreciation of dissonant sounds.

Conflict of interest

The authors declare that the research was conducted in the absence of any real or perceived conflict of interest.

Acknowledgements

This study was funded by the 2014-ATE-0030 11324 grant 'Neuroscience of Music' from the University of Milano-Bicocca. We wish to express our gratitude to Ezia Rizzi for her technical support and to Alberto Zani for his valuable advice. We are deeply indebted to Mario Guido Scappucci, Andrea Noce, Linda Ittica Paganelli, Giulio Consolati and the anonymous musicians and students who volunteered for the study. The authors thank the anonymous reviewers for their helpful and constructive comments, which greatly contributed to improving the final version of the paper.

Abbreviations

BA, Brodmann area; BEM, boundary element model; BOLD, blood oxygen level dependent; CBF, cerebral blood flow; EEG, electroencephalogram; ERPs, event-related potentials; FFRs, frequency-following responses; fMRI, functional magnetic resonance imaging; MEG, magnetoencephalography; MMN, mismatch negativity; MR, magnetic resonance; MTG, medial temporal gyrus; NAA, N-acetylaspartate; PET, positron emission tomography; swLORETA, standardized weighted low-resolution electromagnetic tomography.

References

Aydin, K., Ciftci, K., Terzibasioglu, E., Ozkan, M., Demirtas, A., Sencer, S. & Minareci, O. (2005) Quantitative proton MR spectroscopic findings of cortical reorganization in the auditory cortex of musicians. Am. J. Neuroradiol., 26, 128–136.

Bailes, F., Dean, R.T. & Broughton, M.C. (2015) How different are our perceptions of equal-tempered and microtonal intervals? A behavioural and EEG survey. PLoS One, 10, e0135082.

Bendor, D. & Wang, X. (2005) The neuronal representation of pitch in primate auditory cortex. Nature, 436, 1161–1165.

Beraha, E., Eggers, J., Attar, C.H., Gutwinski, S., Schlagenhauf, F., Stoy, M., Sterzer, P., Kienast, T. et al. (2012) Hemispheric asymmetry for affective stimulus processing in healthy subjects: an fMRI study. PLoS One, 7, e46931.

Bidelman, G.M. & Grall, J. (2014) Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex. NeuroImage, 101, 204–214.

Bidelman, G.M. & Heinz, M.G. (2011) Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing. J. Acoust. Soc. Am., 130, 1488–1502.

Bidelman, G.M. & Krishnan, A. (2009) Neural correlates of consonance, dissonance, and the hierarchy of musical pitch in the human brainstem. J. Neurosci., 29, 13165–13171.

Bidelman, G.M. & Krishnan, A. (2011) Brainstem correlates of behavioral and compositional preferences of musical harmony. NeuroReport, 22, 212–216.

Blood, A.J., Zatorre, R.J., Bermudez, P. & Evans, A.C. (1999) Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat. Neurosci., 2, 382–387.

Borchard, J.P., Barry, R.J. & De Blasio, F.M. (2015) Sequential processing in an auditory equiprobable Go/NoGo task with variable interstimulus interval. Int. J. Psychophysiol., 97, 145–152.

Bowling, D.L. & Purves, D. (2015) A biological rationale for musical consonance. Proc. Natl. Acad. Sci. USA, 112, 11155–11160.

Burns, E.M. & Houtsma, A.J.M. (1999) The influence of musical training on the perception of sequentially presented mistuned harmonics. J. Acoust. Soc. Am., 106, 3564–3570.

Butler, J.W. & Daston, P.G. (1968) Musical consonance as musical preference: a cross-cultural study. J. Exp. Psychol., 79, 129–142.

Carrasco, A. & Lomber, S.G. (2011) Neuronal activation times to simple, complex, and natural sounds in cat primary and nonprimary auditory cortex. J. Neurophysiol., 106, 1166–1178.

Cousineau, M., McDermott, J.H. & Peretz, I. (2012) The basis of musical consonance as revealed by congenital amusia. Proc. Natl. Acad. Sci. USA, 109, 19858–19863.

Crespo-Bojorque, P. & Toro, J.M. (2014) The use of interval ratios in consonance perception by rats (Rattus norvegicus) and humans (Homo sapiens). J. Comp. Psychol., 129, 42–51.

Davidson, R.J. (1995) Cerebral asymmetry, emotion, and affective style. In Davidson, R.J. & Hugdahl, K. (Eds), Brain Asymmetry. MIT Press, Cambridge, MA, pp. 361–387.

Delis, D.C., Robertson, L.C. & Efron, R. (1986) Hemispheric specialization of memory for visual hierarchical stimuli. Neuropsychologia, 24, 205–214.

Demaree, H.A., Everhart, D.E., Youngstrom, E.A. & Harrison, D.W. (2005) Brain lateralization of emotional processing: historical roots and a future incorporating "dominance". Behav. Cogn. Neurosci. Rev., 4, 3–20.

Drake, C., Penel, A. & Bigand, E. (2000) Tapping in time with mechanically and expressively performed music. Music Percept., 18, 1–23.

Fishman, Y.I., Volkov, I.O., Noh, M.D., Garell, P.C., Bakken, H., Arezzo, J.C., Howard, M.A. & Steinschneider, M. (2001) Consonance and dissonance of musical chords: neural correlates in auditory cortex of monkeys and humans. J. Neurophysiol., 86, 2761–2788.

Gaser, C. & Schlaug, G. (2003) Brain structures differ between musicians and non-musicians. J. Neurosci., 23, 9240–9245.

George, E.M. & Coch, D. (2011) Music training and working memory: an ERP study. Neuropsychologia, 49, 1083–1094.

Giraud, A.L. & Price, C.J. (2001) The constraints functional neuroimaging places on classical models of auditory word processing. J. Cogn. Neurosci., 13, 754–765.

Hulse, S.H., Bernard, D.J. & Braaten, R.F. (1995) Auditory discrimination of chord-based spectral structures by European starlings (Sturnus vulgaris). J. Exp. Psychol. Gen., 124, 409–423.

Hutchinson, W. & Knopoff, L. (1978) The acoustic component of Western consonance. J. New Music Res., 7, 1–29.

Itoh, K., Suwazono, S. & Nakada, T. (2003) Cortical processing of musical consonance: an evoked potential study. NeuroReport, 14, 2303–2306.

Izumi, A. (2000) Japanese monkeys perceive sensory consonance of chords. J. Acoust. Soc. Am., 108, 3073–3078.

Johnson-Laird, P.N., Kang, O.E. & Leong, Y.C. (2012) On musical dissonance. Music Percept., 30, 19–35.

Kadia, S.C. & Wang, X. (2003) Spectral integration in A1 of awake primates: neurons with single- and multipeaked tuning characteristics. J. Neurophysiol., 89, 1603–1622.

Koelsch, S. (2005) Investigating emotion with music: neuroscientific approaches. Ann. N. Y. Acad. Sci., 1060, 412–418.


Kollmeier, B., Brand, T. & Meyer, B. (2008) Perception of speech and sound. In Benesty, J., Sondhi, M.M. & Huang, Y.A. (Eds), Springer Handbook of Speech Processing. Springer, Berlin Heidelberg, p. 65.

Kung, C.C., Hsieh, T.H., Liou, J.Y., Lin, K.J., Shaw, F.Z. & Liang, S.F. (2014) Musicians and non-musicians' different reliance of features in consonance perception: a behavioral and ERP study. Clin. Neurophysiol., 125, 971–978.

Kuriki, S., Kanda, S. & Hirata, Y. (2006) Effects of musical experience on different components of MEG responses elicited by sequential piano-tones and chords. J. Neurosci., 26, 4046–4053.

Meyers, C.K. (2012) Influences on music preference formation. PURE Insights, 1, Article 7.

Micheyl, C., Delhommeau, K., Perrot, X. & Oxenham, A.J. (2006) Influence of musical and psychoacoustical training on pitch discrimination. Hearing Res., 219, 36–47.

Minati, L., Rosazza, C., D'Incerti, L., Pietrocini, E., Valentini, L., Scaioli, V., Loveday, C. & Bruzzone, M.G. (2009) Functional MRI/event-related potential study of sensory consonance and dissonance in musicians and nonmusicians. NeuroReport, 20, 87–92.

Mirz, F., Ovesen, T., Ishizu, K., Johannsen, P., Madsen, S., Gjedde, A. & Pedersen, C.B. (1999) Stimulus-dependent central processing of auditory stimuli: a PET study. Scand. Audiol., 28, 161–169.

Moore, B.C. & Gockel, H.E. (2011) Resolvability of components in complex tones and implications for theories of pitch perception. Hearing Res., 276, 88–97.

Näätänen, R. (2008) Mismatch negativity (MMN) as an index of central auditory system plasticity. Int. J. Audiol., 47, S16–S20.

Näätänen, R. & Picton, T. (1987) The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology, 24, 375–425.

Näätänen, R., Sams, M., Alho, K., Paavilainen, P., Reinikainen, K. & Sokolov, E.N. (1988) Frequency and location specificity of the human vertex N1 wave. Electroen. Clin. Neuro., 69, 523–531.

Nikjeh, D.A., Lister, J.J. & Frisch, S.A. (2009) Preattentive cortical-evoked responses to pure tones, harmonic tones, and speech: influence of music training. Ear Hearing, 30, 432–446.

Oostenveld, R. & Praamstra, P. (2001) The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol., 112, 713–719.

Palmero-Soler, E., Dolan, K., Hadamschek, V. & Tass, P.A. (2007) swLORETA: a novel approach to robust source localization and synchronization tomography. Phys. Med. Biol., 52, 1783–1800.

Pantev, C., Oostenveld, R., Engelien, A., Ross, B., Roberts, L.E. & Hoke, M. (1998) Increased auditory cortical representation in musicians. Nature, 392, 811–814.

Park, H.J., Li, R.X., Kim, J., Kim, S.W., Moon, D.H., Kwon, M.H. & Kim, W.J. (2009) Neural correlates of winning and losing while watching soccer matches. Int. J. Neurosci., 119, 76–87.

Pascual-Marqui, R.D., Michel, C.M. & Lehmann, D. (1994) Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. Int. J. Psychophysiol., 18, 49–65.

Patel, S.H. & Azzam, P.N. (2005) Characterization of N200 and P300: selected studies of the event-related potential. Int. J. Med. Sci., 2, 147–154.

Perani, D., Saccuman, M.C., Scifo, P., Spada, D., Andreolli, G., Rovelli, R., Baldoli, C. & Koelsch, S. (2010) Functional specializations for music processing in the human newborn brain. Proc. Natl. Acad. Sci. USA, 107, 4758–4763.

Peretz, I., Blood, A.J., Penhune, V. & Zatorre, R. (2001) Cortical deafness to dissonance. Brain, 124, 928–940.

Pinel, P., Lalanne, C., Bourgeron, T., Fauchereau, F., Poupon, C., Artiges, E., Le Bihan, D., Dehaene-Lambertz, G. et al. (2015) Genetic and environmental influences on the visual word form and fusiform face areas. Cereb. Cortex, 25, 2478–2493.

Plomp, R. & Levelt, W.J.M. (1965) Tonal consonance and critical bandwidth. J. Acoust. Soc. Am., 38, 548–560.

Polich, J. (2007) Updating P300: an integrative theory of P3a and P3b. Clin. Neurophysiol., 118, 2128–2148.

Prete, G., Fabri, M., Foschi, N., Brancucci, A. & Tommasi, L. (2014) The "consonance effect" and the hemispheres: a study on a split-brain patient. Laterality, 20, 257–269.

Proverbio, A.M., Zani, A., Gazzaniga, M.S. & Mangun, G.R. (1995) VEP evidence of hemispheric asymmetries for spatial frequency processing in a split-brain patient. Proceedings of the Cognitive Neuroscience Society, Second Annual Meeting, San Francisco.

Proverbio, A.M., Zani, A. & Avella, C. (1997) Hemispheric asymmetries for spatial frequency discrimination in a selective attention task. Brain Cogn., 34, 311–320.

Proverbio, A.M., Minniti, A. & Zani, A. (1998) Electrophysiological evidence of a perceptual precedence of global vs. local visual information. Cogn. Brain Res., 6, 321–334.

Regnault, P., Bigand, E. & Besson, M. (2001) Different brain mechanisms mediate sensitivity to sensory consonance and harmonic context: evidence from auditory event-related brain potentials. J. Cogn. Neurosci., 13, 241–255.

Sackeim, H.A., Greenberg, M.S., Weiman, A.L., Gur, R.C., Hungerbuhler, J.P. & Geschwind, N. (1982) Hemispheric asymmetry in the expression of positive and negative emotions. Neurologic evidence. Arch. Neurol., 39, 210–218.

Schlaug, G., Jancke, L., Huang, Y. & Steinmetz, H. (1995) In vivo evidence of structural brain asymmetry in musicians. Science, 267, 699–701.

Schön, D. & Besson, M. (2005) Visually induced auditory expectancy in music reading: a behavioral and electrophysiological study. J. Cogn. Neurosci., 17, 694–705.

Schön, D., Regnault, P., Ystad, S. & Besson, M. (2005) Sensory consonance: an ERP study. Music Percept., 23, 105–117.

Sel, A. & Calvo-Merino, B. (2013) Neuroarchitecture of musical emotions. Rev. Neurol., 56, 289–297.

Seppänen, M., Pesonen, A.K. & Tervaniemi, M. (2012) Music training enhances the rapid plasticity of P3a/P3b event-related brain potentials for unattended and attended target sounds. Atten. Percept. Psycho., 74, 600–612.

Sergent, J. & Hellige, J. (1986) The role of input factors in visual field asymmetries. Brain Cogn., 5, 174–199.

Specht, K. & Reul, J. (2003) Functional segregation of the temporal lobes into highly differentiated subsystems for auditory perception: an auditory rapid event-related fMRI-task. NeuroImage, 20, 1944–1954.

Sugimoto, T., Kobayashi, H., Nobuyoshi, N., Kiriyama, Y., Takeshita, H., Nakamura, T. & Hashiya, K. (2010) Preference for consonant music over dissonant music by an infant chimpanzee. Primates, 51, 7–12.

Tervaniemi, M., Sannemann, C., Nöyränen, M., Salonen, J. & Pihko, E. (2011) Importance of the left auditory areas in chord discrimination in music experts as evidenced by MEG. Eur. J. Neurosci., 34, 517–523.

Toscano, J.C., McMurray, B., Dennhardt, J. & Luck, S.J. (2010) Continuous perception and graded categorization: electrophysiological evidence for a linear relationship between the acoustic signal and perceptual encoding of speech. Psychol. Sci., 21, 1532–1540.

Trainor, L.J., Desjardins, R.N. & Rockel, C. (1999) A comparison of contour and interval processing in musicians and nonmusicians using event-related potentials. Aust. J. Psychol., 51, 147–153.

Virtala, P., Huotilainen, M., Putkinen, V., Makkonen, T. & Tervaniemi, M. (2012) Musical training facilitates the neural discrimination of major versus minor chords in 13-year-old children. Psychophysiology, 49, 1125–1132.

Virtala, P., Huotilainen, M., Partanen, E., Fellman, V. & Tervaniemi, M. (2013) Newborn infants' auditory system is sensitive to Western music chord categories. Front. Psychol., 19, 644.

Virtala, P., Huotilainen, M., Partanen, E. & Tervaniemi, M. (2014) Musicianship facilitates the processing of Western music chords: an ERP and behavioral study. Neuropsychologia, 61, 247–258.

Vuust, P., Brattico, E., Seppänen, M., Näätänen, R. & Tervaniemi, M. (2012) Practiced musical style shapes auditory skills. Ann. N. Y. Acad. Sci., 1252, 139–146.

Weissman, D.H. & Woldorff, M.G. (2005) Hemispheric asymmetries for different components of global/local attention occur in distinct temporo-parietal loci. Cereb. Cortex, 15, 870–876.

Zanow, F. & Knosche, T.R. (2004) ASA-Advanced Source Analysis of continuous and event-related EEG/MEG signals. Brain Topogr., 16, 287–290.

Zatorre, R.J., Belin, P. & Penhune, V.B. (2002) Structure and function of auditory cortex: music and speech. Trends Cogn. Sci., 6, 37–46.

Zentner, M.R. & Kagan, J. (1998) Infants' perception of consonance and dissonance in music. Infant Behav. Dev., 21, 483–492.

Zentner, M., Grandjean, D. & Scherer, K.R. (2008) Emotions evoked by the sound of music: characterization, classification, and measurement. Emotion, 8, 494–521.


Zhang, F., Eliassen, J., Anderson, J., Scheifele, P. & Brown, D. (2009) The time course of the amplitude and latency in the auditory late response evoked by repeated tone bursts. J. Am. Acad. Audiol., 20, 239–250.

van Zuijen, T.L., Sussman, E., Winkler, I., Näätänen, R. & Tervaniemi, M. (2005) Auditory organization of sound sequences by a temporal or numerical regularity: a mismatch negativity study comparing musicians and non-musicians. Cogn. Brain Res., 23, 270–276.
