
The Daughters of Memory: Language, Emotion, and the Neuroscience of Music, Part 2

Christopher Collins
New York University

[The selection below is the second part of “The Daughters of Memory,” a chapter from a work-in-progress that sets out to explore 1) the transformations that narrative and lyric composition underwent when their medium shifted from public performance to private reading and 2) the neurocognitive implications of this shift for visual and auditory imagination. Since the selection below is Part 2, it should be read as a continuation of the argument introduced in Part 1, “Music, Language, and the News from Mt. Helicon.” I hope soon to upload the final part of this chapter/article, Part 3, which will examine the motor aspects of the Muses’ art, mousikê, from overt dance movements to covert (inner) speech, and conclude with some thoughts on the pleasure of musical experience.]


Music and the Emotional Brain.

When, in discussing the differences between music and language, I cited Tecumseh Fitch (Part 1, p. 19), I did not mention his final distinction: the fact that, whereas language is a means of referring to persons, objects, events, and ideas, music is non-referential communication. We are confronted therefore with a “design feature . . . easily the most difficult to pinpoint, and the topic of a vast, ancient and controversial literature: the question of meaning in music. On the one hand, . . . music is clearly not meaningful in the way language is (able to convey an unlimited number of propositional thoughts or ‘meanings’ with arbitrary specificity). On the other hand, music is not meaningless: music is expressive in some different, hard-to-define sense. It is often said that music ‘expresses the emotions’” (Fitch, 2006:180).

Before I explore the premise that musical meaning is non-referential and that therefore the emotions it evokes are sometimes difficult to specify, I want to observe that not all music fits that description. Like any other human artifact, a piece of music may refer to aspects of the culture in which it was produced and may do so by deploying indexical, iconic, and symbolic signs.1 Indexical signs are operative whenever the style of the piece, or the piece itself, is so closely associated with a particular social activity that the latter supplies its affective frame. For examples of social context, consider Elgar’s “Pomp and Circumstance” and Wagner’s “Bridal Chorus” from Lohengrin. No one wonders what range of affect each is intended to evoke. Insofar as they are associated, one with academic, the other with nuptial, rituals, they are not what some musicologists term “absolute music.” To assess their referentiality, try to imagine the effect on the audience of playing the graduation march at a wedding and the wedding march at a graduation. Music can also incorporate iconic signs, sounds that resemble, for example, birdsong, wind, lapping water, thunder, galloping horses, and explosions, imitative effects that characterize “program music.”

1. The semioticians of music that I have so far encountered have tended to pitch their theorizing at a fine-grained level of detail, often drawing on divergent theories of sign, e.g., those of Greimas, Saussure, Barthes, Chomsky, and Peirce. The Peircean analysis that I apply here, admittedly elemental, is, I think, adequate to my purpose.

While sonic indices and icons signify by associations, the later-evolved symbolic code of language signifies more directly and, as an intrinsic feature of song, is most undeniably referential. Unlike instrumental music in the Western concert hall tradition of the past three centuries, most world music, now and over time, has regularly included words. Songs have a right to be considered just as ancient and “musical” as instrumental compositions. For the ancient Greeks, mousikê was a perfectly natural blend of words, instrumental music, and dance.

Moreover, a song’s verbal content, not its musical structure, is most responsible for establishing its emotional meaning. Two folk ballads, for example, may use the same tune but tell quite different stories; a Baptist hymn, a romantic maritime ballad, and the miners’ union organizing song from Kentucky, “Which Side Are You On?,” are all set to the same melody. Then there was that wry English drinking song, “To Anacreon in Heaven,” that got itself reworded as a national anthem, the instrumental version of which can all by itself stir deep patriotic emotion. Even when its precise words are forgotten, a song’s verbal message may be subconsciously recalled. As Freud remarked, “Whoever takes the trouble . . . to note the melodies he finds himself humming, unintentionally and often without noticing he is doing so, will quite regularly be able to discover the connection between the text of the song and a subject [Thema] that is occupying his mind” (1904/1917:174), an idea that his student, Theodor Reik, was to elaborate in his study of spontaneous music-and-word recall, The Haunting Melody (1953).

The function of indexical, iconic, and symbolic signs, when musically conveyed, is to construct an imaginary outer setting within which an inner series of emotional events may be enacted. These events are indeed nonreferential, but, before venturing into the controversial topic of musical emotions, we need to acquaint ourselves with some of the terms of the debate. If we are trying to locate the ultimate meaning of music in “emotions,” we first need to agree upon what we mean by “emotions.” Do we mean those innate responses to stimulus-events that quickly mobilize the body to act in certain ways, e.g., to fight, flee, mate, or eat? Do we mean somewhat longer-lasting affects, e.g., happiness, sadness, anxiety, or resentment? Or do we mean less definable states of “mixed emotion,” including feelings and moods? And, if there are “mixed emotions,” does that mean that emotions, like primary colors, are the simple elements out of which all other affects are derived by a process of blending (Trost et al., 2012)?

However we define them, we are likely to agree that the emotions we feel when listening to music are not exactly the same as those we feel when encountering and interacting with objects and persons in our environment. Whereas we experience those latter affects involuntarily, music offers certain emotional cues and then invites us to choose to imagine we are feeling a particular emotion. Here another distinction is crucial, that between perceived and induced emotion, i.e., between an emotion that we recognize as being expressed by the composer through the performer and an emotion that we feel we are now experiencing within ourselves through the music. This is evidently an instance of alternate perspectives: when we perceive emotion in the music, we do so from our detached vantage point, but when we feel it within us, we do so by embodying the emotional patterns represented in the music and by taking the perspective of some imagined other, be that the composer, the performer, or some idealized subject (cf. MacWhinney, 1999).

After exposing a wide population of listeners to a wide range of musical genres and then polling them, researchers at the University of Geneva’s Emotion and Music Lab arrived at sixty-six music-induced affects. These they then reduced to nine categories, ranking them from the most to the least frequently reported, and published the results as the Geneva Emotional Music Scales (GEMS):

Wonder (Filled with wonder, amazed, allured, dazzled, admiring, moved)
Transcendence (Inspired, feeling of transcendence, feeling of spirituality, overwhelmed, thrills)
Tenderness (In love, sensual, affectionate, tender, mellowed)
Nostalgia (Nostalgic, melancholic, dreamy, sentimental)
Peacefulness (Calm, relaxed, serene, soothed, meditative)
Power (Energetic, triumphant, fiery, strong, heroic)
Joyful activation (Stimulated, joyful, animated, feel like dancing, amused)
Tension (Agitated, nervous, tense, impatient, irritated)
Sadness (Sad, sorrowful, blue)

(Zentner & Eerola, 2011: 206)
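
For readers who want a concrete sense of how a self-report scale of this kind is typically put to work, here is a minimal scoring sketch in Python. The factor labels and descriptors are taken from the GEMS list above; the 1-to-5 rating scale, the item-averaging rule, and all names in the code are my own illustrative assumptions, not the Geneva group’s published scoring procedure.

```python
# Minimal sketch: scoring a GEMS-style self-report.
# Factor labels and descriptors follow the list above (Zentner &
# Eerola, 2011); the 1-5 scale, the item-averaging rule, and all
# names are illustrative assumptions, not the published GEMS manual.

from statistics import mean

GEMS_FACTORS = {
    "Wonder": ["filled with wonder", "amazed", "allured", "dazzled", "admiring", "moved"],
    "Transcendence": ["inspired", "feeling of transcendence", "feeling of spirituality", "overwhelmed", "thrills"],
    "Tenderness": ["in love", "sensual", "affectionate", "tender", "mellowed"],
    "Nostalgia": ["nostalgic", "melancholic", "dreamy", "sentimental"],
    "Peacefulness": ["calm", "relaxed", "serene", "soothed", "meditative"],
    "Power": ["energetic", "triumphant", "fiery", "strong", "heroic"],
    "Joyful activation": ["stimulated", "joyful", "animated", "feel like dancing", "amused"],
    "Tension": ["agitated", "nervous", "tense", "impatient", "irritated"],
    "Sadness": ["sad", "sorrowful", "blue"],
}

def factor_scores(ratings):
    """Average a listener's 1-5 item ratings into one score per factor.

    `ratings` maps descriptor -> rating; factors with no rated items
    are simply omitted from the result.
    """
    scores = {}
    for factor, items in GEMS_FACTORS.items():
        rated = [ratings[item] for item in items if item in ratings]
        if rated:
            scores[factor] = mean(rated)
    return scores

# Example: a listener who felt calm and a little nostalgic.
print(factor_scores({"calm": 5, "relaxed": 4, "serene": 4, "nostalgic": 3}))
# -> {'Nostalgia': 3, 'Peacefulness': 4.33...}
```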

According to these researchers, affective states are normally experienced in terms of valence, ranging from extremely positive to extremely negative values. A positive affective state is produced by a circumstance that, seeming to favor the perceiver, evokes an impulse to approach the stimulus-object. A negative affect is produced by some circumstance that, seeming to thwart or endanger the perceiver, evokes an impulse to avoid, withdraw from, or counteract that stimulus-object. But music represents valences, negative and positive, without objective circumstances. “The perceptions of negative emotional characteristics do not readily translate into felt negative emotion because the listener in most music-listening contexts is safely removed from threats, dangers, or the possibility of losses” (Zentner et al., 2008:501). Listeners become “somewhat detached from everyday concerns. A clear expression of this detachment is that dreamy was among the most frequent emotive responses to music in the current studies. . . . As people move into a mental state in which self-interest and threats from the real world are no longer relevant, negative emotions lose their scope” (513). As the GEMS list indicates, the predominant function of music appears to be to evoke the sort of affects that, as Hesiod said, give listeners “forgetfulness of troubles and a respite from worries” (Theogony, 55).

The propositions that psychological phenomena are the doings of an embodied mind and that this embodiment, being the brain and the several nervous systems, has evolved over eons are now generally accepted. Most contemporary investigators therefore assume that what philosophers have termed “aesthetic emotions,” including most of the affects on the GEMS list, must be grounded somehow in everyday, basic emotions and that the limbic system must be where these reactions take place. The most often listed basic emotions are anger, fear, happiness, sadness, and disgust, but psychologists of music have opted for a modified list. Precisely how music gains access to our embodied minds has become a concern of neuroscientists over the past several decades, some of whom have continued to focus on those basic emotions.

In 2003, Patrik Juslin and Petri Laukka published a paper that closely aligned the acoustic properties of speech and music that prompt listeners’ emotional responses. The authors regarded five emotions, anger, fear, happiness, sadness, and tenderness (in place of disgust), as sufficiently different to be designated “discrete emotions.” These are the “basic emotions” that both music and speech can elicit; a toy sketch of how their shared cue profiles can be matched to a sound follows the list. (In the following list, the forward slashes separate vocal speech expression, to the left, from music performance, to the right; acoustic cues without slashes are shared phenomena; and F0 means fundamental frequency):

Anger. Fast speech rate/tempo, high voice intensity/sound level, much voice intensity/sound level variability, much high-frequency energy, high F0/pitch level, much F0/pitch variability, rising F0/pitch contour, fast voice onsets/tone attacks, and microstructural irregularity

Fear. Fast speech rate/tempo, low voice intensity/sound level (except in panic fear), much voice intensity/sound level variability, little high-frequency energy, high F0/pitch level, little F0/pitch variability, rising F0/pitch contour, and a lot of microstructural irregularity

Happiness. Fast speech rate/tempo, medium–high voice intensity/sound level, medium high-frequency energy, high F0/pitch level, much F0/pitch variability, rising F0/pitch contour, fast voice onsets/tone attacks, and very little microstructural regularity

Sadness. Slow speech rate/tempo, low voice intensity/sound level, little voice intensity/sound level variability, little high-frequency energy, low F0/pitch level, little F0/pitch variability, falling F0/pitch contour, slow voice onsets/tone attacks, and microstructural irregularity

Tenderness. Slow speech rate/tempo, low voice intensity/sound level, little voice intensity/sound level variability, little high-frequency energy, low F0/pitch level, little F0/pitch variability, falling F0/pitch contours, slow voice onsets/tone attacks, and microstructural regularity (Juslin & Laukka, 2003:802)
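
As promised above, here is a minimal sketch, in Python, of one way to read this shared code: treat each emotion’s cue profile as a small feature vector and ask which profile a given voice or performance most resembles. The coarse three-level coding, the overlap-counting rule, and all names in the code are my own illustrative simplifications of Juslin and Laukka’s table, not their method.

```python
# Toy sketch of a shared speech/music cue code, read in reverse:
# given coarse acoustic cues, find the best-matching discrete emotion.
# The profiles paraphrase Juslin & Laukka's (2003) summary table; the
# categorical coding and the matching rule are illustrative assumptions.

CUE_PROFILES = {
    "anger":      {"rate": "fast", "intensity": "high",   "pitch_level": "high", "pitch_contour": "rising"},
    "fear":       {"rate": "fast", "intensity": "low",    "pitch_level": "high", "pitch_contour": "rising"},
    "happiness":  {"rate": "fast", "intensity": "medium", "pitch_level": "high", "pitch_contour": "rising"},
    "sadness":    {"rate": "slow", "intensity": "low",    "pitch_level": "low",  "pitch_contour": "falling"},
    "tenderness": {"rate": "slow", "intensity": "low",    "pitch_level": "low",  "pitch_contour": "falling"},
}

def closest_emotion(cues):
    """Return the emotion whose profile shares the most cue values."""
    def overlap(profile):
        return sum(cues.get(k) == v for k, v in profile.items())
    return max(CUE_PROFILES, key=lambda e: overlap(CUE_PROFILES[e]))

# A slow, quiet, low-pitched, falling phrase reads as sad (or tender:
# the table itself separates those two mainly by microstructural
# regularity, which this coarse sketch omits).
print(closest_emotion({"rate": "slow", "intensity": "low",
                       "pitch_level": "low", "pitch_contour": "falling"}))
```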

Juslin (2010) went on to propose a framework for further research in the affective neuroscience of music that he has called BRECVEM, an acronym for seven factors, or mechanisms, activated by the hearing of music. They are: Brainstem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Contagion, Visual Imagery, Episodic Memory, and Musical Expectancy. The fact that these seven operate at various levels below the threshold of consciousness “leads to the intriguing scenario that you may know that what you hear is ‘just music,’ but the mechanisms [in the brain] that evoke your emotions do not . . . . [A]t least some mechanisms do not necessarily treat musical stimuli as different from other stimuli” (Juslin, 2013a:239). Juslin’s seven affect-inducing mechanisms operate as follows:

1. Brainstem reflex. This premammalian capacity to respond quickly to danger is activated by any relatively sudden, loud, accelerated, or dissonant sound. The least “musical” of his musical mechanisms, this alarm reflex has more in common with what I have characterized as the rhythmical irregularity of language (Brown & Weishaar, 2010).

2. Rhythmic entrainment. Juslin’s second mechanism is the opposite of the “brainstem reflex.” It is the special way humans can respond to a regularized beat—a repetitive pulse, rather than one that is single and sudden—by regularizing their own heartbeats and inducing in them a common motor periodicity. When this happens, they “lock in” to the rhythm, tap their feet, bob their heads, and may even move about in dance patterns. This mechanism creates that sense of oneness with others that Ian Cross (1999) believes is the primary social benefit of music.

3. Evaluative conditioning. This mechanism recognizes valence distinctions, i.e., positive and negative affects prompted by a given musical signal. Juslin regards this as conditioned learning, sound-cued associations linked to emotionally charged moments in one’s past, and compares this effect with that produced by Wagner’s recurrent leitmotifs.

4. Contagion. By this he means the empathetic process by which perceived emotion becomes felt emotion. Music induces this contagion, he proposes, by imitating certain vocal pitches and timbres iconically associated with certain emotions. This appears to be a mirror-neuron-like effect, our premotor cortex recognizing some emotions when vocally conveyed, and the limbic system automatically replicating them (Scherer & Zentner, 2001; Juslin & Laukka, 2003; Koelsch et al., 2006).

5. Visual imagery. Juslin suggests that music can prompt the listener to visualize spatial settings, landscapes, seascapes, etc. While this does not seem to me a very persuasive point, the auditory and visual cortexes do communicate with one another. Our sense of hearing can locate where a sound is coming from, its speed and direction of movement, and perhaps the size of its source, and can thus provide us with a rough-and-ready spatial map. Moreover, if what we hear requires an immediate reaction, our visual system rapidly forms a mental image of its source-object (Koelsch, 2013).

6. Episodic memory. Here a piece of music evokes an emotion associated with one’s autobiographical past and, with that emotion, a sense of nostalgia and its attendant moods. This would seem to overlap to some extent with Juslin’s “evaluative conditioning.” “Episodic memory” may occasionally be cued by music, but, like “visual imagery,” it seems more epiphenomenal than intrinsic to musical experience.

7. Musical expectancy. When a musical feature, e.g., a note, a melodic phrase, or an entire melody, either confirms, postpones, or violates a listener’s expectation, it produces an affective effect. This is indeed an essential factor in musical reception. Dependent on an audience’s foreknowledge of a piece of music, expectation may be produced by several features: 1) a familiar style with a familiar “syntax,” 2) a use of repetitive structures, rhythmic, melodic, and (if sung) verbal, and 3) its status as an artifact that is re-performed often enough that listeners are able to anticipate its progressive variations (Huron, 2006). Expectancy, when confirmed, gives listeners the illusion that they are participating, singing along with the music and sharing the emotions it represents (cf. “Contagion”; see also Cross, 2010). Expectancy, when not immediately confirmed, produces in us a suspenseful anticipation that teases and primes our reward circuitry, so that, when the withheld phrase is finally sounded, we experience an affective climax (Schultz, 1998; Salimpoor & Zatorre, 2013). Expectancy, when it is occasionally violated, produces a sense of surprise that converts the emotion from one that is felt to one that is perceived. But, if one becomes familiar with the piece, even a wholly unfulfilled expectancy becomes predictable. Even a loud dissonance that jars us and triggers a “brainstem reflex” effect can be pleasurably anticipated.
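
Musical expectancy lends itself to a statistical paraphrase: if listeners internalize the transition probabilities of a familiar style, a confirmed continuation is a high-probability event and a violation a low-probability one, and the felt “surprise” can be measured as negative log probability. The following toy model (my own illustration, not an implementation of Huron’s apparatus) trains a bigram model on a short melody and scores continuations by their surprisal.

```python
# Toy model of musical expectancy as statistical surprise: a bigram
# model of scale degrees, with surprisal = -log2 P(next | previous).
# The melody, the bigram choice, and the add-one smoothing are
# illustrative assumptions; published expectancy models are far richer.

import math
from collections import Counter, defaultdict

def train_bigrams(melody):
    """Count note-to-note transitions in a familiar melody."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, nxt, vocab=12, alpha=1.0):
    """-log2 of the smoothed probability of hearing `nxt` after `prev`."""
    c = counts[prev]
    p = (c[nxt] + alpha) / (sum(c.values()) + alpha * vocab)
    return -math.log2(p)

# Scale degrees of a repetitive tune: listeners learn that 1 -> 2 is common.
familiar = [1, 2, 3, 1, 1, 2, 3, 1, 3, 4, 5, 3, 4, 5]
model = train_bigrams(familiar)

print(surprisal(model, 1, 2))   # confirmed expectancy: low surprise (~2.4 bits)
print(surprisal(model, 1, 11))  # violated expectancy: high surprise (4.0 bits)
```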

“The BRECVEM framework . . . proceeds from the idea that many of the psychological mechanisms do not have access to, or take into consideration, information about whether the object of attention is ‘music’ or not—the mechanisms respond to certain information, wherever it occurs” (Juslin & Sloboda, 2013:613–614). This sounds a lot like Steven Pinker’s (1997/2009:525) definition of the arts, especially music, as technologies we use to “push our pleasure buttons.”2 Be that as it may, Juslin’s initial BRECVEM framework was generally well received by researchers determined to prove that music, like language, is a window into the human mind.

Affective responses to music can be quite strong, but if we define a basic emotion as a response to something in our environment that evokes a withdrawal or approach behavior, we find in music no such objective stimuli. Moreover, much of what we classify as music evokes in us much more nuanced affects (Zentner et al., 2008; Marin & Bhattacharya, 2010). It is certainly easier to test for responses that are strong and unambiguous, so the stronger and simpler the data, the more persuasive their interpretation. But this reminds one of the joke about the man searching for something under a street light who, when asked by a passerby what he was doing, explains he is looking for his keys. “So you think you dropped them here?” “Oh no,” the searcher answers. “I dropped them up the block. It’s just that it’s easier to see under this light.”

2. To my knowledge, Nicholas Cook (2014) was the first to notice this resemblance. Pinker’s provocative assessment of music as “auditory cheesecake” (1997/2009:534), rather than an evolutionary adaptation, has incited—and I use that word advisedly—much research into the cognitive and affective neuroscience of music in the new century. Fitch (2006:199–200) responded to Pinker’s claim by pointing out that the fossil evidence suggests that human anatomy could have produced musical sounds before the evolutionary split between Neanderthals and Homo sapiens sapiens; that bone flutes have been unearthed, estimated to be over 40,000 years old; and that, “unlike cheesecake,” music and dance are behaviors enjoyed in all human communities and are as universal as language. Moreover, since music making, being a loud activity, would have attracted predators and was energy-expensive to produce, it therefore had to have had offsetting benefits for our Paleolithic ancestors.

As brain-imaging technology has improved over the past twenty years and the illumination it casts has become broader and sharper, researchers have persevered in searching the brain for keys to the emotional meanings of music. As Zentner and his colleagues have shown, most people listen to music because, however we choose to phrase it, music pushes our pleasure buttons. How it does so—how it maneuvers this massively complex network of neurons into “feeling good”—is itself a massively complex problem. We know that music activates circuits in both hemispheres, some related to the serial processing centers in the left cortex, others exploiting the holistic networks on the right. Some are dedicated to tone resolution and regularized rhythm, while others partly overlap with those dedicated to speech production and perception.

When Anne Blood and Robert Zatorre published the results of a brain imaging experiment in 2001, their title served as a mini-abstract: “Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion.” Positron emission tomography (PET) was used to monitor changes in the limbic system in response to musical selections chosen by subjects who had in the past experienced euphoric “shivers-down-the-spine,” or “chills,” while listening to them. During the experiment, when listeners reported this emotional experience, certain brain regions were seen to become activated or deactivated. Most significantly, blood flow to the amygdala decreased, while flow to the close-by ventral striatum increased. This was seen possibly to

indicate gating between behaviorally antagonistic ‘approach’ and ‘withdrawal’ systems. The amygdala is known to be involved in fear and other aversive emotions, as well as evaluative processes associated with socially and biologically relevant emotions, whereas ventral striatum mediates evaluative processes associated with reward and motivation approach behavior. Thus, activation of the reward system by music may maximize pleasure, not only by activating the reward system but also by simultaneously decreasing activity in brain structures associated with negative emotions.

The authors also noted a deactivation of the hippocampus, an organ associated with the retrieval of stressful episodic memories (Blood & Zatorre, 2001:11822–11823).

“Gating” is an interesting concept. In electronic circuitry, from which it is borrowed, gating is governed by a transistor that turns an electrical signal on or off. One might think of this as a more refined and selective version of the toggle switch we use to turn a light on or off. As a brain function, gating involves the inhibiting of one brain area while at the same time exciting another area. The process Blood and Zatorre refer to is synaptic gating, the mechanism by which neurons, in this case populations of neurons, are either deactivated or activated, thereby selectively blocking or opening up the transmission of signals from one brain region to another. As they interpret their data, the regions of the amygdala associated with anxiety shut down proportionately as the ventral striatum, associated with pleasurable rewards, turns on. This is not an either/or process, a bistable flip-flop, like the toggling of an electrical switch, but is rather like a seesaw: as one rises, the other descends.
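
The seesaw image can be made concrete with a toy model of two mutually inhibitory units, one standing in for the ventral striatum (“reward”) and one for the amygdala (“aversion”), in which each unit’s activity subtracts from the other’s input drive. Everything below, the update rule, the rates, the constants, is an illustrative assumption of mine, not a model fitted to Blood and Zatorre’s PET data.

```python
# Toy seesaw: two mutually inhibiting neural populations. Pleasurable
# input excites the "reward" unit (ventral striatum stand-in), whose
# activity in turn suppresses the "aversion" unit (amygdala stand-in).
# All constants and the update rule are illustrative assumptions.

def clamp(x):
    """Keep activity in the unit interval."""
    return max(0.0, min(1.0, x))

def step(reward, aversion, music_pleasure, inhibition=0.8, rate=0.2):
    """One relaxation step: each unit drifts toward its net drive,
    and each unit's activity subtracts from the other's drive."""
    reward_drive = clamp(music_pleasure - inhibition * aversion)
    aversion_drive = clamp(0.5 - inhibition * reward)  # 0.5 = baseline worry
    reward += rate * (reward_drive - reward)
    aversion += rate * (aversion_drive - aversion)
    return reward, aversion

reward, aversion = 0.1, 0.5            # an anxious listener; the music starts
for _ in range(30):
    reward, aversion = step(reward, aversion, music_pleasure=0.9)

# Not a toggle but a graded exchange: reward rises as aversion falls.
print(round(reward, 2), round(aversion, 2))   # -> 0.9 0.0 (approximately)
```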

We observe this in other reciprocal processes. For example, when we move our arms and legs, we depend on the synergy of opposing pairs of muscles, the biceps and the triceps to control the flexions and extensions of the elbow, and the quadriceps and hamstrings to control the similar movements of the knee. Climbing, lifting, and walking would be impossible if the brain could not synchronize these complementary opposites, contracting one to the same degree that it relaxes the other. This action is mentally represented as a kinaesthetic dyad (see Part 1, p. 20). When, for example, we lift a weight, we are focally attentive to our biceps tensing and shortening and peripherally aware that, opposite to it, our triceps is relaxing and lengthening.

According to Hesiod’s prescientific formulation, the function of the Muses’ art, mousikê, is to adjust the relation of memory to forgetting (see Part 1, p. 8). Memory, Mnêmosunê, is the mother of the nine goddesses whose special gift to gods and humans is the bliss of temporary forgetfulness, lêsmosunê. If we are to solve this Hesiodic enigma with the help of modern cognitive science, we must first identify mnêmosunê with declarative memory, then distinguish within that faculty two systems: semantic memory, our store of culturally transmitted general knowledge, and episodic memory, our store of personal experiences. The Muses embody the former archive of knowledge, exemplified by traditional tales of heroes and sages. Under their musical spell, the episodic memory shuts down, and our autobiographical narrative with all its “troubles and worries” (kaka and mermêra) ceases to agitate us (Theogony, 55).

In neuropsychological terms, strongly pleasurable music induces a flow of dopamine that excites the ventral striatum, while the activities of the amygdala and hippocampus become inhibited (Blood & Zatorre, 2001; Koelsch et al., 2006, 2013). In Hesiodic terms, mnêmosunê, here corresponding specifically to episodic memory as a continual rehearsal of kaka mediated by the hippocampus, and mermêra, worry as a function of the amygdala, both succumb to lêsmosunê, forgetfulness. While these two structures of the limbic brain become relatively inactive, the ventral striatum, the primary pleasure center, awakens to what Hesiod would recognize as to terpnon, delight. “Thus the pleasure of music may be due both to positive engagement of brain areas related to reward and inhibition of areas mediating negative affective states” (Zald & Zatorre, 2011:411–412).

Though, twenty-seven centuries ago, he had no notion that the brain is where all this happens, Hesiod seems to have grasped the basic principle that a musical performance produces its effects through a simultaneous merging of two complementary opposites into a multimodal dyad.

REFERENCES

Blood, A. J., and R. J. Zatorre. 2001. "Intensely Pleasurable Responses to Music Correlate with Activity in Brain Regions Implicated in Reward and Emotion." Proceedings of the National Academy of Sciences of the United States of America 98 (20):11818–11823.

Brown, S. 2001. "The 'Musilanguage' Model of Music Evolution." In The Origins of Music, edited by N. L. Wallin, B. Merker, and S. Brown, 271–300. Cambridge, Mass.: MIT Press.

Cook, N. 2014. Beyond the Score: Music as Performance. New York: Oxford University Press.

Cross, I. 2010. "Listening as Covert Performance." Journal of the Royal Musical Association 135 (1):67–77.

Fitch, W. T. 2006. "The Biology and Evolution of Music: A Comparative Perspective." Cognition 100 (1):173–215.

Freud, S. 1904/1917. Zur Psychopathologie des Alltagslebens. Berlin: Karger.

Gottfried, J. A., ed. 2011. Neurobiology of Sensation and Reward. Boca Raton, FL: CRC Press.

Hermida, J., and M. Ferreo, eds. 2010. Music Education. New York: Nova Science.

Juslin, P. N. 2013a. "From Everyday Emotions to Aesthetic Emotions: Towards a Unified Theory of Musical Emotions." Physics of Life Reviews 10:235–266.

Juslin, P. N. 2013b. "What Does Music Express? Basic Emotions and Beyond." Frontiers in Psychology 4:1–14.

Juslin, P. N., and P. Laukka. 2003. "Communication of Emotions in Vocal Expression and Music Performance: Different Channels, Same Code?" Psychological Bulletin 129:770–814.

Juslin, P. N., and J. A. Sloboda, eds. 2001. Music and Emotion: Theory and Research. New York: Oxford University Press.

Juslin, P. N., and J. A. Sloboda, eds. 2010. Handbook of Music and Emotion: Theory, Research, Applications. New York: Oxford University Press.

Juslin, P. N., and J. A. Sloboda. 2010. "The Past, Present, and Future of Music and Emotion Research." In Handbook of Music and Emotion: Theory, Research, Applications, edited by P. N. Juslin and J. A. Sloboda, 933–955. New York: Oxford University Press.

Juslin, P. N., and J. A. Sloboda. 2013. "Music and Emotion." In The Psychology of Music, 3rd edition, edited by D. Deutsch, 583–645. Amsterdam: Academic Press.

Koelsch, S., T. Fritz, D. Y. von Cramon, K. Müller, and A. D. Friederici. 2006. "Investigating Emotion with Music: An fMRI Study." Human Brain Mapping 27 (3):239–250.

Koelsch, S., S. Skouras, T. Fritz, P. Herrera, C. Bonhage, M. B. Küssner, and A. M. Jacobs. 2013. "The Roles of Superficial Amygdala and Auditory Cortex in Music-Evoked Fear and Joy." NeuroImage 81:49–60.

MacWhinney, B. 1999. "The Emergence of Language from Embodiment." In The Emergence of Language, edited by B. MacWhinney, 213–256. Mahwah, NJ: Lawrence Erlbaum.

Marin, M. M., and J. Bhattacharya. 2010. "Music Induced Emotions: Some Current Issues and Cross-Modal Comparisons." In Music Education, edited by J. Hermida and M. Ferreo, 1–38. New York: Nova Science.

Pinker, S. 1997/2009. How the Mind Works. New York: W. W. Norton & Company.

Reik, T. 1953. The Haunting Melody: Psychoanalytic Experiences in Life and Music. New York: Farrar, Straus & Young.

Salimpoor, V. N., and R. J. Zatorre. 2013. "Neural Interactions That Give Rise to Musical Pleasure." Psychology of Aesthetics, Creativity, and the Arts 7 (1):62–75.

Scherer, K. R., and M. Zentner. 2001. "Emotional Effects of Music: Production Rules." In Music and Emotion: Theory and Research, edited by P. N. Juslin and J. A. Sloboda, 361–392. New York: Oxford University Press.

Schultz, W. 1998. "Predictive Reward Signal of Dopamine Neurons." Journal of Neurophysiology 80:1–27.

Trost, W., T. Ethofer, M. Zentner, and P. Vuilleumier. 2012. "Mapping Aesthetic Musical Emotions in the Brain." Cerebral Cortex 22 (12):2769–2783.

Zald, D. H., and R. J. Zatorre. 2011. "Music." In Neurobiology of Sensation and Reward, edited by J. A. Gottfried, 405–428. Boca Raton, FL: CRC Press.

Zentner, M., and T. Eerola. 2011. "Self-Report Measures and Models of Musical Emotions." In Handbook of Music and Emotion: Theory, Research, Applications, edited by P. N. Juslin and J. A. Sloboda, 187–223. New York: Oxford University Press.

Zentner, M., D. Grandjean, and K. R. Scherer. 2008. "Emotions Evoked by the Sound of Music: Characterization, Classification, and Measurement." Emotion 8:494–521.
