
Acta Psychologica 138 (2011) 74–84


Feeling-of-knowing for songs and instrumental music☆

Brian E. Rabinovitz, Zehra F. Peynircioğlu ⁎
American University, United States

☆ We wish to thank Lisa Korenman and Jennifer Thompson for their help in Experiment 1, and Ifigenia Flores, Uneeb Qureshi, David Baumgold, and Mitch Belkin for their help in recruiting and testing participants as well as sorting data in the remaining experiments.
⁎ Corresponding author at: American University, Department of Psychology, Washington, DC 20016-1023, United States. Tel.: +1 202 885 1713; fax: +1 202 885 1023. E-mail address: [email protected] (Z.F. Peynircioğlu).
0001-6918/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.actpsy.2011.05.008

Article info

Article history:
Received 4 June 2010
Received in revised form 11 May 2011
Accepted 12 May 2011
Available online 8 June 2011

PsycINFO codes:
2300
2340
2343

Keywords:
Feeling of knowing
Memory for songs
Metamemory and music
Familiarity and music
Memory for music

Abstract

We explored the differences between metamemory judgments for titles as well as for melodies of instrumental music and those for songs with lyrics. Participants were given melody or title cues and asked to provide the corresponding titles or melodies or feeling of knowing (FOK) ratings. FOK ratings were higher but less accurate for titles with melody cues than vice versa, but only in instrumental music, replicating previous findings. In a series of seven experiments, we ruled out style, instrumentation, and strategy differences as explanations for this asymmetry. A mediating role of lyrics between the title and the melody in songs was also ruled out. What emerged as the main explanation was the degree of familiarity with the musical pieces, which was manipulated either episodically or semantically, and within this context, lyrics appeared to serve as an additional source of familiarity. Results are discussed using the Interactive Theory of how FOK judgments are made.


© 2011 Elsevier B.V. All rights reserved.

A feeling-of-knowing (FOK) can arise even when an item cannot be recalled, and people can predict whether they will be able to recall that item in the future (e.g., Hart, 1965). Such predictions have often been found to be fairly accurate, and a multitude of studies have explored the conditions that influence the magnitude and accuracy of FOKs (e.g., Koriat, 1995).

Within the nonverbal domain, Peynircioğlu et al. (1998) explored FOK for melodies and their titles and found that participants gave higher FOK ratings for titles than for melodies, but the accuracy of these ratings was greater for melodies than for titles. Interestingly, this was true only when the materials comprised instrumental music. When the materials comprised songs, although the lyrics were not presented, participants gave higher FOK ratings for melodies, and the accuracy was only marginally greater. In both cases, the cues that elicited FOK for melodies were the titles and the cues that elicited FOK for titles were the melodies. The purpose of the present study was to systematically explore three possible bases for these differences in FOK ratings between songs and instrumental pieces.

One obvious difference between songs and instrumental pieces is the existence of lyrics in songs, thus providing an extra memory associate in one type of music but not the other. In addition, although melodies and titles may be learned separately, and sometimes the connection between them may be arbitrary, lyrics tend to be learned in an integrated fashion with the melody (e.g., Serafine et al., 1984), and, although not very memorable by themselves, lyrics are better cues than titles for eliciting melody recall (Peynircioğlu et al., 2008). Thus, the often observed inextricability of lyrics and melodies (cf. Rubin, 1977) could induce verbal mediation in songs between the melodies and titles, resulting in the observed FOK differences.

Another possibility is that songs may be simply more familiar to the participants in general than instrumental music. Differences in FOK ratings as a function of familiarity of materials have indeed been reported in the FOK literature, although the results have been somewhat mixed. Increased familiarity with the cue words, either episodically created via repetition or because of stronger preexisting semantic knowledge, has usually led to an increase in the magnitude of FOKs (e.g., Metcalfe et al., 1993; Otani and Hodge, 1991). Increased familiarity with the target words, on the other hand, has sometimes led to an increase in the magnitude of FOKs (e.g., Nelson et al., 1982) and sometimes, at least in the semantic domain, to a decrease (e.g., Peynircioğlu and Tekcan, 2000). Interestingly, in an episodic task with musical materials, even though accuracy was shown to be influenced, the magnitudes of FOKs were not influenced by the preexisting melodic familiarity of either the cues or the targets (Korenman and Peynircioğlu, 2004). Thus, the FOK differences observed between songs and instrumental music might be, at least to some extent, a function of differences in either already existing or episodically created familiarity with the materials.

Finally, in the Peynircioğlu et al. (1998) study, because FOK ratings for instrumental music and songs were examined in separate experiments, the results could also have been an artifact of testing only one type of music at a time. For any number of reasons, including the presence or absence of lyrics, differences in the familiarity of the materials, or even the setting up of expectations for future pieces through hearing music only of a certain type, participants could have adopted different strategies for songs than for instrumental pieces.

In the present study, the purpose of Experiment 1 was to rule out a simple explanation based on the adoption of different strategies as a function of type of material. We replicated the Peynircioğlu et al. (1998) study, but with both songs and instrumental pieces presented in the same list and in a mixed fashion. The purpose of the next six experiments was to explore the effects of variations in familiarity as well as the presence or absence of lyrics as explanations for the observed differences in FOK ratings.

In Experiment 2, we presented both types of music in both single melody line and complete orchestral formats to equalize the sonic complexity of the materials; the purpose was to minimize the differences in initial recognizability and the amount of available musical information, which could have influenced perceptual familiarity or otherwise affected memory processing. In addition, with songs, we tested whether having the title cue as a part of the unpresented lyrics made a difference, in order to look at verbal mediation. In Experiment 3, we presented only songs, some with lyrics actually present and some without, to look at the role of lyrics in more detail. In Experiments 4 and 5 we tested episodic memory. We created new pieces so that the identical novel melody could be assigned lyrics or not while familiarity was kept constant (Experiment 4), and familiarity could be manipulated through the repetition of the novel melodies (Experiment 5). Finally, in Experiments 6 and 7, we switched back to the semantic domain and varied both familiarity and the existence of lyrics in the same experiments. In Experiment 6, we had similar-genre TV themes equated in terms of already existing familiarity, some of which had lyrics and others not; within each group, we also had both high- and low-familiarity pieces. In Experiment 7, we tested participants who were more familiar with instrumental classical music as well as participants who were more familiar with current pop songs so that we could pit the familiarity explanation directly against the lyric mediation and type-of-music explanations while using exactly the same materials. A summary of the experiments is presented in Table 1.

Table 1
Summary of the experiments: a comparison of the types of participants, types of audio stimuli, and the main variable of interest with respect to metamemory judgments in each of the seven experiments.

Experiment | Type of participants | Nature of sound | Main variable
1 | College students | Single-line piano for songs; original recordings for instrumental pieces | Replicating differences between songs and instrumental pieces within the same experiment
2 | College students | MIDI-generated single-line and full scores for both songs and instrumental pieces | Replicating Experiment 1 while controlling for differences in the nature of the sound found in Experiment 1
3 | College students | MIDI-generated full scores of song melodies, half with lyrics sung by a female | Examining the effect of the presence of lyrics in songs
4 | College students | MIDI-generated full scores of original pieces, half with lyrics sung by a female | Comparison of songs and instrumental pieces in an episodic task
5 | College students | Same as Experiment 4 | Exploring the effect of familiarity within an episodic task
6 | College students | MIDI-generated full-score television themes | Examining differences between song and instrumental pieces when equated for familiarity
7 | College students more familiar with classical music and students more familiar with current popular music | MIDI-generated full-score versions of classical and popular pieces | Examining differences in familiarity by looking at two different populations with the same materials

Experiment 1

Method

Participants were 60 American University students who received extra credit in psychology classes.

A total of 240 excerpts of music were selected after piloting for medium familiarity; that is, we excluded all pieces that were immediately identified or were claimed to be novel. In the pilot test, participants from the same pool as those who participated in all of the present experiments heard several hundred excerpts and wrote down whether they knew what piece each was from; if they could not remember what piece it was from, they rated how familiar it sounded to them on a scale of 1–10, where 1 was “I am sure I have never heard this melody before” and 10 was “I am very familiar with this melody”. Any pieces whose names were written down by the majority of our pilot participants, as well as any pieces that received mean ratings between 9 and 10 or between 1 and 3, were excluded. The purpose was to avoid floor and ceiling effects for outright recall and to be able to look at FOK judgments. Half were songs that originally had lyrics, although lyrics were never presented to participants, and half were instrumental pieces. The songs were from a variety of genres, such as popular music, folk music, and television themes. They were played on a Korg X5 keyboard, using the same piano setting for each excerpt. The excerpts from instrumental pieces were from original recordings and were taken from classical music and film and television themes. All excerpts were an average of 10 s long (ranging from 2 to 20 s), recorded onto TDK audiotapes, and played on an AWIA NSX-V20 stereo system.
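As an illustration of this selection rule, here is a minimal sketch in Python (our example, not the authors' materials pipeline; the data structure and the threshold for "majority" are assumptions):

def keep_medium_familiarity(pieces):
    # Sketch of the pilot-based selection rule described above. The
    # `pieces` structure is hypothetical: each dict holds the fraction
    # of pilot listeners who named the piece ('named_by') and the 1-10
    # familiarity ratings from those who could not ('ratings').
    kept = []
    for piece in pieces:
        if piece['named_by'] > 0.5:            # majority identified it outright
            continue
        mean = sum(piece['ratings']) / len(piece['ratings'])
        if mean >= 9 or mean <= 3:             # near ceiling or near floor
            continue
        kept.append(piece)                     # medium familiarity: keep
    return kept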

Of the 120 songs as well as of the 120 instrumental pieces, 40 were designated as targets during the recall/judgment phase, and the remaining 80 were designated as lures in the recognition test. The lures for each target were chosen based upon their similarity so that the target would not be salient in comparison.

There were two mixed-presentation lists (Lists 1 and 2), comprising 40 items each (20 songs and 20 instrumental pieces). There were two versions of each list: titles of the excerpts written on a sheet of paper (title cue) or melodies played on an audiocassette (melody cue). The two lists were always presented in the same order, but for half of the participants, List 1 was used for the title cue condition (probing memory for melody) and List 2 was used for the melody cue condition (probing memory for title), and vice versa for the other half.

Participants were tested individually. In the melody cue condition, they were told that they would hear a series of melodies. After each melody was presented, participants were asked to write down its title. Spelling mistakes and omission or distortion of inconsequential words (such as “the” or “a”) were ignored. If they could not remember the title, they were asked to provide an FOK rating on a 5-point scale (1 corresponding to low and 5 to high FOKs). Thus, they were told that after hearing the melody of a piece, if they really knew the title but were blocking on it for the moment and would remember it for sure if given a cue or had to choose between alternatives, they should give a 5; if they were sure that they did not know what the title of the particular melody was, they should give a 1; and if they thought they might remember the title later on or when given cues or alternatives to choose from, they should give a 2, 3, or 4, depending on the strength of their confidence in their FOK. Similarly, in the title cue condition, they were told that they would see a series of printed titles. In response to each title, they were asked to hum the beginning melody of the piece. These responses were recorded both by a tape recorder to be analyzed later and by the experimenter as correct or incorrect, without feedback to the participants. The humming was scored as correct if the melody was unambiguously recognizable as belonging to the piece at hand; that is, off-key humming notwithstanding, if the contour, the specific intervals, and the rhythm were largely correct, participants were presumed to have recalled the melody. They also had to hum at least as much of the piece as we would present in the melody cue condition. This was the case for both songs and instrumental pieces, and in neither condition were there any indeterminate instances; humming was either unambiguously correct or unambiguously incorrect. If they could not remember the melody, they were asked to provide an FOK rating instead, regarding the strength of their subjective feeling for whether they really knew this melody or not. In both conditions, participants were given 20 s for each item to provide an answer or an FOK rating.

A 40-item recognition test followed the presentation of each list. After the melody cue list, the test involved presenting a melody cue again and three possible titles for each item. The participants' task was to circle the correct alternative. For example, if the participant heard the excerpt from “Misty”, then the recognition test offered the choices: A) Misty, B) Unforgettable, C) Happiness Is. After the title cue list, the test involved presenting a title again and playing three different excerpts represented by A, B, and C. Thus, for the title “Misty”, excerpt A would be from “Misty”, excerpt B would be from “Unforgettable”, and excerpt C would be from “Happiness Is”. In both conditions participants were allowed 5 s to make their decision.

Results

The results are summarized in Table 2. In all experiments we set significance at p<0.05 and non-significance at p>0.10, and our post-hoc pairwise comparisons included Bonferroni corrections. Thus, we report p-values only for “marginally significant” results and for nonparametric tests in Experiments 4 and 5. Otherwise, all results reported as showing effects should be assumed to have p-values of less than 0.05, and all results reported as showing no effects should be assumed to have p-values of greater than 0.10.

Table 2
Mean percentage recalled correctly, mean FOK ratings (1–5), and FOK accuracy (gamma correlations) in memory and metamemory performance for songs and instrumental pieces as a function of type of cue in Experiment 1 (SDs are in parentheses).

Type of cue | Measure | Song | Instrumental | Overall
Melody | Recall | 15 (13) | 13 (11) | 14
Melody | FOK ratings | 2.35 (0.72) | 2.75 (0.76) | 2.55
Melody | FOK accuracy | 0.39 (0.49) | 0.28 (0.43) | 0.34
Title | Recall | 20 (15) | 10 (13) | 15
Title | FOK ratings | 2.39 (0.81) | 2.16 (0.69) | 2.28
Title | FOK accuracy | 0.47 (0.41) | 0.50 (0.36) | 0.49
Overall | Recall | 18 | 12 |
Overall | FOK ratings | 2.37 | 2.46 |
Overall | FOK accuracy | 0.43 | 0.39 |

A repeated measures ANOVA showed an effect of music type on recall, F(1,59)=16.02, MSE=0.01, but no effect of cue type, F(1,59)=1.42, MSE=0.01, as measured by the proportion of items (titles or melodies) participants were able to remember outright and did not have to give an FOK rating for. However, there was also an interaction between cue type and music type, F(1,59)=6.47, MSE=0.01. The main effect of music type was largely due to the difference in melody recall; post-hoc comparisons revealed that with title cues, more song melodies were recalled than instrumental melodies, t(59)=4.17, but there was no difference between the two types of music with melody cues, t(59)=1.19. Also, whereas with songs participants recalled more when given a title cue compared to a melody cue, t(59)=2.85, the same was not true with instrumental pieces—in fact, although not reaching significance, the results were in the opposite direction.

In contrast to the recall results, music type had no effect on the magnitude of FOK ratings, varying between 1 and 5, F(1,59)=1.06, MSE=0.43, whereas cue type did, F(1,59)=16.56, MSE=0.27. There was again an interaction between cue type and music type, F(1,59)=21.43, MSE=6.11. Post-hoc comparisons showed that melody cues elicited higher FOKs for instrumental pieces than for songs, t(59)=6.27, whereas there was no difference with title cues, t(59)=0.50. In addition, and more importantly, melody cues elicited higher FOK ratings than title cues in the case of instrumental music, t(59)=3.58, whereas title cues elicited marginally higher FOK ratings than melody cues in the case of songs, t(59)=2.23.

Accuracy of the FOK ratings was measured using Goodman–Kruskal gamma scores, which reflect the correlation between the strength of the FOK ratings and the recognition performance. Correct eventual recognition of items preceded by high FOK ratings, as well as incorrect eventual recognition preceded by low FOK ratings, would lead to high gamma scores; incorrect eventual recognition preceded by high FOK ratings, as well as correct eventual recognition preceded by low FOK ratings, would lead to low gamma scores. Just as with FOK magnitudes, there was no effect of music type on the accuracy of FOK ratings, F(1,59)=0.66, MSE=0.15, but there was an effect of cue type, F(1,59)=10.11, MSE=0.14. All accuracy scores for Experiment 1 were above chance levels, all ts(59)>4.95. Participants were more accurate with title cues compared to melody cues, especially in the case of instrumental pieces, t(59)=3.32 (the same finding was not significant with songs, t(59)=0.98), although there was no interaction between cue type and music type, F(1,59)=1.23, MSE=0.22.
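For concreteness, a gamma score of this kind can be computed over the item pairs as in the following sketch (our illustration, not the authors' analysis code):

def goodman_kruskal_gamma(fok_ratings, recognized):
    # Gamma over all item pairs: (concordant - discordant) /
    # (concordant + discordant). `fok_ratings` holds 1-5 ratings for
    # the unrecalled items; `recognized` holds parallel 0/1 outcomes
    # from the criterion test. Tied pairs are ignored.
    concordant = discordant = 0
    n = len(fok_ratings)
    for i in range(n):
        for j in range(i + 1, n):
            product = (fok_ratings[i] - fok_ratings[j]) * (recognized[i] - recognized[j])
            if product > 0:
                concordant += 1    # higher FOK went with the better outcome
            elif product < 0:
                discordant += 1    # higher FOK went with the worse outcome
    if concordant + discordant == 0:
        return float('nan')        # undefined when every pair is tied
    return (concordant - discordant) / (concordant + discordant)

# e.g., goodman_kruskal_gamma([5, 4, 2, 1], [1, 1, 0, 0]) returns 1.0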

Interestingly, with instrumental pieces, even though the FOKs were stronger with melody cues than with title cues, they were less accurate. Because inaccuracies can arise either because participants give high FOK ratings but do not recognize the items or because they give low FOK ratings but do recognize the items (i.e., they can be overconfident or underconfident), calibration curves can help identify the source of the inaccuracies (cf. Lichtenstein et al., 1982). Calibration curves in Fig. 1 show that, compared to the other conditions, only melody cues with instrumental music resulted in lower recognition scores at higher FOK ratings, indicating that the familiarity of the melodies from instrumental pieces artificially raised participants' FOK ratings for their less familiar title counterparts. Because the participants were not instructed to treat the FOK ratings as an interval scale, in our figures we have omitted the diagonal line corresponding to perfect calibration; but the curves can still be compared to each other ordinally.

Fig. 1. Calibration curves for Experiment 1: proportion recognized as a function of FOK rating (1–5) for instrumental and song excerpts with title and melody cues.
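Each point on such a calibration curve is simply the proportion of criterion-test successes among the items given a particular FOK rating; a minimal sketch (again our illustration, with hypothetical inputs):

def calibration_points(fok_ratings, recognized):
    # One point per FOK level: the proportion of items at that level
    # that were later recognized. Levels with no observations are
    # omitted rather than plotted as zeros.
    points = {}
    for level in range(1, 6):
        outcomes = [r for f, r in zip(fok_ratings, recognized) if f == level]
        if outcomes:
            points[level] = sum(outcomes) / len(outcomes)
    return points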

Overall, the findings of Peynircioğlu et al. (1998) were replicated even when both types of music were presented in the same list and in a mixed fashion. Thus, it appeared that the previous metamemory differences observed between songs and instrumental pieces were not due to adopting differing strategies for differing types of music, at least not as an artifact of the testing procedure.

Experiment 2

The songs and instrumental pieces were substantially different in quality, both in terms of genre and instrumentation. The instrumental pieces, comprising full orchestral versions obtained from original recordings, contained more musical information in the form of explicit harmony and the varying timbres of multiple instruments when compared to the songs, which were presented as a single melody line using the piano setting on a musical synthesizer. Complexity of stimuli affects memory and metamemory processes for both verbal (e.g., Wang, 1990) and melodic (e.g., Peynircioğlu, 1995) materials. Thus, it could have been this additional musical information within the instrumental pieces that led to stronger FOKs for their titles than for their melodies, either through enabling easier recognition of familiarity (cf. Metcalfe et al., 1993) or through more chances for elaboration. To explore this possibility, in Experiment 2, all excerpts were generated by a computer and presented in both an orchestral and a single-line format so as to equalize the level of available sonic information.

We also looked at whether the presence of some of the unpresented lyrics in the title cue made a difference in the case of songs. It has already been shown that the presence of words from a title within a cue is the strongest predictor of remembering that title (Hyman and Rubin, 1990). Thus, if mediation by the unpresented lyrics were at least partly responsible for the previously observed differences, then we might predict greater effects if the title was indeed a subset of the unpresented lyrics.

Finally, we changed our criterion test used to gauge the accuracy of FOK ratings to a cued recall test. Participants were asked to continue the musical phrases or titles (cf. Gruneberg and Monks, 1974; Peynircioğlu et al., 1998). The purpose was to reduce noise by removing simple guessing and the possible use of a process-of-elimination strategy to get the correct answer.

Method

Participants were 64 American University students who received extra credit in psychology classes. None had participated in Experiment 1.

We used 96 excerpts of computer-generated MIDI music, which were an average of 10 s long (ranging from 1.8 to 25 s). Half comprised songs and half instrumental pieces; most were taken from the material sets used in Experiment 1. Of the 48 songs, 24 had the title of the song originally embedded in the section played (e.g., “Moon River”), and 24 did not (e.g., “Love Story”). The MIDI files were gathered from the Internet and were all public domain files. They were played using Anvil Studio software through the MIDI output of a SoundBlaster Live! card and recorded directly onto TDK audiotapes that were then played on an AWIA NSX-V20 stereo system.

There were two versions of all excerpts: a full and a single-line format version. The instrumentation of the full formats was made as similar as possible to the original recordings (e.g., an excerpt from a Mozart symphony had all the instrumental lines playing as one would hear at an orchestral performance). The single-line formats comprised only the main melody played on a piano setting.

The excerpts were randomly arranged into two lists of 48 (Lists 1 and 2), within the constraint that each list contained half songs and half instrumental pieces and that the genres of the pieces were evenly distributed in each list. Each list was subdivided into 4 blocks. Each block contained 12 pieces of only one type of music, and each song block was arranged such that the first six pieces were those where the title words were originally sung during that portion of the excerpt and the second six were those where they were not. The song and instrumental music blocks were presented in an alternating fashion. Within each list, one block of either type of music contained just single-line formats and the other block contained just full formats. Four groups of participants were given List 1 in the melody cue condition and List 2 in the title cue condition, with the order of the blocks and the pieces within each block counterbalanced among the four groups. Four other groups were given List 1 in the title cue condition and List 2 in the melody cue condition. Thus, across 8 groups of participants, each excerpt was tested equally often in the title and melody cue conditions as well as the full and single-line format conditions.

For the criterion test phase, a partial version of each melody excerpt, which was the same as the presentation version except approximately half as long, was prepared. Participants were given each excerpt and asked to hum the rest of the melody. Again, recall was scored as correct if the continuation was unambiguously of the specific piece at hand with respect to contour, intervals, and rhythm. Partial versions of the titles were prepared with individual variations to give just enough information to be helpful to one who already knew the answer, but not enough information to reveal the answer if one did not know it (e.g., “D _ _ _ _ _ _ _” for “Downtown”, or “Beethoven _____________” for Beethoven's 9th Symphony). Participants were given 10 s to complete each item. The remaining procedure was the same as in Experiment 1.

Results

The results are summarized in Table 3. A repeated measures ANOVA revealed an effect of music type on recall, F(1,63)=104.88, MSE=0.03. Participants recalled more melodies for songs than for instrumental pieces in the title cue condition, t(63)=9.63, as well as more titles for songs in the melody cue condition, t(63)=12.09; thus, the differences reached significance also in the melody cue condition.

Table 3
Mean percentage recalled correctly, mean FOK ratings (1–5), and FOK accuracy (gamma correlations or accuracy scores) in memory and metamemory performance for single-line (Line) and full instrumentation (Full) versions of songs and instrumental pieces as a function of type of cue in Experiment 2 (SDs are in parentheses).

Type of cue | Measure | Song: Line | Song: Full | Song: Mean | Instrumental: Line | Instrumental: Full | Instrumental: Mean
Melody | Recall | 26 (16) | 33 (20) | 29 | 9 (11) | 19 (31) | 11
Melody | FOK ratings | 2.66 (0.69) | 2.87 (0.81) | 2.75 | 3.04 (0.69) | 3.13 (0.78) | 3.08
Melody | FOK accuracy | 2.66 (3.74) | 2.34 (3.03) | 2.50 | 0.44 (4.12) | 0.52 (4.14) | 0.48
Title | Recall | 23 (16) | 21 (20) | 22 | 5 (8) | 5 (7) | 5
Title | FOK ratings | 2.62 (0.86) | 2.59 (0.91) | 2.55 | 2.17 (0.81) | 2.24 (0.67) | 2.22
Title | FOK accuracy | 3.83 (3.34) | 4.61 (3.55) | 4.22 | 4.83 (3.09) | 3.58 (3.04) | 4.21
Overall | Recall | 25 | 27 | | 7 | 12 |
Overall | FOK ratings | 2.64 | 2.73 | | 2.61 | 2.69 |
Overall | FOK accuracy | 3.25 | 3.48 | | 2.64 | 2.05 |

Perhaps because the instrumentation between the two types of music was equalized, the instrumental melodies no longer held a sonic advantage, which enabled the differences to emerge in full force. There was also an effect of cue type, F(1,63)=29.01, MSE=0.03. Participants recalled more titles when given a melody cue than melodies when given a title cue, for both songs, t(63)=4.08, and instrumental pieces, t(63)=6.22. There was no interaction between cue type and music type, F(1,63)=0.58, MSE=0.02. Recall was greater for songs regardless of cue type, suggesting that, not surprisingly, the association between title and melody was stronger for songs than for instrumental pieces. Recall was also greater with melody cues than with title cues, regardless of music type, suggesting that the memory associations between titles and melodies were asymmetrical, with a stronger link in the melody-to-title direction than in the title-to-melody direction (cf. Peynircioğlu et al., 2008).

Full instrumentation tended to lead to better recall, F(1,63)=5.44, MSE=0.03. There was also an interaction between cue type and instrumentation, F(1,63)=14.88, MSE=0.02, with full instrumentation leading to better performance with melody cues for both instrumental pieces and songs. There was no interaction between instrumentation and music type, F(1,63)=0.90, MSE=0.02, and there was, of course, no effect of instrumentation with title cues since no melodies were present.

With respect to the magnitude of FOK ratings, previous results were also generally replicated. There was no effect of music type, F(1,63)=0.41, MSE=0.48, but there was an effect of cue type, F(1,63)=70.43, MSE=0.49, as well as an interaction between cue type and music, F(1,63)=37.12, MSE=0.46. Melody cues elicited higher FOKs for instrumental pieces than for songs, t(63)=4.10, and title cues elicited higher FOKs for songs than for instrumental pieces, t(63)=3.67. In addition, melody cues also elicited higher FOK ratings than title cues in the case of instrumental music, t(63)=10.49, but, unlike in Experiment 1, they also elicited marginally higher FOK ratings than title cues in the case of songs, t(63)=2.15, p<0.10. When everything was in the same MIDI format, melody cues elicited higher ratings regardless of the type of music, although this difference was still greater with instrumental pieces.

More importantly, there was no main effect of instrumentation, F(1,63)=2.50, MSE=0.37, nor an interaction between instrumentation and cue type, F(1,63)=2.62, MSE=0.21, instrumentation and music, F(1,63)=0.02, MSE=0.26, or all three variables, F(1,63)=1.37, MSE=0.20. Full instrumentation was not a significant enough source of variation to influence FOK judgments.

With respect to the accuracy of FOK ratings, gamma correlations could not be used because a high percentage of participants had no correct answers for at least one of the four possible categories on the criterion test. Thus, the number of correct continuations with low FOK ratings (1 or 2) was subtracted from the number of correct continuations with high FOK ratings (4 or 5); similarly, the number of incorrect continuations with high FOK ratings was subtracted from the number of incorrect continuations with low FOK ratings. These two numbers were then added together to generate an accuracy score that took into account both correct and incorrect responses on the criterion test. The higher the number, the more accurate a participant was.
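In code, this alternative score might be computed as follows (a sketch of our reading of the measure; the function name and inputs are hypothetical):

def fok_accuracy_score(fok_ratings, correct):
    # Our reading of the alternative accuracy measure: high (4-5) and
    # low (1-2) FOK ratings are tallied separately for correct and
    # incorrect continuations, and the score rewards high FOKs before
    # correct answers and low FOKs before incorrect ones.
    c_high = sum(1 for f, c in zip(fok_ratings, correct) if c and f >= 4)
    c_low = sum(1 for f, c in zip(fok_ratings, correct) if c and f <= 2)
    i_high = sum(1 for f, c in zip(fok_ratings, correct) if not c and f >= 4)
    i_low = sum(1 for f, c in zip(fok_ratings, correct) if not c and f <= 2)
    return (c_high - c_low) + (i_low - i_high)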

There was a main effect of music type, F(1,63)=13.72, MSE=9.70, but not of instrumentation, F(1,63)=0.49, MSE=8.02. Accuracy was higher for songs than instrumental pieces, t(63)=3.99, but not for full instrumentation over single-line versions. There was also an effect of cue type, F(1,63)=98.31, MSE=9.65. Accuracy was higher with title cues compared to melody cues for both songs, t(63)=5.61, and instrumental pieces, t(63)=6.39, although there was no difference between single-line and full instrumentation in any of these comparisons. There was also an interaction between cue type and music, F(1,63)=11.33, MSE=11.39. Participants were more accurate in their predictions of future song recall than future instrumental piece recall with melody cues, t(63)=3.98; in fact, predictions with melody cues with instrumental pieces were the only ones not above chance level (t(63)=0.99) while all of the others were (all ts(63)>7.12). As expected, with a title cue there was no difference, t(63)=0.91. There were no interactions between music type and instrumentation, F(1,63)=2.23, MSE=9.66, or between cue type and instrumentation, F(1,63)=0.06, MSE=7.05.

Thus, the sonic qualities of music did matter in memory, with full instrumentation aiding in the recall of instrumental pieces to a greater degree compared to songs. In metamemory, however, these qualities appeared to play at most a minor role, and the main findings of Experiment 1 were replicated. The only notable difference was that when songs were also presented in full instrumentation, participants were more accurate with them than with the instrumental pieces.

A secondary purpose was to see if, in the case of songs, stronger effects would emerge if titles contained some of the lyrics from the presented excerpts, especially in the title recall condition. The presence of these “virtual titles” indeed had an effect on recall, F(1,63)=66.56, MSE=0.01, but not on either the magnitude or the accuracy of the FOK judgments, F(1,63)=1.86, MSE=0.33, and F(1,63)=0.28, MSE=5.04, respectively, thus posing a problem for the verbal mediation explanation. That is, if it were the lyrics that helped to promote the FOKs for the titles of songs, for instance by providing semantic information, then one would have predicted that their actual presence in the titles would have made them even stronger as mediating cues and led to higher FOK ratings or better accuracy; but that was not found.

Experiment 3

Experiment 3 focused on the verbal mediation explanation. All of the materials were songs, half of which were heard without their lyrics, as in the first two experiments, and half of which were heard along with their lyrics. In light of the results from Experiment 2, none of the lyrics of the target excerpts contained the title. The question was whether there would emerge any differences between the lyrics-present and lyrics-absent conditions which might be reminiscent of the differences between songs without lyrics present and instrumental pieces that had emerged in the previous experiments.

Method

Participants were 40 American University students who received extra credit in psychology classes. None had participated in Experiment 1 or 2.

A total of 32 songs were selected from the pool of 120 songs in Experiment 1. They were randomly assigned to either a “sung” (lyrics present) or MIDI-only (lyrics absent) condition. MIDI-only songs were identical to the full MIDI arrangements in Experiment 2, whereas “sung” songs were combinations of the full MIDI arrangements and a female voice singing the lyrics. This list was divided in half, with the first 16 songs assigned to the melody cue condition and the latter 16 to the title cue condition. These assignments were reversed for a second group of participants; songs assigned to the “sung” and MIDI-only conditions were also switched across two further subgroups. The method was otherwise identical to that of Experiment 2.

Results

The results are summarized in Table 4. A repeated measures ANOVA showed no effect of lyric presence on recall, F(1,39)=0.00, MSE=0.01, for either the melody or the title cue condition, ts(39)=0.46 and 1.51, respectively. Although this was not surprising in the title cue condition because no music was presented in the recall phase, the result in the melody cue condition suggested that the combination of lyrics and melody did not aid participants' recall any more than the melody itself (cf. Peynircioğlu et al., 2008). Mirroring the results of the previous two experiments, participants recalled more titles with a melody cue than they did melodies with a title cue, F(1,39)=14.43, MSE=0.03. There was, however, no interaction between cue type and lyric presence, F(1,39)=2.19, MSE=0.01. Thus, the actual presence of lyrics did not seem to substantially enhance memory for the title.

The magnitude of the FOK ratings was also not influenced by the presence of lyrics, F(1,39)=0.69, MSE=0.48, with neither melody nor title cues, ts(39)=0.03 and 1.08, respectively. Again, this was not surprising in the title cue condition, but even in the melody cue condition the physical introduction of lyrics had no effect on FOK magnitudes. Also, although there was the expected effect of cue type, F(1,39)=4.25, MSE=0.59, there was no interaction between cue type and lyric presence, F(1,39)=0.22, MSE=0.40.

The accuracy of the FOK ratings was measured by the alternative accuracy scores used in Experiment 2 and was above chance levels in all cases, all ts(39)>5.13. Again, as expected, accuracy was greater in the title than in the melody cue condition, F(1,39)=6.76, MSE=6.06. Mirroring the recall and FOK magnitude results, there was no effect of lyric presence, F(1,39)=1.24, MSE=4.26, in either the melody or the title cue condition, ts(39)=0.45 and 1.02, respectively. There was also no interaction between cue type and lyric presence, F(1,39)=0.28, MSE=6.50.

Table 4
Mean percentage recalled correctly, mean FOK ratings (1–5), and accuracy scores in memory and metamemory performance for songs with (SL) and without (SN) lyrics presented as a function of type of cue in Experiment 3 (SDs are in parentheses).

Type of cue | Measure | SL | SN | Mean
Title | Recall | 11 (13) | 14 (17) | 12
Title | FOK ratings | 2.27 (0.74) | 2.40 (0.80) | 2.34
Title | FOK accuracy | 3.53 (2.59) | 2.95 (2.26) | 3.24
Melody | Recall | 23 (18) | 21 (16) | 22
Melody | FOK ratings | 2.56 (0.78) | 2.61 (0.75) | 2.59
Melody | FOK accuracy | 2.30 (2.75) | 2.15 (2.65) | 2.23
Overall | Recall | 17 | 18 |
Overall | FOK ratings | 2.42 | 2.51 |
Overall | FOK accuracy | 2.92 | 2.55 |

One possible inference is that lyrics do not play much of a mediating role in the observed differences between songs and instrumental melodies when one needs to access titles from melodies or vice versa. Another possibility is that lyrics are so deeply embedded within memories for songs that their actual presence or absence is immaterial. For similar reasons, they may not increase the familiarity of their melodies, either (cf. Peynircioğlu et al., 2008), and thus may not play a role even if the observed differences under exploration are due to differences in familiarity and not verbal mediation per se.

Experiment 4

In Experiments 1 and 2, the preexisting semantic familiarity of the pieces was an unavoidable confound, and some participants could have been more familiar with songs than with instrumental pieces or vice versa. In Experiment 4, we used an episodic task and novel materials to equalize this familiarity. That is, we presented excerpts that none of the participants had heard before so that no one excerpt could be more familiar than another; and to be able to gauge memory and metamemory for these excerpts that did not have any semantic representations, we presented them to the participants before the test, thus making the task one of remembering based simply on the recent presentation episodes of these items (e.g., Tulving, 1972, 1983). Another possible confound in the previous experiments was the possibility of differences in the nature of the two types of music. For instance, perhaps the melody lines in songs were more salient and other aspects less complex, or perhaps songs relied on the messages within the lyrics to keep the listener engaged, whereas greater variation in melodic lines was needed in instrumental pieces. By using novel materials and presenting the same melodies in song or instrumental versions, we also controlled for such possible “genre” effects. Another aspect of the genre differences was the salience of titles in the two types of music. Often, the titles of songs tend to be more informative and “catchier” than those of instrumental pieces (e.g., “Yellow Submarine” versus “Symphony No. 40”), and it has been shown that more elaborate and concrete musical titles do aid in memory for music (e.g., Hiraoka and Umemoto, 1981). In the present study, the titles were kept the same in the two conditions.

Method

Participants were 64 American University students who received extra credit in psychology classes. None had participated in the previous experiments.

A total of 32 excerpts of original pieces were created. The melodies were designed to be memorable but not vary drastically in mood and style from each other in order to avoid ceiling effects, and their lengths (mean of 8 s with a range from 6 to 14 s) were similar to those of the excerpts used in the previous experiments. Unique lyrics and titles were subsequently generated for each piece. Lyrics were designed to be similar in nature to those of songs participants had heard before, both topically and lexically. In addition, the title was never present within the lyrics, nor was there an explicit relationship between the title and the lyrics, so that the lyrics would not immediately cue the title by virtue of subject matter alone. In fact, all titles were common animal names. For example, for the song entitled “Mosquito” the lyrics were “I had hoped for so much more, but it seems that was not to be.” All songs were transformed into instrumental pieces simply by removing the lyrics but keeping the same titles. Two lists of 16 items were presented in the same order to all participants; however, for half of the participants List 1 served in the melody cue condition and List 2 in the title cue condition, and vice versa for the other half.


The procedure was similar to those of the previous experiments except that a study phase preceded the recall/judgment phase, and a tape recording of the 16 pieces was played, with a spoken title preceding each piece and approximately 5 s of silence in between pieces. Participants were told to pay attention to both the piece and the title because a memory test would follow and they would be asked to remember both elements. There were four types of study/recall-test manipulations: study instrumental versions/test instrumental versions (I/I group), study song versions/test song versions (S/S group), study instrumental versions/test song versions (I/S group), and study song versions/test instrumental versions (S/I group).

A 4-item forced-choice recognition test served as the criterion test because of the episodic nature of this experiment and the increased likelihood of floor effects in a cued-recall test. In the melody cue condition, the alternative titles on the recognition test were selected from other pieces in the same list. This was done because any titles not from the previously studied list would have been immediately apparent to the participants, and they would have simply selected the familiar title without trying to remember whether it was the title for the melody in question. Similarly, in the title cue condition, the alternative melodies on the forced-choice recognition test were taken from the other melodies within the tested list.

Results

The results are summarized in Table 5. The melody cue and title cue conditions were examined separately because of the lack of variance in the title cue responses. For both the melody cue and the title cue a Kruskal–Wallis one-way analysis of variance was conducted. For the melody cue the group manipulation (that is, the type of material that was learned and tested) had a marginal effect on recall, χ2(3, N=64)=7.34, p=0.06, whereas for the title cue the group manipulation had no effect, χ2(3, N=64)=2.03. The latter finding was not surprising because of a floor effect in the title cue condition. Consistent with the encoding specificity principle (Tulving and Thomson, 1973), a Kruskal–Wallis test comparing recall in the congruent and incongruent conditions (S/S and I/I vs. I/S and S/I) showed a difference in the melody cue condition, χ2(1, N=63)=5.38; indeed, Mann–Whitney tests revealed that the difference between the S/S group and the S/I group as well as that between the S/S group and the I/S group were significant. When the conditions at test matched those at encoding with respect to the presence or absence of lyrics, performance was enhanced.
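As an illustration of this style of analysis, here is a minimal sketch using SciPy (our example; the group data are placeholders, not the study's data):

from scipy import stats

# Hypothetical per-participant recall proportions for the four
# study/test groups in the melody cue condition (placeholder values).
i_i = [0.19, 0.12, 0.25, 0.12]   # study instrumental, test instrumental
s_s = [0.31, 0.19, 0.25, 0.19]   # study song, test song
i_s = [0.12, 0.06, 0.19, 0.12]   # study instrumental, test song
s_i = [0.06, 0.12, 0.06, 0.12]   # study song, test instrumental

# Kruskal-Wallis one-way analysis of variance across the four groups.
h_stat, p_value = stats.kruskal(i_i, s_s, i_s, s_i)

# Follow-up Mann-Whitney comparison between two groups of interest.
u_stat, p_pair = stats.mannwhitneyu(s_s, s_i, alternative='two-sided')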

Table 5
Mean percentage recalled correctly, mean FOK ratings (1–5), and accuracy scores in memory and metamemory performance for instrumental pieces (I) and songs (S) as a function of type of cue and type of study and test combination in Experiment 4 (SDs are in parentheses).

Type of cue | Measure | I/I | S/S | I/S | S/I | Mean
Melody | Recall | 18 (14) | 23 (16) | 13 (10) | 10 (10) | 16
Melody | FOK ratings | 2.81 (0.69) | 2.84 (0.53) | 2.61 (0.64) | 2.50 (0.59) | 2.69
Melody | FOK accuracy | 0.16 (0.59) | −0.01 (0.42) | 0.06 (0.52) | 0.30 (0.35) | 0.13
Title | Recall | 0 (1) | 0 (1) | 0 (0) | 0 (0) | 0
Title | FOK ratings | 2.94 (0.49) | 2.56 (0.58) | 2.75 (0.49) | 2.59 (0.53) | 2.71
Title | FOK accuracy | 0.32 (0.33) | 0.30 (0.46) | 0.17 (0.44) | 0.26 (0.30) | 0.26
Overall | Recall | 9 | 12 | 7 | 5 |
Overall | FOK ratings | 2.88 | 2.70 | 2.68 | 2.55 |
Overall | FOK accuracy | 0.24 | 0.15 | 0.12 | 0.28 |

More importantly, performance in the S/S and I/I groups did not differ; thus, just-learned songs with lyrics were not better remembered than their instrumental counterparts when melodies were equated in terms of genre and familiarity (at least when familiarity was quite low, as was the case in this experiment). However, it is worthy of note that, unlike in the S/S group, performance in the I/I group also did not differ from that in the S/I or I/S groups, suggesting that there was something different about learning these pieces as instrumental pieces compared to learning them as songs. Perhaps when learned as songs, aspects of the lyrics and the vocal timbre were also encoded, and their absence in the recognition phase hurt recall of the associated title. When learned as instrumental pieces, on the other hand, the additional information in the form of lyrics at test could easily be ignored. Thus, in episodic memory, the presence of lyrics appeared to serve as relevant information during encoding but not at recall. Given that titles were never varied between study and test phases, similar effects were not expected in the title cue condition, floor effects notwithstanding.

With regards to metamemory, study/test manipulations had no effect on magnitude in either the melody or the title cue conditions, χ2(3, N=64)=3.07, and χ2(3, N=64)=3.54, respectively. It appeared that participants felt equally confident about predicting future recognition of songs and instrumental pieces, and the previous findings of higher FOK ratings with melody cues for instrumental pieces than for songs were not replicated. Thus, whereas the lyrics appeared to provide additional information for recall, when recall failed there was no measurable degree of additional information provided by the lyrics for FOK ratings. This may have been because of the episodic nature of the task, in that the lyrics and melodies had not had a chance to become enmeshed yet, or because of the equated salience of the titles between the two types of music (as well as the instructions to pay attention to these titles during the study phase). Finally, the accuracy of the FOK ratings, as measured by gamma scores, was also unaffected by the study/test manipulations in either the melody cue or title cue conditions, χ2(3, N=62)=4.01, and χ2(3, N=61)=2.00, respectively. However, it should be noted that Mann–Whitney tests revealed that only the S/I group's scores were above chance with both melody and title cues; thus, the lack of a difference in accuracy may have simply reflected a failure to predict future performance under these conditions in general.

Experiment 5

Although kept constant, overall familiarity in Experiment 4 was quite low. In this experiment, we used multiple repetitions for half of the items during study to increase their familiarity. Also, because the purpose of this experiment was to address familiarity in particular, and because the existence of lyrics had had no effect on FOK ratings in Experiment 4, we used only the melodies.

Method

Participants were 20 American University students who received extra credit in psychology classes. None had participated in any of the previous experiments.

The method was identical to that of Experiment 4 except that only instrumental excerpts were used, and, for each list of 16 pieces, half were randomly assigned to the familiar condition and were presented 5 separate times during the study phase, whereas those in the unfamiliar condition were presented only once. Repetitions for each item in the familiar condition were arranged such that a second presentation immediately followed the initial presentation, and then three more presentations were randomly distributed across the remainder of the list amongst the presentations of the once-presented items as well as other repeated items.
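To make this schedule concrete, the following is a hypothetical reconstruction (our sketch; the function and its details are assumptions, not the authors' procedure code) of how such a study order could be generated:

import random

def build_study_order(familiar, unfamiliar, reps=5):
    # Familiar items are presented `reps` times: the second
    # presentation immediately follows the first, and the remaining
    # presentations are randomly distributed across the rest of the
    # list; unfamiliar items are presented once.
    base = list(familiar) + list(unfamiliar)
    random.shuffle(base)
    order = []
    for item in base:
        order.append(item)
        if item in familiar:
            order.append(item)              # immediate second presentation
    for item in familiar:
        for _ in range(reps - 2):           # remaining presentations
            start = order.index(item) + 2   # insert only after the initial pair
            order.insert(random.randint(start, len(order)), item)
    return order

# e.g., build_study_order(["A", "B"], ["C", "D", "E"]) yields a 13-item order.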

Results

The results are summarized in Table 6. Because the level of variance in the title cue condition was similar to that in Experiment 4, the two cue types were examined separately, and analyses were done with Friedman's tests. Participants recalled more of the familiar pieces compared to unfamiliar ones in the melody cue condition, χ2(1)=11.84, p=0.001, but not in the title cue condition, χ2(1)=2.00, p=0.16, the latter finding likely reflecting a floor effect with title cues. It appeared that the generation of a novel melody was a very difficult task, and the degree of familiarity induced through repetition was not enough to offset this difficulty.

Table 6
Mean percentage recalled correctly, mean FOK ratings (1–5), and accuracy scores in memory and metamemory performance for familiar and unfamiliar instrumental pieces as a function of type of cue in Experiment 5 (SDs are in parentheses).

Type of cue | Measure | Familiar | Unfamiliar | Mean
Melody | Recall | 63 (23) | 55 (13) | 59
Melody | FOK ratings | 3.87 (1.00) | 2.47 (0.82) | 3.17
Melody | FOK accuracy | 0.68 (1.63) | −1.16 (2.24) | −0.24
Title | Recall | 21 (3) | 0 (0) | 11
Title | FOK ratings | 3.72 (0.89) | 2.38 (0.81) | 3.05
Title | FOK accuracy | 2.58 (2.86) | 1.89 (2.92) | 2.24
Overall | Recall | 42 | 28 |
Overall | FOK ratings | 3.80 | 4.85 |
Overall | FOK accuracy | 1.63 | 0.37 |

More interestingly, however, there was an effect of familiarity on FOK magnitude, with participants providing higher mean FOK ratings for familiar pieces in both the melody, χ2(1)=16.20, p<0.001, and title cue conditions, χ2(1)=20.00, p<0.001. Greater familiarity with a piece led to higher FOK ratings in this episodic task, even when recall itself was very low (in the case of title cues). Greater familiarity also led to better accuracy in both the melody, χ2(1)=9.00, p<0.01, and title cue conditions, χ2(1)=4.00, p<0.05. In addition, calibration curves (Fig. 2) revealed that participants were less confident with melody cues, but only for the lower FOK ratings. A similar pattern did not occur with title cues, in that calibration was not markedly worse for the lower FOK ratings. Thus, it appeared that when participants felt the target was unknown, melody cues provided participants with more information even if they were unaware of that fact. As shown by Wilcoxon signed rank tests, accuracy was at chance level with unfamiliar melody cues, z=−1.55, p>0.10, but above chance level in all other cases, zs>2.05, ps<0.05.

Fig. 2. Calibration curves for Experiment 5: proportion recognized as a function of FOK rating (1–5) for familiar and unfamiliar melody and title cues.

Experiment 6

In Experiment 6, we turned our focus back to the semantic domain and tested the effects of both lyric presence and familiarity within the same experiment. We equated the familiarity between songs and instrumental pieces from the same genre of music (TV themes), and also had equal numbers of highly familiar and less familiar items in both cases.

Method

Participants were 20 American University students who received extra credit in psychology classes. None had participated in any of the previous experiments.

The method was identical to that of Experiment 1 except for the materials. A group of 120 television show themes, 60 of which were originally songs and 60 instrumental pieces, were piloted in order to determine levels of familiarity. A total of 24 familiar and 24 unfamiliar pieces were selected for each music type, thus generating 48 song themes and 48 instrumental themes. List 1 comprised a randomly selected half (12) of each of the familiar song themes, unfamiliar song themes, familiar instrumental themes, and unfamiliar instrumental themes, all presented in a mixed fashion. List 2 comprised the remaining halves.

Results

The results are summarized in Table 7. A 2×2 repeated measures ANOVA showed that melody cues led to better recall, F(1,19)=45.56, MSE=3.03, and songs were remembered better than instrumental pieces, F(1,19)=13.53, MSE=3.66. There was no cue type by music type interaction, F(1,19)=2.76, MSE=5.53.

Although melodies were harder to recall/reproduce than titles, in metamemory judgments, when equated for familiarity, cue type had no effect on FOK magnitude, F(1,19)=2.08, MSE=0.34. There was an effect of music type, however, F(1,19)=5.16, MSE=0.08; thus, despite the equated familiarity, those pieces that originally had lyrics evoked more intense FOKs. There was no cue type by music type interaction, F(1,19)=0.28, MSE=0.11.

Title cues led to higher accuracy, F(1,19)=15.45, MSE=26.21, mirroring the results found in previous research (cf. Peynircioğlu et al., 1998) as well as in Experiment 1. The key point was that this result held even when the familiarity of the two types of music was controlled for. There was also a trend towards higher accuracy with instrumental pieces, F(1,19)=4.02, MSE=14.38, p=0.06. There was no cue type by music type interaction, F(1,19)=0.00, MSE=14.05.
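A 2×2 within-subjects ANOVA of this kind can be run, for example, with statsmodels' AnovaRM on long-format data holding one mean score per participant and cell. The sketch below uses randomly generated placeholder scores, not the reported values.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one mean recall score per participant
# for each cue type x music type cell of the 2 x 2 design.
rng = np.random.default_rng(0)
rows = [{"subj": subj, "cue": cue, "music": music,
         "recall": rng.uniform(0, 40)}
        for subj in range(20)
        for cue in ("melody", "title")
        for music in ("song", "instrumental")]
df = pd.DataFrame(rows)

# Two-way within-subjects ANOVA: main effects and their interaction.
res = AnovaRM(df, depvar="recall", subject="subj",
              within=["cue", "music"]).fit()
print(res.summary())
```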



Table 7
Mean percentage recalled correctly, mean FOK ratings (1–5), and accuracy scores in memory and metamemory performance for television theme songs (TVS) and instrumental television themes (TVI) as a function of type of cue in Experiment 6 (SDs are in parentheses).

            Recall                  FOK ratings                       FOK accuracy
Cue type    TVS      TVI     Mean   TVS          TVI          Mean    TVS          TVI           Mean
Melody      21 (12)  19 (9)  20     2.50 (0.76)  2.40 (0.71)  2.45    4.70 (7.61)  6.35 (6.66)   5.53
Title       14 (11)  4 (7)   9      2.35 (0.64)  2.17 (0.74)  2.26    9.15 (6.61)  10.90 (7.51)  10.03
Overall     18       11             2.43         2.29                 6.93         8.63


Experiment 7

In Experiment 7, all songs were taken from current popular music, and all instrumental pieces were taken from classical music. There were also two distinct groups of participants who differed in their familiarity with each of the two music types. To the extent that lyrics influenced the previously observed differences more than familiarity did, we expected to replicate the previous results. Otherwise, we expected to replicate the previous results only with the participant group that was more familiar with songs, and, more importantly, we expected a reversal of the previous results with the participant group that was more familiar with instrumental pieces.

Method

Participants were 40 American University students who received extra credit in psychology classes. None had participated in Experiments 1–6. Based on self-report, participants were assigned either to the pop music or the classical music familiarity group until 20 participants in each group were attained. The method was identical to that of Experiment 1 except for the materials, which were selected after being piloted anew for high familiarity and were all presented in the piano setting in MIDI.

Results

The results are summarized in Table 8. A 2 (cue type) × 2 (music type) × 2 (familiarity group, between participants) mixed-design ANOVA showed a main effect of cue type, F(1,38)=5.88, MSE=0.14, a main effect of music type, F(1,38)=12.64, MSE=0.14, as well as an interaction between the two, F(1,38)=5.89, MSE=0.08. Overall, recall was higher for pop songs in the title cue condition, t(39)=5.66, but not in the melody cue condition, t(39)=0.04. With the pop songs, title cues led to higher rates of recall compared to melody cues, t(39)=6.45, whereas an opposite trend emerged with classical music, t(39)=2.33.

Table 8
Mean percentage recalled correctly, mean FOK ratings (1–5), and accuracy scores in memory and metamemory performance for classical instrumental pieces and current pop songs as a function of participants' familiarity and type of cue in Experiment 7 (SDs are in parentheses).

             Recall                                  FOK ratings                                           FOK accuracy
Group:       Classical           Pop                 Classical                   Pop                       Classical                   Pop
Piece type:  Classical  Pop      Classical  Pop      Classical    Pop           Classical    Pop           Classical     Pop           Classical    Pop
Melody       22 (16)    15 (11)  5 (5)      11 (14)  3.20 (0.61)  2.45 (0.60)   2.34 (0.70)  2.82 (0.58)   -0.50 (6.05)  7.56 (6.88)   8.00 (6.35)  4.11 (4.33)
Title        17 (15)    33 (25)  2 (2)      28 (16)  2.52 (0.95)  2.20 (0.59)   1.55 (0.72)  3.04 (0.83)   3.50 (4.27)   7.28 (7.94)   9.56 (6.29)  3.06 (6.23)
Overall      20         24       4          20       2.86         2.33          1.95         2.93          1.50          7.42          8.78         3.59

More interesting were the results that also took familiarity into consideration. Although there was no interaction between cue type and familiarity, F(1,38)=0.15, MSE=0.14, there was indeed an interaction between music type and familiarity, F(1,38)=7.21, MSE=0.14. The pop group correctly recalled more pop songs than classical pieces in both the title and the melody cue conditions, ts(19)=5.57 and 2.96, respectively. The classical group recalled marginally more classical than pop pieces with melody cues, t(19)=1.84, although, like those in the pop group, they also recalled more pop than classical pieces with title cues, t(19)=2.86. Perhaps because classical group participants had greater exposure to popular music than pop group participants had to classical music, they mirrored the pop group with respect to recall with title cues; however, this was to a lesser degree, and the results with melody cues support the assumption that the groups differed in familiarity with the two types of music.

FOK magnitude results also showed a main effect of cue type, F(1,38)=14.47, MSE=0.39, a main effect of music type, F(1,38)=4.97, MSE=0.54, as well as an interaction between the two, F(1,38)=14.28, MSE=0.29. But more importantly, there was an interaction between music type and familiarity, F(1,38)=46.18, MSE=0.54. Those in the classical group gave higher FOK ratings for classical than pop pieces in the melody cue condition, t(19)=3.36, but not in the title cue condition, t(19)=1.61. Those in the pop group gave higher FOK ratings for pop than classical pieces in the melody cue condition, t(19)=3.51, as well as in the title cue condition, t(19)=7.22. There was no cue type by familiarity interaction, F(1,38)=1.50, MSE=0.39.

There was no effect of cue type or music type in FOK accuracy, F(1,38)=1.96, MSE=20.90, and F(1,38)=0.00, MSE=59.75, respectively. However, an interaction between cue type and music type approached significance, F(1,38)=3.44, MSE=22.37; accuracy was marginally higher with title cues compared to melody cues for classical music, t(39)=1.98, but not with pop music, t(39)=0.46. More interestingly, there was again an interaction between music type and familiarity, F(1,38)=24.40, MSE=59.75. With classical music, classical group participants were no more accurate than pop group participants when given melody cues, t(19)=1.55, and were less accurate than pop group participants when given title cues, t(19)=2.73. With pop songs this situation reversed: classical group participants were more accurate than pop group participants in both the melody and title cue conditions, ts(19)=3.81 and 3.31, respectively. In fact, the accuracy of the classical group was not even above chance level with classical melody cues, t(19)=0.07, and a similar finding was observed with the pop group and pop title cues, t(19)=2.00, even though both groups were accurate above chance level in all other conditions, ts(19)>3.63 for the classical group and ts(19)>2.59 for the pop group. Thus, as shown in Fig. 3, these results appeared to be due to inflated confidence levels for judgments in the more familiar areas. There was no cue type by familiarity interaction, F(1,38)=0.55, MSE=20.90, nor a 3-way interaction between cue type, music type, and familiarity, F(1,38)=0.38, MSE=22.37.

Fig. 3. Calibration curves for Experiment 7.
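The calibration curves in Figs. 2 and 3 plot, for each FOK rating level, the proportion of targets subsequently recalled (cf. Lichtenstein et al., 1982). A minimal sketch of such a curve follows, using invented ratings and outcomes rather than the actual data.

```python
import numpy as np
import matplotlib.pyplot as plt

def calibration_curve(fok, recalled, levels=range(1, 6)):
    """Proportion of targets recalled at each FOK rating level (1-5)."""
    fok = np.asarray(fok)
    recalled = np.asarray(recalled, dtype=float)
    return [recalled[fok == level].mean() if np.any(fok == level)
            else np.nan for level in levels]

# Invented ratings and recall outcomes for one cue condition.
fok = np.array([1, 1, 2, 3, 3, 4, 4, 5, 5, 5])
recalled = np.array([0, 0, 0, 0, 1, 1, 0, 1, 1, 1])

plt.plot(range(1, 6), calibration_curve(fok, recalled), marker="o")
plt.xlabel("FOK rating")
plt.ylabel("Proportion recalled")
plt.title("Schematic calibration curve")
plt.show()
```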

Discussion

Peynircioğlu et al. (1998) had found that metamemory for links between musical pieces and their titles was influenced by whether the pieces were songs or instrumental music. Whereas stronger FOKs arose for titles with melody cues than vice versa in instrumental music, this was not always the case in songs. Also, the accuracy for the stronger FOKs was worse in songs but somewhat better in instrumental music. The present study ruled out some of the possible artifactual explanations for this asymmetry. Results were replicated even when songs and instrumental music were presented in mixed lists, thus weakening strategy differences as an explanation. Instrumentation, melodic complexity, and genre differences, while influencing recall to some degree, also did not appear to contribute to the observed metamemory differences.

Two other possible explanations were mediation by the lyrics in the case of songs and the likelihood of greater familiarity for songs. Research supporting separate representations for melodies and verbal aspects of music notwithstanding (cf. Peretz et al., 2004), semantic and episodic memory for both melodies and titles were unequivocally helped by the lyrics of songs, even when they were not present. The influence of lyrics on FOK ratings and accuracy was less obvious and appeared to be confined to semantic metamemory judgments. When equated for familiarity, songs evoked more intense FOKs than did instrumental music (Experiment 6). But melody and title cues did not act differentially as a function of whether the items were songs or instrumental music. That is, even though the existence of lyrics did add to the familiarity dimension and enhanced FOK ratings in general, it did not appear to influence the observed asymmetries as a function of whether a melody cue or a title cue had been presented. It also appeared that when songs were newly learned, although lyrics aided memory by adding another dimension for recall, they were not yet enmeshed enough with the melody to affect FOK judgments for either titles or melodies (Experiment 4).

The role of familiarity was more convincing as an explanation for producing the asymmetries in melody and title metamemory judgments. In the episodic domain, higher levels of familiarity increased FOK ratings, and although there were no differences as a function of cue type on the magnitude of these ratings, accuracy was higher with title cues, suggesting that familiarity had inflated the FOK ratings in the case of melody cues (Experiment 5). In the semantic domain, when equated for familiarity, the observed differences between songs and instrumental pieces as a function of cue type disappeared, both in terms of magnitude and accuracy (Experiment 6). Finally, when different groups of participants were tested, what emerged as the deciding factor in whether the effect of cue type would emerge as a function of music type was not whether the music had lyrics but rather its relative familiarity to the participants in a particular group, which again inflated the confidence levels (Experiment 7).

The Interactive Theory of how FOKs arise combines both the feelings obtained from the familiarity of the cues themselves (the cue-familiarity strategy, e.g., Reder, 1987) and the efforts at trying to evoke as much of the target as possible (the target accessibility strategy, e.g., Hart, 1965; Koriat, 1993). According to this theory, both cue familiarity and target accessibility play a role in the creation of a metamemory judgment, with cue familiarity generally occurring earlier in the process and target accessibility becoming involved afterwards if the familiarity of the cue is above a particular threshold level (e.g., Koriat and Levy-Sadot, 2001).
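The following toy sketch is one way to caricature this two-stage account; the threshold values and the equal weighting of familiarity and accessed target information are illustrative assumptions, not parameters estimated from the present data or specified by the theory.

```python
def interactive_fok(cue_familiarity, partial_target_info,
                    threshold=0.3, very_high=0.9):
    """Toy two-stage sketch of the Interactive Theory (all values 0-1).

    Cue familiarity is assessed first; only when it exceeds a threshold
    (and is not so high that a target search is bypassed) does partially
    accessed target information also feed into the judgment.
    """
    if cue_familiarity < threshold or cue_familiarity > very_high:
        # Early judgment driven by cue familiarity alone.
        return cue_familiarity
    # Later judgment: familiarity plus accessible target fragments
    # (e.g., unpresented lyrics of a song adding information).
    return 0.5 * cue_familiarity + 0.5 * partial_target_info

# A moderately familiar song cue with accessible lyric fragments:
print(interactive_fok(cue_familiarity=0.6, partial_target_info=0.8))
# A barely familiar instrumental title cue, judged early:
print(interactive_fok(cue_familiarity=0.2, partial_target_info=0.8))
```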

In the present case, it appears that the familiarity participants had with the presented music played a role earlier in the metamemory process, while a direct influence of unpresented lyrics, which were elements of the target, occurred later in the process and only when the cue was sufficiently familiar, although very high familiarity also prompted the participants to bypass a target search phase. Thus, the information more crucial to the determination of the magnitude of the FOKs was the initial familiarity of either the title or the melody that the participants heard. When low or very high familiarity was evoked by the cues, the FOK judgments were made relatively early for both types of music. When high but not overly high feelings of familiarity were evoked, however, the judgments also included information gained after target access attempts, and, in the case of songs, there was slightly more such information because of the lyrics. With instrumental music, melodies were more familiar than titles and hence resulted in higher FOK ratings when given as cues; this was not always the case in songs, however, especially when participants were more familiar with this style of music. In addition, because titles of instrumental music were less familiar than those of songs, there were fewer target access attempts in the former case, and it has been shown that when people report relying on a target accessibility strategy, FOK ratings tend to be higher although not more accurate (Hosey et al., 2009).

In sum, we addressed metamemory differences observed between instrumental music and songs with lyrics. The main asymmetry in metamemory judgments shown in the present experiments as well as in the Peynircioğlu et al. (1998) study, namely that melody cues lead to stronger FOKs for titles than vice versa in instrumental music but not in songs, and that the accuracy for the stronger FOKs is worse in songs but somewhat better in instrumental music, appears to be largely a function of the participants' familiarity with the cues when memory fails. The role of lyrics appears to be confined to adding more familiarity to the cues as well as acting as additional information if a target accessibility strategy is attempted.

References

Gruneberg, M. M., & Monks, J. (1974). "Feeling of knowing" and cued recall. Acta Psychologica, 38, 257–265.

Hart, J. (1965). Memory and the feeling-of-knowing experience. Journal of Educational Psychology, 56, 208–216.

Hiraoka, I., & Umemoto, T. (1981). The effect of titles on the memory for music. Psychologia: An International Journal of Psychology in the Orient, 24, 228–234.

Hosey, L., Peynircioğlu, Z. F., & Rabinovitz, B. (2009). Feeling of knowing for names in response to faces. Acta Psychologica, 130, 214–224.

Hyman, I. E., & Rubin, D. C. (1990). Memorabeatlia: A naturalistic study of long-term memory. Memory & Cognition, 18, 205–214.

Korenman, L. M., & Peynircioğlu, Z. F. (2004). The role of familiarity in episodic memory and metamemory for music. Journal of Experimental Psychology: Learning, Memory, & Cognition, 30, 917–922.

Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100, 609–639.

Koriat, A. (1995). Dissociating knowing and the feeling of knowing: Further evidence for the accessibility model. Journal of Experimental Psychology: General, 124, 311–333.

Koriat, A., & Levy-Sadot, R. (2001). The combined contributions of the cue-familiarity and accessibility heuristics to feelings of knowing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 27, 34–53.

Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306–334). New York: Cambridge University Press.

Metcalfe, J., Schwartz, B. L., & Joaquim, S. G. (1993). The cue familiarity heuristic in metacognition. Journal of Experimental Psychology: Learning, Memory, & Cognition, 19, 851–861.

Nelson, T. O., Leonesio, R. J., Shimamura, A. P., Landwehr, R. F., & Narens, L. (1982). Overlearning and the feeling of knowing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8, 279–288.

Otani, H., & Hodge, M. H. (1991). Mechanisms of feeling of knowing: The role of elaboration and familiarity. The Psychological Record, 41, 523–535.

Peretz, I., Gagnon, L., Hebert, S., & Macoir, J. (2004). Singing in the brain: Insights from cognitive neuropsychology. Music Perception, 21, 373–390.

Peynircioğlu, Z. F. (1995). Covert rehearsal of tones. Journal of Experimental Psychology: Learning, Memory, & Cognition, 21, 185–192.

Peynircioğlu, Z. F., Rabinovitz, B., & Thompson, J. (2008). Memory and metamemory for songs: The relative effectiveness of titles, lyrics, and melodies as cues for each other. Psychology of Music, 36, 47–61.

Peynircioğlu, Z. F., & Tekcan, A. I. (2000). Feeling of knowing for translations of words. Journal of Memory & Language, 43, 135–148.

Peynircioğlu, Z. F., Tekcan, A. I., Wagner, J. L., Baxter, T. L., & Shaffer, S. D. (1998). Name or hum that tune: Feeling of knowing for music. Memory & Cognition, 26, 1131–1137.

Reder, L. M. (1987). Strategy selection in question answering. Cognitive Psychology, 19, 90–138.

Rubin, D. C. (1977). Very long-term memory for prose and verse. Journal of Verbal Learning and Verbal Behavior, 16, 611–621.

Serafine, M. L., Crowder, R. G., & Repp, B. H. (1984). Integration of melody and text in memory for songs. Cognition, 16, 285–303.

Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381–403). New York: Academic Press.

Tulving, E. (1983). Elements of episodic memory. Oxford: Clarendon Press.

Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 359–380.

Wang, A. Y. (1990). The metamemory–memory connection: Further evidence. Journal of Human Behavior & Learning, 7, 14–18.