Auditory Perception and Executive Functions in Simultaneous Interpreters
by
Tsz Man Chan
A thesis submitted in conformity with the requirements for the degree of Master of Arts
Department of Psychology University of Toronto
© Copyright by Tsz Man Chan 2015
Auditory Perception and Executive Functions in Simultaneous Interpreters
Tsz Man Chan
Master of Arts
Department of Psychology
University of Toronto
2015
Abstract
Music and language have been proposed to share processing resources in the brain; an extension of this proposal is that expertise in one of these two skills may enhance processing of the other. This study used experience-dependent linguistic experts – simultaneous interpreters – to address whether linguistic expertise enhances auditory processing outside of the linguistic domain, particularly on measures where musical expertise has been shown to improve performance.
Simultaneous interpreters and non-interpreter controls were compared on several measures,
spanning fine temporal and spectral discrimination, speech-in-noise perception, memory for
pitch, visual working memory and cognitive flexibility. No significant differences were found
between simultaneous interpreters and controls on any of the tested measures. The current
findings are discussed in light of previous studies demonstrating advantages in executive
function in simultaneous interpreters. Possible causes for the discrepancy between the existing
research and this study are also discussed.
Acknowledgments
Firstly, I would like to thank Prof. Claude Alain, without whom I certainly would not be here; he
graciously took me on as a student and gave me an incredible amount of freedom to pursue my
ideas and questions, while providing me with his guidance and support. I am truly grateful for his
listening ear and helping hand, and for his investment in his students. He showed me what a clear
work-life balance is like, and gave me a model for academic rigor and work ethic.
I would like to thank the past and present members of the Alain lab, who have been wonderful
co-workers, mentors, inspiration and friends. In particular, I would like to thank Jeffrey Wong
for his help in task design and being company at my desk, and Stefanie Hutka for her mentorship
and support. Thanks to Profs. Bruce Schneider and Mark Schmuckler, my thesis committee, for
their helpful questions and comments. I would also like to thank those who participated in this
study for their kind donation of their time.
I am truly grateful for my family and friends, who have been understanding, encouraging and
even interested in my work. Each chance to communicate knowledge is a motivation. A special
thank you to the staff and community at the Newman Centre, for helping me to grow in my
understanding of both the world and the divine, and for their friendship and fellowship. Last but
certainly not least, I would like to thank God, who gave the gift of creation and the brain to
understand it. AMDG.
Table of Contents
Acknowledgments .......................................................................................................................... iii
Table of Contents ........................................................................................................... iv
List of Tables ................................................................................................................................. vi
List of Figures ............................................................................................................................... vii
List of Appendices ....................................................................................................................... viii
1 Introduction ................................................................................................................................ 1
1.1 Music and language processing: Why the link? ................................................................. 3
1.2 Functional and behavioural effects of expertise ................................................................. 4
1.2.1 Musicianship ........................................................................................................... 4
1.2.2 Bilingualism ............................................................................................................ 6
1.3 Comparing musicianship and bilingualism: Why not? ....................................................... 8
1.4 Introducing a different perspective: Simultaneous interpreters .......................................... 8
2 Methods ...................................................................................................................................... 9
2.1 Participants ........................................................................................................................ 11
2.2 Tasks ................................................................................................................................. 14
2.2.1 Sensory auditory processing ................................................................................. 14
2.2.2 Cognitive auditory processing .............................................................................. 15
2.2.3 Executive function ................................................................................................ 16
2.3 Analysis ............................................................................................................................. 17
3 Results ...................................................................................................................................... 18
3.1 Participant characteristics ................................................................................................. 18
3.2 Task performance .............................................................................................................. 18
3.2.1 Sensory auditory processing ................................................................................. 20
3.2.2 Cognitive auditory processing .............................................................................. 22
3.2.3 Executive function ................................................................................................ 24
3.2.4 Additional correlations .......................................................................................... 25
4 Discussion ................................................................................................................................ 27
4.1 Interpretation of null findings ........................................................................................... 27
4.2 Study limitations ............................................................................................................... 29
4.3 Future directions ............................................................................................................... 30
5 Conclusion................................................................................................................................ 32
References ..................................................................................................................................... 33
Appendix ....................................................................................................................................... 38
List of Tables
Table 1: Participant characteristics
Table 2: Group statistics on task performance
Table 3: Regression on QuickSIN with listening ability as a covariate
Table 4: Testing the model of age and years of SI experience on task performance
Table 5: Correlation between performance on tasks of executive function and tasks of
sensory/cognitive auditory processing
List of Figures
Figure 1: Line graph of audiogram measurements, averaged over both ears
Figure 2: Bar plot of mistuned harmonic detection thresholds
Figure 3: Bar plot of gap detection thresholds
Figure 4: Bar plot of QuickSIN performance
Figure 5: Bar plot of pitch memory performance
Figure 6: Bar plot of 1-, 2- and 3-back task accuracy
Figure 7: Bar plot of total correct responses on WCST
Figure 8: Bar plot of number of attempts needed to correctly complete six categories on WCST
List of Appendices
Appendix A: Initial analysis with three groups
1 Introduction
Music and language are two fundamental forms of human communication (Patel, 2014), and so it
is not unreasonable to posit that they are processed similarly in the brain. While the two domains
are often considered quite different from each other, they also share remarkable similarities.
Both music and language involve the processing of spectral and temporal information, as well as
putting together individual units to construct a larger meaning (Patel, 2008). Thus, researchers
have examined experts in the domains of music and language to compare and contrast them.
Studying shared processing resources between language and music is the first step towards understanding cross-domain auditory processing, or the processing of information that is outside the domain of
expertise. In the case of music and language, here considered as separate domains, an example
would be the influence of musical expertise on the processing of language. There are many
This has many ramifications, one of the most important being rehabilitation. If one is able to train
a domain of processing through another domain, then those who have deficits in one domain may
be able to strengthen their processing by another pathway; there are already studies that examine
the relationship between music and language in individuals with developmental dyslexia (Huss,
Verney, Fosker, Mead, & Goswami, 2011) and treatment of deaf children (Rochette, Moussard,
& Bigand, 2014). A related concept is that of transfer, wherein the experience in one domain
directly confers enhancements in a separate cognitive domain, whether the separate domain be
more closely related to the original domain (near transfer) or more distantly related (far transfer;
Barnett and Ceci, 2002). Because there are several correlations seen between musical expertise
and general cognitive function, music training has been seen as a unique method of studying
cross-domain processing and transfer (Moreno & Bidelman, 2014). From the perspective of
language expertise, similar correlations have been seen in studies of bilingualism, but
bilingualism may not be an adequate linguistic correlate to musicianship (see below).
One model of language expertise and its influence on auditory function that has not been
thoroughly explored is that of simultaneous interpreters (SIs). Here, SIs are defined as
individuals who concurrently listen to a speaker in one language and relay the message to a listener in a second spoken language. Several further distinctions should be made:
1) Unlike translators, who operate in the written medium, interpreters operate in the spoken
medium;
2) Unimodal SIs (operating in spoken language only) are used as opposed to bimodal SIs,
such as sign language interpreters; and
3) The group of interest consists of simultaneous rather than consecutive interpreters; the latter listen to the entirety of the speaker's content before interpreting.
Despite being a group that performs a task that places demands on auditory perceptual and
cognitive systems, SIs and any potential benefits of SI experience have not been explored
extensively in the literature. Most studies have focused on the cognitive advantages that have
been seen in SIs (reviewed below) rather than any perceptual advantages, and functional
differences have been measured mostly when participants were tested on tasks of an auditory-linguistic nature, ranging from semantic judgments (Elmer, Meyer, & Jäncke, 2010) to
performing a simultaneous interpreting task (Hervais-Adelman, Moser-Mercer, Michel, &
Golestani, 2014).
In particular, there have only been a few studies assessing differences in non-verbal auditory
processes in SIs. Elmer and colleagues (2011) employed a task using a sine wave tone and a
guitar tone played concurrently, and asked SI and non-SI bilingual controls to identify which
tones changed frequency (the sine wave, the guitar tone, or both) while in an fMRI scanner.
Behavioural differences in discrimination were not observed between the SIs and controls, but
the SIs showed enhanced activation of a fronto-parietal network. In another study, Elmer and
colleagues (2014) compared performance of SIs, musicians and controls on a categorical
perception task, where participants had to categorize sounds generated from three continua:
speech-noise, music-noise and speech-music. Not only did SIs and musicians show steeper
perceptual boundaries for sounds in their domain of expertise, but SIs also showed a steeper
perceptual boundary than controls along the music-noise continuum, and these enhancements in
categorical perception were accompanied by differences in ERP signals in both groups,
particularly the N400 and the P600.
SIs have been suggested as a more specific model of language expertise to explore the music-
language relationship (Asaridou & McQueen, 2013) and experience-dependent neuroplastic
change in the language domain (Elmer et al., 2010). The above studies are promising, but only
begin to address the possibility of domain-general auditory advantages for SIs, let alone address
cross-domain processing advantages from language to music. In order to understand the rationale
for employing SIs in the study of cross-domain auditory processing, however, it is important to
begin with an understanding of the current research – that is, of music and language processing,
and the potential relationships between them.
1.1 Music and language processing: Why the link?
Two dominant perspectives on the relationship between music and language processing have
emerged: that of modular processing and that of shared processing. The perspective of modular
processing has largely been informed by studies of individuals selectively impaired in either
linguistic or musical perception and production (see introduction of Peretz, Gagnon, Hébert, &
Macoir, 2004). These studies, taken as evidence of separate loci of processing for language and
music, have given rise to a model that proposes a processing network specific to music (Peretz &
Coltheart, 2003). While most subsequent modular proposals for music and language do not draw such a clear delineation of networks, others have likewise suggested differential processing networks for music- and language-specific mechanisms (Zatorre & Baum, 2012).
Conversely, many researchers have explored the possibility that music and language share processing resources, based on their similarities in mode of presentation and evolutionary
significance in communication. It has been suggested that language and music share a set of
syntactical processing resources, which draw from separate representations in memory (the
shared syntactic integration resource hypothesis (SSIRH); Patel, 2003). This has been
corroborated with both behavioural and neurophysiological evidence; not only do participants
perform worse on a discrimination task when linguistic and musical violations of syntax appear
simultaneously rather than separately (Fedorenko, Patel, Casasanto, Winawer, & Gibson, 2009), but
neural markers of linguistic violations were reduced when presented with violations of musical
syntax and vice versa (Koelsch, Gunter, Wittfoth, & Sammler, 2005; Steinbeis & Koelsch,
2008), suggesting a competition for limited syntactic processing resources.
A prominent framework on transfer between music and language processing at the neural level is
the OPERA hypothesis by Patel (2011; 2014). This hypothesis states that for music training to
drive adaptive plasticity in the speech-processing networks, five criteria must be met by the
training:
1) Overlap: the training must activate the same neural networks as would be used in a
speech correlate;
2) Precision: the training must exert more precise demands than that of speech;
3) Emotion: the training must be associated with a strong positive emotional response;
4) Repetition: the training must be repeatedly associated with the networks activated; and
5) Attention: the training must be associated with focused attention.
Importantly, the OPERA hypothesis makes no a priori assumptions on the directionality of the
benefits conferred between the domains of music and language. While the original hypothesis
focuses on the benefits of music training on speech processing, language training may likewise confer similar benefits to music processing, provided that the language
training fulfills all five criteria listed (Patel, 2011). In addition, the neural networks associated
with this transfer are not limited to sensory processing but can also extend to shared higher-level
cognitive processes such as auditory attention and auditory working memory (Patel, 2014).
1.2 Functional and behavioural effects of expertise
Below is a brief overview of the existing literature in musicians and bilinguals, and potential
cognitive benefits associated with expertise in musicianship or bilingualism. In general, both
musicianship and bilingualism have been associated with not only measures of executive
function such as working memory and cognitive flexibility, but also with perceptual advantages
in audition that extend beyond the domain of expertise. These converging results lend support to
the idea that music and language share common processing resources that can be trained and
accessed in a domain-general context.
1.2.1 Musicianship
Musicians have been studied extensively in terms of cognitive enhancements outside of the
domain of music. In studies comparing musicians and non-musicians, musicians have been
shown to have improved auditory working memory (Pallesen et al., 2010), cognitive flexibility
(Zuk, Benjamin, Kenyon, & Gaab, 2014) and various forms of attention (Rodrigues, Loureiro, &
Caramelli, 2013). Formal music training has been associated with increases in IQ (Schellenberg,
2004; Kaviani, Mirbaha, Pournaseh, & Sagan, 2014) and inhibitory control (Moreno et al.,
2011). These studies suggest that music training has an experience-dependent effect on cognitive
function beyond the processing of music stimuli.
In addition to behavioural differences, musicians have been shown to have differences in brain
structure and functional response to stimuli. Musicians were shown to have increased grey matter
in premotor cortex compared to controls, which correlated with both age of music training onset
and performance on an auditory motor synchronization task (Bailey, Zatorre, & Penhune, 2014).
Importantly, there are also longitudinal studies that have shown an effect of music training in
altering brain structure as well as functional activity. Hyde and colleagues (2009) found that 15
months of musical training correlated with changes in brain structure between two groups of
children that were not different before the beginning of training, and that training was correlated
with performance on auditory and motor tasks. Another study found functional changes as
measured by magnetoencephalography in response to a violin stimulus over the course of one
year of violin lessons, which was not observed in response to a noise stimulus; in addition, the
change was specifically in response to the violin stimulus as opposed to the untrained control
group, which did not exhibit this specificity (Fujioka, Ross, Kakigi, Pantev, & Trainor, 2006).
With regards to cross-domain processing, researchers have examined differences between
musicians and non-musicians in functional brain activity in response to linguistic stimuli.
Musicians were found to have enhanced subcortical activity when encoding speech stimuli from
tone languages, and encoded the pitch contours of the speech sounds with more fidelity than non-
musicians (Wong, Skoe, Russo, Dees, & Kraus, 2007). In a cross-sectional study comparing
musicians and non-musicians between the ages of 3 and 30, musicians were found to have
enhanced auditory brainstem responses to different stop consonants, even with only a few years
of training (Strait, O’Connell, Parbery-Clark, & Kraus, 2014). Musicians have also been found to
have enhanced categorical perception of speech sounds, reflected both in sharper perceptual
boundaries behaviourally and, as measured by electroencephalography, more robust encoding of
the fundamental frequency and first formant at the subcortical level as well as a larger P2
response at the cortical level (Bidelman, Weiss, Moreno, & Alain, 2014). A training study
showed that 12 months of music training enhanced preattentive processing of speech sounds in
children (Chobert, François, Velay, & Besson, 2014). Although there are not always behavioural
correlates of the functional changes observed, and not all studies can infer causality of music
training on this enhancement of cortical/subcortical encoding, the existing research points to an
association between the processing of auditory information in both the domains of music and
language.
1.2.2 Bilingualism
Bilingualism has long been studied for the potential cognitive benefits of being a fluent speaker
of more than one language (see Bialystok et al., 2009 for a review). Differences in executive
function between bilinguals and monolinguals have been observed from infancy and childhood
(Kovács & Mehler, 2009; Bialystok, 1999) to old age (Bialystok, Craik, Klein, & Viswanathan,
2004; Bialystok, Craik, & Ryan, 2006). Similar to musicianship, bilingualism has been linked to
enhanced aspects of attentional control (Bialystok, 1999) and cognitive flexibility (Bialystok et
al., 2006) among others. This has led to direct comparisons between musicians and bilinguals on
these measures, as well as individuals who are both bilingual and musically trained, to see if the
benefits of both are additive.
Bialystok and DePape (2009) compared musicians and bilinguals to monolinguals on two
measures of executive function, and found that both bilinguals and musicians outperformed
monolinguals on response inhibition, with no difference between them. They also found that
both groups outperformed monolinguals on an auditory Stroop task, but the advantage changed
depending on the expertise of the group; musicians performed better on the pitch conflict
condition, while bilinguals performed better on the word conflict condition of the task.
Conversely, another study examining task-switching and dual-task performance found improved
performance in musicians but not in bilinguals, and no interaction between music and language
expertise to suggest additive effects (Moradzadeh, Blumenthal, & Wiseheart, 2014). The lack of
interaction was also found in a language-specific task, alluding to a complex relationship
between music and language processing (Cooper & Wang, 2012; expanded below).
At the functional level, proficient bilinguals have also been shown to have different neural
signatures from monolinguals. Auditory brainstem responses to a syllable presented in
multitalker babble were found to be enhanced in bilinguals compared to monolinguals (Krizman,
Marian, Shook, Skoe, & Kraus, 2012). With regards to executive function, bilingualism has also
been associated with a different network of activation for interference suppression, a form of
inhibitory control (Luk, Anderson, Craik, Grady, & Bialystok, 2010). Researchers have also found effects of short-term training on the processing of linguistic sounds. An early study by Kraus and colleagues
(1995) trained participants to discriminate between two variants of the phoneme /da/ that differed
in their second and third formants, and found both behavioural improvement in discrimination
and an increased mismatch negativity signal. Song et al. (2008) reported that short-term training
on lexical tones elicited improved encoding of the lexical tones in auditory brainstem responses
of adults.
Along the same lines as the research on musical expertise influencing linguistic sound
processing, studies have shown that specific linguistic experience can also affect processing of
musical sounds. Bidelman and colleagues (2013) found that Cantonese speakers performed at a
similar level to musicians in tasks of pitch discrimination, although their performance declined
relative to musicians for tasks requiring higher-level processing. Similarly, Marie and colleagues
(2012) found that Finnish speakers performed comparably to musicians, and better than non-musicians, on a measure of durational processing, echoing a previous finding that Finnish
speakers had enhanced behavioural and neural discriminability in durational processing as
measured by just noticeable difference and the mismatch negativity signal (Tervaniemi et al.,
2006).
Both of the languages tested associate semantic meaning with the acoustic dimension in which their speakers showed improvement relative to controls: Cantonese is a tone language, and Finnish is a quantity language, in which vowel duration can change the meaning of a word.
Thus, these effects may not extend to all bilinguals regardless of language. However, the effects
of musicianship and linguistic expertise may not be additive: Cooper and Wang (2012) tested
English-speaking musicians, Thai-speaking (tone-language) non-musicians and Thai-speaking
musicians on a Cantonese tone learning task, and found that Thai musicians did not outperform
either the English musicians or the Thai non-musicians. The authors suggested that the Thai
musicians may have had two conflicting tone systems, which affected their ability to learn a non-
native system, and that linguistic and non-linguistic abilities interact dynamically in non-native
language perception.
1.3 Comparing musicianship and bilingualism: Why not?
To date, much of the extant research on the music-language relationship has concerned the influence of musicianship on the processing of linguistic sounds. There are
comparatively few studies examining the influence of linguistic expertise on the processing of
musical sounds. And although some studies do suggest a bidirectionality of transfer between the domains of language and music, it remains unclear at which levels of cognitive processing this association occurs. A related question is whether an association between language and music simply reflects common processing resources, or whether abstract concepts from one domain are built up in the other as a consequence of this sharing of resources, which could then be considered a sign of transfer (see Besson, Chobert, & Marie, 2011, for an elaboration of this distinction).
In addition to there being less evidence for associations from language expertise to processing of
musical sounds, there is the issue of what constitutes language expertise (reviewed in Asaridou
& McQueen, 2013). While most research into music training is done with individuals who have
undergone formal training through private or group lessons from a teacher, the studies done on
the influence of language expertise on music processing do not specify any sort of formal
training equivalent. Language acquisition, especially first language acquisition, often occurs implicitly, without training or focused attention (and so would violate the
‘A’ of the OPERA hypothesis), which makes a comparison between the influence of music
expertise and language expertise on auditory processing more difficult.
Along with experience-independent mechanisms for language acquisition, experience-dependent
learning has been shown to be present in the developing infant (Saffran, Aslin, & Newport,
1996). Statistical learning, the extraction of rules based on likelihood of occurrence as observed
in nature, has been demonstrated not only for word boundaries (Saffran et al., 1996) but also for
sequences of tones (Saffran, Johnson, Aslin, & Newport, 1999). With regards to music,
unfamiliar musical grammars can also be acquired with only passive exposure, such that
individuals can recognize whether a new sequence belongs to the passively exposed grammar or
not (Loui, Wessel, & Hudson Kam, 2010; Tillman & Poulin-Charronnat, 2010). With this in
mind, perhaps a more appropriate music analog of language acquisition would be that of the
casual music listener who, despite not engaging in the production of music, is still regularly
exposed to music of a certain culture, similar to how a first language is acquired through
exposure. Wong and colleagues (2009) found ‘bimusicality’ in individuals regularly exposed to
Western and Indian music without production of either, not unlike bilinguals.
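As an illustrative aside, the transitional-probability computation that underlies Saffran-style statistical learning can be sketched in a few lines of Python. The syllable stream and "words" below are hypothetical toy examples, not stimuli from the cited studies:

```python
from collections import Counter, defaultdict

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) over adjacent pairs.

    In statistical-learning accounts of segmentation, word boundaries
    tend to fall where the transitional probability between adjacent
    syllables is low.
    """
    pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent pairs
    first_counts = Counter(stream[:-1])              # counts of pair-initial syllables
    tp = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        tp[a][b] = n / first_counts[a]
    return tp

# Toy "language" of two words, "bi-da" and "ku-po", concatenated:
stream = ["bi", "da", "ku", "po", "bi", "da", "bi", "da", "ku", "po"]
tp = transitional_probabilities(stream)
print(tp["bi"]["da"])  # within-word transition: 1.0
print(tp["da"]["ku"])  # cross-word transition is lower, cueing a boundary
```

A learner tracking only these conditional probabilities can segment the stream without any explicit instruction, which is what makes this mechanism plausible for both word boundaries and tone sequences.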
1.4 Introducing a different perspective: Simultaneous interpreters
Thus, in order to better assess the potential for a bidirectional relationship between music and
language processing, it is important to use a better comparison group that more closely matches
trained musicians in how they learn and practice their skills. A potential analog for musicians
could be SIs. Like professional musicians, these individuals often undergo formal training in order to practice this specific skill, which makes them a more comparable group than bilinguals in general: not only do they receive some form of training, but that experience is likely more standardized than ordinary language acquisition.
Research has shown that SIs demonstrate cognitive enhancements similar to that of musicians. A
study comparing the performance of SIs to non-SI bilinguals and monolinguals indicated that SIs
outperformed the other groups in accuracy on the Wisconsin Card Sorting Task (WCST), a
measure of task-switching ability (Yudes, Macizo, & Bajo, 2011). SIs also possess increased
working memory capacity (when measured by free recall; a review of the conflicting literature
on SIs and working memory can be found in Köpke and Signorelli, 2011), and improved
manipulation of information stored in working memory has been associated with training in
simultaneous interpreting (Macnamara & Conway, 2014). Performance in SI has also been
associated with language proficiency (Tzou, Eslami, Chen, & Vaid, 2011), which is in line with
the theory that working memory and language proficiency comprise two separate sub-skills in
simultaneous interpreting (Christoffels, de Groot, & Waldorp, 2003; Christoffels, de Groot, &
Kroll, 2006).
Functional differences have also been observed in SIs that may be correlated with their specific
expertise. SIs exhibit enhanced N400 responses compared to non-SI bilinguals when exposed to
incongruent lexical pairs both within L1 and L2 (Elmer et al., 2010), as well as when
categorizing sounds both within and outside their domain of expertise, such as speech and guitar
sounds (Elmer et al., 2014). Neuroimaging studies show that SIs recruit not only areas associated
with language processing and prefrontal regions involved in retrieval and maintenance of
semantic information (Rinne et al., 2000), but also that the fronto-parietal network is modulated
with language training in a more domain-general manner, such that it extends beyond linguistic
processing (Elmer, Meyer, Marrama, & Jäncke, 2011). A recent study has also implicated the
dorsal striatum in simultaneous interpreting, with the caudate nucleus and putamen taking on
roles in language ‘set selection’ and non-target inhibition respectively (Hervais-Adelman et al.,
2014).
The cognitive benefits that musicians, bilinguals and SIs demonstrate may result in improved
auditory processing through the reverse hierarchy theory (RHT; Ahissar et al., 2009). The RHT
states that lower-level perception is guided by the higher-level tasks that recruit lower-level resources, and so training higher-order functions will in turn enhance processing in the lower-level networks that are called upon. Although originally used as a model for
perceptual learning, the RHT can be applied to experts; above and beyond bilinguals, SIs are
given specific training that enhances their executive functions, which could then have an
influence on their perceptual capabilities. To our knowledge, there are no previous studies that
have simultaneously assessed auditory perceptual function and executive function in SIs, making
this the first study to incorporate behavioural measures of both into the same design and allowing
for an indirect test of the RHT.
Thus, the hypothesis is that SIs, given the evidence of enhanced executive functions, will access
shared auditory processing resources and exhibit enhanced auditory processing skills that are not
limited to their auditory domain of expertise. Research on tone language speakers suggests that this shared processing may be limited, such that these speakers perform less well on musical tasks with increasing cognitive demands (Bidelman et al., 2013). It is possible that the lack of formal training in such groups limits the exercising of
regions of higher cognitive function, and therefore the acquisition of a tone language in this
context would confer little advantage. However, with their formal experience in addition to
knowing a second (or third, etc.) language, SIs have experience accessing the cognitive resources
necessary to manipulate their sensory auditory processing, which makes SIs an ideal population
for probing the music-language relationship as a language analog to musicians. The tasks chosen
here were based on tasks and cognitive skills that have previously shown a musician advantage;
it was thus expected that, if simultaneous interpreting trains a set of processing resources shared with music processing, SIs should outperform non-SI individuals on these tasks.
This study had the following predictions: 1) SIs would perform better than controls at tasks of
executive function, in agreement with the existing literature; 2) SIs would perform better than
controls at sensory auditory tasks, in support of enhancement of shared sensory auditory
processing resources, according to the RHT; and 3) SIs would perform better than controls at
cognitive auditory tasks that lie within and outside of the domain of language, in support of
enhancement of shared cognitive auditory processing resources. The implication of this third
prediction is the establishment of an avenue through which to study the influence of cross-
domain processing from the perspective of language expertise. An additional prediction is that 4)
performance on the sensory and cognitive auditory tasks is correlated with performance on the
tasks of executive function. This would also be in agreement with the RHT, since it predicts that
training (and therefore enhancement) of the higher cognitive function would have training
benefits for the lower-level functions employed in its execution. Finally, the prediction that 5)
performance on all tasks used in this study is correlated with years of simultaneous interpreting
experience in the SI population would support a model of experience-dependent enhancements in
a language model of intensive training.
2 Methods
2.1 Participants
Participants were initially divided into three groups: SIs; non-SI bilinguals; and English-speaking
monolinguals. SIs (N = 17, 13 female) were included if they were professional interpreters
certified by an association; here, SIs were affiliated with either the Association of Translators
and Interpreters of Ontario (ATIO), Association internationale des interprètes de conférence
(AIIC) or the Ministry of the Attorney General and the Court Interpreters Association of Ontario
(CIAO). Interpreters may have specific training in simultaneous interpreting in any field (court,
conference or health interpreting, etc.). Due to projected difficulties in recruitment of SIs, three
students in the Master of Conference Interpreting program at York University were included in
the SI group. From the original 19 interpreters, one SI was excluded due to not being affiliated
with any of the above groups, and another for incomplete data, for a final total of 17 participants
with an average of 14.24 years of simultaneous interpreting experience. All SIs rated themselves
fluent in both English and at least one other language.
The first control group consisted of non-SI bilinguals (N = 14, 9 female). A second control
group, consisting of English-speaking monolinguals (N = 3, 2 female), was also included in this
study to determine whether the formal training that SIs undertake has an effect on auditory
processing above and beyond that of bilingualism, since bilinguals are already known to possess
enhancements in certain executive functions (Bialystok & DePape, 2009). Non-SI bilinguals
were bilinguals (or multilinguals) without any additional interpreting expertise or training; two
members of this control group were excluded based on reporting prior interpreting experience.
However, due to recruitment difficulties, the two control groups were collapsed into a
single control group for the purposes of this initial analysis, for a final count of 17 participants in
each group (SIs vs. controls). Additional analyses were conducted using a three-group
comparison, and are included in the appendix.
Participants from both groups were between the ages of 18 and 65, and the groups were matched on age, education and musical experience (where possible; see below for further clarification). Other exclusion criteria included a history of brain trauma or neuropsychiatric disorder. One participant
in each group claimed impaired hearing in one ear, which was confirmed by audiogram results.
The sample size of the group was determined based on two sample size calculations using
G*Power (Faul, Erdfelder, Lang, & Buchner, 2007). Effect sizes from two papers were taken as
reference: the partial eta-squared value from the Wisconsin Card Sorting Task (ηp2 = 0.18) in
Yudes and colleagues (2011) and the Cohen’s d for the same task, then converted into Cohen’s f
(d = 0.99; f = 0.495) in Macnamara and Conway (2014). The conventional values of α = 0.05 and
(1 – β) = 0.8 were used in both calculations. Based on the sample size calculations, the larger of
the two sample sizes was taken.
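The conversion from Cohen's d to Cohen's f for two equal groups (f = d / 2) and the resulting per-group sample size can be sketched as follows. This is an illustrative normal-approximation calculation, not G*Power's exact noncentral-t computation, so its output may differ from G*Power by a participant or so; the function names are ours.

```python
import math
from statistics import NormalDist

def cohens_d_to_f(d: float) -> float:
    """For two groups of equal size, Cohen's f = d / 2."""
    return d / 2

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size for a two-sample t-test, using the
    normal approximation n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2.
    G*Power uses the exact noncentral t distribution, so its answer
    can differ slightly from this sketch."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2
    return math.ceil(n)
```

With d = 0.99 (Macnamara & Conway, 2014), this approximation yields 17 participants per group, consistent with the sample recruited here.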
All participants completed a demographic questionnaire that asked the following, in addition to
general screening for hearing deficits and education level:
Language background: Participants were asked to identify their native language(s), as
well as all languages known and level of proficiency in speaking, listening, reading and
writing, using a seven-point Likert scale (1 being very poor and 7 being native-like). In
addition, participants were asked to estimate the percentage each language is used on a
daily basis, as well as the language in which they received formal education, their current
preferred language of communication, the language in which they perform mental
operations and the language most commonly spoken at home. Finally, participants were
asked to report any scores of standardized tests for language proficiency.
Musical background: Participants were asked to identify any formal instruction they may
have had in music, the instrument studied, age they started with that instrument, current
proficiency and musical style studied. In addition, participants were asked to state the
number of hours of practice within the last 24 months, as well as rate their ability to read
music and notate a dictated melody on an eight-point Likert scale (0 being not at all and 7
being fluently). Participants were also asked to report any experience with music theory
or ear training, and whether they or any family members possess absolute pitch, or the
ability to name notes by ear without a reference; family members are included since
absolute pitch has been associated with genetic components (for a review see Tan,
McPherson, Peretz, Berkovic, & Wilson, 2014).
No language restrictions were placed on the other languages spoken by SIs and non-SI
bilinguals, since it was unknown what languages would be available from the SI population and
the specificity of the population of interest made it difficult to gather a large sample with too
many exclusion criteria. Despite the evidence of specific language experience having an effect
on certain components of processing (Bidelman et al., 2013; Marie et al., 2012), the aim was to
examine whether the formal training/experience of SIs was associated with enhancements in
processing above and beyond that of a specific language, which can still be assessed with a
heterogeneous group of languages. Where possible, the non-SI bilinguals were matched to SIs
for second language to minimize between-group variability.
No musical experience restrictions were placed on SIs, for the same reason as the language
restrictions. Where possible, controls were matched to SIs in level of musical experience as well
as current practice. Even though most participants had received some formal music instruction, it
was minimal (average years of training: M = 5.59) and many did not continue to practice
(number currently practicing: N = 7). However, since there has been evidence to suggest that
music training leaves neuroplastic changes even after individuals cease music training (Skoe &
Kraus, 2012), level of music training was matched as closely as possible between interpreters
and controls.
Participants were recruited through several avenues: 1) flyer advertising downtown, 2)
contacting associations of interpreters such as the Association of Translators and Interpreters of
Ontario and the local members of the International Association of Conference Interpreters and 3)
accessing the participant database at Baycrest. Written, informed consent was obtained from all
participants prior to the start of the test session. Participants were given the demographic
questionnaire to complete before beginning the task battery. All testing took place at Baycrest
Centre (3560 Bathurst Street, Toronto, ON, M6A 2E1).
2.2 Tasks
Participants completed the tasks below. The tasks were grouped under the three categories of 1)
sensory auditory processing, 2) cognitive auditory processing and 3) executive function. These
groupings were chosen to determine whether there is a limit to the shared resources between
language and music processing between higher and lower levels of processing, and whether this
is correlated with performance on tasks of executive function.
2.2.1 Sensory auditory processing
These tasks were selected to measure basic sensory discriminations that do not require the
additional storage or manipulation of incoming auditory information.
Pure tone threshold: This task was used as a measure of peripheral auditory processing (as in
Zendel & Alain, 2012). Pure tone thresholds were determined for every octave between 250 and
8000 Hz (250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, 8000 Hz) using an audiometer (GSI 16).
Measurements were taken for the left and right ear, determined in decibels hearing level (dB
HL). Participants were asked to respond whenever they heard a tone; the intensity was decreased in steps of 5 dB HL until participants no longer responded. Once no response was given, the intensity was increased by 5 dB HL and that level was recorded as the threshold. A grand average measurement
was calculated by averaging the thresholds for each frequency across both ears into a single
value, which was then used in the statistical analysis.
Mistuned harmonic detection: This task was used as a measure of fine spectral discrimination.
The stimuli were harmonic complexes of 12 pure tones, with a fundamental frequency of 200
Hz. The third harmonic (at 600 Hz) was mistuned, beginning at 696 Hz. Participants were
asked to indicate which of two presented tones had a mistuned component in a forced-choice
paradigm. After three correct responses the amount of mistuning was reduced by 50%, while
after one incorrect response it was increased by 32%. The mistuning reduction was changed to 24% after the first two reversals (changes in the direction of the adaptive track), and thresholds were determined by
averaging the last eight reversals. The final measurement was an average of three blocks of
testing.
Gap detection: This task was used as a measure of fine temporal discrimination (as in Zendel &
Alain, 2012). The stimuli were two tones (1000 Hz) presented with a small gap in between, and a
comparison stimulus of one continuous tone. Participants were presented the stimuli sequentially
and asked to respond in a forced-choice paradigm. After three correct responses, the gap size was
reduced first by 8 ms and subsequently by 50% of the gap; after one incorrect response the gap
size increased first by 8 ms and subsequently by 50%. There were three testing blocks, each
lasting until there were twelve reversals, with the threshold being an average of the last eight
reversals. The final measurement was an average of the three blocks, and the reported score is
the gap threshold in milliseconds.
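Both adaptive tasks estimate a threshold by averaging the stimulus values at the last eight reversals of the track. A minimal sketch of that reversal-based estimate (an illustration only; the function names are ours, and the original task code was not available):

```python
def reversal_points(track):
    """Return the stimulus values at reversals, i.e. points where the
    adaptive track changes direction (e.g. from decreasing to increasing)."""
    reversals = []
    prev_dir = 0
    for a, b in zip(track, track[1:]):
        direction = (b > a) - (b < a)  # +1 increasing, -1 decreasing, 0 flat
        if direction != 0:
            if prev_dir != 0 and direction != prev_dir:
                reversals.append(a)  # 'a' is the turning point
            prev_dir = direction
    return reversals

def staircase_threshold(track, last_n=8):
    """Average the stimulus values at the last `last_n` reversals."""
    tail = reversal_points(track)[-last_n:]
    return sum(tail) / len(tail)
```

For a track that descends and then alternates between two stimulus values, the estimate converges on their midpoint, which is the logic behind averaging reversals.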
2.2.2 Cognitive auditory processing
These tasks were selected to measure auditory processing that involves the manipulation of the
incoming stimuli, either in a within-domain context or a cross-domain context.
Speech-in-noise: The ability to process speech in noise was measured using the QuickSIN test
(Speech-In-Noise; Version 1.3). Participants were presented four sets of six spoken sentences
embedded in talker babble, and asked to repeat the sentence back to the experimenter. Each
sentence in a set was presented at a different signal-to-noise ratio (SNR), with the SNR
decreasing between each sentence. Each sentence contained five keywords, and a total score out
of 30 was tallied for each set, which was then subtracted from 25.5 to determine the signal-to-
noise loss. This score represents the signal-to-noise ratio necessary for identifying 50% of the
keywords presented in a set. The results of all the sets were averaged into the value used in the
analysis. This task was used as a measure of within-domain higher-level processing, since it
involves not only parsing out the relevant information from noise but also accessing an
individual’s lexical knowledge in order to respond with the correct word.
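The QuickSIN scoring rule described above can be sketched as follows (a simple illustration of the published scoring formula; the function names are ours):

```python
def quicksin_snr_loss(keywords_correct):
    """SNR loss for one QuickSIN list: six sentences of five keywords each,
    scored as 25.5 minus the total number of keywords repeated correctly."""
    if len(keywords_correct) != 6 or not all(0 <= k <= 5 for k in keywords_correct):
        raise ValueError("expected six per-sentence scores between 0 and 5")
    return 25.5 - sum(keywords_correct)

def mean_snr_loss(lists):
    """Average SNR loss over the lists administered, as used in the analysis."""
    return sum(quicksin_snr_loss(lst) for lst in lists) / len(lists)
```

Lower values indicate better speech-in-noise performance, since they reflect the SNR needed to identify 50% of keywords.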
Pitch memory: This task was originally used by Bidelman and colleagues (2013) as a measure of
short-term memory for pitch. A sequence of four tones (each 350 ms in duration) was presented,
followed by a 1.5 s silent period. A probe tone was then presented, and participants were asked
to determine as quickly as possible whether the tone presented was in the previous sequence.
Participants were given feedback on whether they answered correctly. To avoid tone labeling and
familiar tonal sequences such as a Western musical scale, sequences were generated randomly to
achieve tonal ambiguity. Each testing block contained 50 trials, of which half had a probe tone
that did not occur in the sequence. Scores for the task were calculated as d’, the difference between the Z-transformed hit and false-alarm rates. This test was done four times (a total
of approximately 30 minutes) and the final d’ score was an average of all blocks. This task was
used as a measure of cross-domain higher-level processing, since it involves memory processes
for a task of audition that falls outside the domain of language.
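The d′ computation can be sketched with the standard-library inverse normal CDF. The thesis does not state how hit or false-alarm rates of exactly 0 or 1 were handled, so the correction used here (adding 0.5 to each count) is an assumption:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate), with a +0.5 count
    correction (an assumption) to keep rates away from 0 and 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)
```

A d′ of 0 indicates chance-level discrimination; higher values indicate better separation of old from new probe tones.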
2.2.3 Executive function
These tasks were selected to measure aspects of cognitive processing common across sensory
domains, and so the visual versions of the tasks were selected to maximize the difference
between these tasks and the cognitive auditory tasks.
N-back working memory: This task measured the ability to store visual information in short-term memory, to update that store continuously with new incoming information, and to compare its contents with the currently presented stimulus. Numeric digits were presented on a
screen, with an inter-stimulus interval of 500 ms and a presentation duration of 500 ms.
Participants were asked to respond whenever the current stimulus matched the one presented n items earlier, where n was one, two or three depending on whether the run was a one-back, two-back or three-back task. Each run of the task consisted of 124
stimuli with 24 targets. Two runs were conducted on each condition (one-, two- or three-back)
and accuracy (calculated as hits / total targets) was averaged over both runs.
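Target identification and the hits/targets accuracy score can be sketched as follows (illustrative only; the function names are ours):

```python
def nback_targets(stimuli, n):
    """Indices at which the stimulus matches the one presented n items earlier."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def nback_accuracy(stimuli, response_indices, n):
    """Accuracy as hits / total targets, matching the scoring described above.
    `response_indices` is the set of positions at which a response was made."""
    targets = nback_targets(stimuli, n)
    hits = sum(1 for i in targets if i in response_indices)
    return hits / len(targets)
```

Note that this score counts only hits on targets; false alarms on non-targets do not enter the accuracy measure as described.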
Wisconsin Card Sorting Test (WCST): This task measures mental flexibility in individuals
through inference of rules and switching between rules. Participants were given a deck of cards,
each with a certain number of shapes in a certain colour, and were asked to sort them by a certain
rule; they were not told what sorting rule to use, only whether their responses were correct or
not. After ten correct responses, the sorting rule was changed, but without informing the
participant. The participant then had to infer the new rule from the feedback on whether each response was correct. The task was completed when either the participant had
successfully sorted the cards in each dimension of shape, colour and number, or the maximum
number of cards had been reached. The version used was the complete WCST, with all 128 cards
from 2 decks. Participants were recorded on the number of categories completed, total number of
cards correctly sorted, and the number of cards sorted incorrectly in several types of errors.
2.3 Analysis
Statistical analyses were run in the statistical package R (R Core Team, 2015). One-way analysis
of variance (ANOVA) tests were conducted for each of the seven tasks performed, with Group
(SI or Control) being the between-subjects factor. For the n-back working memory task, a 2 x 3
ANOVA [Group (SI vs. control) x Condition (1-, 2-, 3-back)] was conducted. All analyses were
conducted with an a priori alpha level of α = 0.05. Effect sizes are reported in the form of
Cohen’s d and were calculated with Becker’s Effect Size Calculator
(http://www.uccs.edu/~lbecker/).
Because the study has not yet met its original sample size, it is statistically underpowered, and
these results should be interpreted with that in mind. Finally, the tests assume normally distributed data, but the small sample sizes make this assumption difficult to verify. Hence, the results should be interpreted with caution.
3 Results
3.1 Participant characteristics
Because both the interpreter and control populations were heterogeneous in age and years of
musical training, two factors known to influence performance on at least several of the tasks used
in this study (mistuned harmonic, gap detection, QuickSIN: Zendel & Alain, 2012; WCST,
Rhodes, 2004), t-tests were conducted to ensure there were no significant differences. No
significant differences were found between the two groups in any of these measures (age: t(32) =
0.850, p = 0.402; music: t(32) = 1.063, p = 0.296; education: t(32) = 0.622, p = 0.538). None of
the participants reported that they had absolute pitch. The means and standard deviations of these
variables in each group are reported in Table 1.
Simultaneous Interpreters Controls
Age 47.24 (12.29) 43.53 (13.13)
Years of Education 19.76 (4.25) 19 (2.76)
Years of Music Instruction 7.06 (9.85) 4.06 (6.20)
Years of SI 14.24 (9.79) -
English Listening Ability 6.65 (0.49) 6.24 (0.75)
Table 1: Participant characteristics. Means and standard deviations (in brackets) are reported.
3.2 Task performance
A summary of the means and standard deviations can be found in Table 2.
Task                          SI: Mean (SD) [95% CI]          Control: Mean (SD) [95% CI]     p        Cohen's d
Pure-tone threshold           14.53 (9.09) [9.86–19.21]       16.65 (11.96) [10.50–22.80]     0.5653   0.20
  250 Hz                      18.67 (12.06) [12.48–24.88]     16.47 (8.39) [12.16–20.78]      0.5402   0.21
  500 Hz                      17.65 (9.82) [12.60–22.70]      16.44 (9.33) [11.64–21.24]      0.716    0.13
  1000 Hz                     12.35 (9.94) [7.24–17.46]       15.41 (10.80) [9.86–20.96]      0.3966   0.29
  2000 Hz                     9.41 (5.63) [6.52–12.30]        17.02 (12.17) [10.77–23.29]     0.0255   0.80
  4000 Hz                     11.02 (11.99) [4.86–17.19]      12.65 (16.40) [4.21–21.08]      0.7449   0.11
  8000 Hz                     18.09 (15.14) [10.30–25.87]     21.91 (22.19) [10.50–33.32]     0.5615   0.20
Mistuned harmonic             4.68 (3.66) [2.80–6.56]         6.68 (3.80) [4.73–8.63]         0.1283   0.54
Gap detection                 1.81 (0.77) [1.42–2.21]         1.39 (1.13) [0.80–1.97]         0.2046   0.43
QuickSIN                      1.35 (1.35) [0.66–2.05]         2.78 (2.43) [1.53–4.03]         0.0419   0.73
Pitch memory                  0.94 (0.65) [0.61–1.28]         1.24 (0.69) [0.88–1.59]         0.2064   0.45
N-back accuracy
  1-back                      0.98 (0.03) [0.96–0.99]         0.96 (0.05) [0.94–0.99]         0.5845   0.49
  2-back                      0.67 (0.14) [0.60–0.74]         0.65 (0.11) [0.59–0.71]         0.7916   0.16
  3-back                      0.37 (0.17) [0.29–0.46]         0.32 (0.13) [0.25–0.38]         0.4907   0.33
WCST
  Categories                  8.53 (2.32) [7.33–9.72]         7.94 (2.41) [6.70–9.18]         0.4739   0.25
  Total correct               103.88 (13.41) [96.99–110.78]   100.65 (13.91) [93.50–107.79]   0.4949   0.24
  Attempts to 6 categories    87.35 (20.17) [76.98–97.72]     94.59 (26.39) [81.01–108.16]    0.3759   0.31
  Errors before 6 categories  18.29 (15.74) [10.20–26.39]     21.76 (16.49) [13.28–30.25]     0.5347   0.22
Table 2: Group statistics on task performance.
3.2.1 Sensory auditory processing
No significant differences were found in the average pure tone hearing threshold (F(1, 32) =
0.338, p = 0.565). A 2 x 6 ANOVA [Group (SI vs. control) x Frequency (250Hz, 500Hz,
1000Hz, 2000Hz, 4000Hz, 8000Hz)] was also conducted to determine if there were any
differences between groups at certain frequencies; there was no main effect of Group (F(1, 192)
= 1.423, p = 0.234) or Frequency (F(5, 192) = 2.034, p = 0.076) and no Group x Frequency
interaction (F(5, 192) = 0.677, p = 0.641). To determine if pure-tone thresholds were related to
performance on the other measures of sensory auditory processing, correlations were run.
Because pure-tone threshold correlated strongly with age (r(32) = 0.543, p < 0.001), age was
included as a predictor when pure-tone thresholds were correlated with the other sensory
measures. Figure 1 illustrates the average hearing thresholds at each frequency tested.
Figure 1. Line graph of audiogram measurements, averaged over both ears. Error bars represent the standard error
of the mean.
No significant differences were found in performance on the mistuned harmonic detection task
(F(1, 32) = 2.438, p = 0.128; Figure 2). The partial correlation between task performance and
pure tone threshold was calculated, with age as a covariate, at 250Hz, 500Hz, and 1000Hz. For
each frequency, the model used was:
Y = β0 + β1(Hearing Threshold) + β2(Age)
The score on the mistuned harmonic task was not correlated with hearing thresholds measured at
250Hz (r(31) = 0.152, p = 0.391), 500Hz (r(31) = 0.149, p = 0.402) or 1000Hz (r(31) = 0.185, p
= 0.296), suggesting that performance on this task is not affected by peripheral auditory
processing ability.
Figure 2. Bar plot of mistuned harmonic detection thresholds. Error bars represent the standard error of the mean.
No significant differences were found in performance on the gap detection task (F(1, 32) =
1.677, p = 0.205; Figure 3). However, the score on the gap detection task was positively
correlated with hearing threshold measured at 1000Hz when controlling for age (r(31) = 0.338, p
= 0.045), suggesting that peripheral hearing loss at this frequency hinders performance on the
task.
Figure 3. Bar plot of gap detection thresholds. Error bars represent the standard error of the mean.
3.2.2 Cognitive auditory processing
A main effect of Group was found for the QuickSIN test (F(1, 32) = 4.491, p = 0.042; Figure 4).
Because the test was conducted in English and English was not the first language of most of the
participants (only 2 in the SI group and 4 in the non-SI group claimed English as their native
language), participants’ self-rated listening ability in English was included as a covariate. When
included in the model, the effect of Group was no longer found to be significant; however, self-
rated listening ability was found to be significant (see Table 3).
Model fit: R square = 0.3375, F(2, 31) = 7.898
Variable Unstd. beta Std. Error Std. beta t-value p-value
(Intercept) 12.2953 3.0330 4.054
Group -0.7981 0.6266 -0.1963 -1.274 0.2123
Listening ability -1.5261 0.4817 -0.4882 -3.168 0.003439*
Table 3: Regression on QuickSIN with listening ability as a covariate. * p < 0.05.
Figure 4. Bar plot of QuickSIN performance. Error bars represent the standard error of the mean.
No significant differences were found in performance on the pitch memory task (F(1, 32) =
1.424, p = 0.242; Figure 5). A second analysis was conducted after removing 4 native tone-
language speakers from the analysis (all in the control group), since it has been found that tone-
language speakers possess an advantage in this task (Bidelman et al., 2013); again, no significant
differences were found after these participants were removed (F(1, 28) = 0.471, p = 0.498).
Figure 5. Bar plot of pitch memory performance. Error bars represent the standard error of the mean.
3.2.3 Executive function
The 2 x 3 ANOVA found a significant main effect of Condition (F(2, 96) = 249.720, p < 0.001)
but no effect of Group (F(1, 96) = 1.899, p = 0.171) and no interaction between Condition and
Group (F(2, 96) = 0.310, p = 0.734). Between-group comparisons on each of the three levels (1-,
2- and 3-back) were run; the values are reported in Table 2 (see Figure 6 for an illustration).
Figure 6. Bar plot of 1-, 2- and 3-back task accuracy. Error bars represent the standard error of the mean.
No significant differences were found in performance on the WCST, on either the total correct
(F(1, 32) = 0.477, p = 0.495; Figure 7) or number of categories completed (F(1, 32) = 0.525, p =
0.474). It was noted that the WCST was conducted differently than it was in Yudes et al. (2011),
in which the task ended either when the participant finished six categories, or the participant
sorted all 128 cards. Because of this, a separate analysis was conducted using measures derived
from the original data that were similar to those of the other study. Despite this, there were still
no significant differences found on the number of cards required to complete six categories (F(1,
32) = 0.807, p = 0.376; Figure 8) or the number of errors (F(1, 32) = 0.394, p = 0.535).
Figure 7. Bar plot of total correct responses on WCST. Error bars represent the standard error of the mean.
Figure 8. Bar plot of number of attempts needed to correctly complete six categories on WCST. Error bars represent
the standard error of the mean.
3.2.4 Additional correlations
Several other comparisons were planned for the study. For the measures below, the tasks of
executive function were each represented by one value. The n-back task was represented by
accuracy on the 2-back task, since the 1-back task hit ceiling and the 3-back task contained more
variance; the WCST was represented by the number of attempts to reach six categories, as in
Yudes et al. (2011).
3.2.4.1 Correlation within task groups
To assess whether the tasks were grouped into measuring similar processes, pairwise correlations
were run between tasks in the same group (with the exception of pure-tone threshold, which was
already correlated with the mistuned harmonic and gap detection tasks). None of the correlations
were significant (sensory auditory processing: r(32) = 0.274, p = 0.117; cognitive auditory
processing: r(32) = 0.070, p = 0.694; executive function: r(32) = -0.107, p = 0.549).
3.2.4.2 Effect of years of SI experience
A correlation between years of SI experience and task performance was planned. It was expected
that years of SI experience would correlate with age, which a correlation test demonstrated (r(32)
= 0.386, p = 0.024). Thus, a model including both age and years of SI experience as factors was
tested on task performance for each of the seven tasks:
Y = β0 + β1(Age) + β2(Years of SI experience)
The results are summarized in Table 4. The model was significant only for the pure-tone
threshold (R2 = 0.3062, p = 0.003), although this was driven by the effect of age (p = 0.001) and
not by years of SI experience (p = 0.488). Because the model was not significant for any of the
other tasks, the effect of years of SI experience alone on task performance is not reported.
Task R-squared p-value
Audiogram 0.3062 0.0034*
Mistuned harmonic 0.0313 0.6111
Gap detection 0.0470 0.4739
QuickSIN 0.1033 0.1844
Pitch memory 0.0356 0.5703
2-back working memory 0.1176 0.1437
WCST 0.1050 0.1792
Table 4: Testing the model of age and years of SI experience on task performance. * p < 0.05.
3.2.4.3 Relationship between executive function tasks and auditory perceptual tasks
To test the RHT, which states that higher cognitive experiences influence perceptual attuning
(Ahissar et al., 2009), correlations were run between performance on the tasks of executive
function and performance on the sensory and cognitive auditory processing tasks. The results of
these correlations are reported in Table 5. The correlation between the gap detection task and the
2-back working memory task approached significance (r(32) = -0.3255, p = 0.0603), but none of
the correlations reached significance at p = 0.05.
Task Correlation with WCST Correlation with 2-back
r p-value r p-value
Mistuned harmonic 0.1345 0.4483 -0.2079 0.2381
Gap detection -0.0872 0.6238 -0.3255 0.0603
QuickSIN -0.2500 0.1539 -0.1645 0.3525
Pitch memory 0.1680 0.3423 0.1369 0.4401
Table 5: Correlation between performance on tasks of executive function and tasks of sensory/cognitive auditory processing. All correlations reported have a df of 32.
4 Discussion
SIs did not significantly differ from controls in measures of sensory auditory processing or
executive function. A difference was found in a within-domain cognitive auditory processing
task, although the effect disappeared when taking into account self-rated English listening
ability. The null findings may be reflective of either the heterogeneity of the two groups tested, a
lack of sensitivity in the tasks selected, or a limitation on how or to what degree simultaneous
interpreting expertise/training may have an effect on auditory processing resources.
4.1 Interpretation of null findings
The lack of an effect between the groups observed in most of the tasks used in this study may
reflect a limitation of the experience conferred to SIs. In the framework of the OPERA
hypothesis, simultaneous interpreting may not fulfill one or more of the five conditions in the
same way that musicianship does. For example, simultaneous interpreting may not enhance
precision, or at least not in the same way that a musician learns to tune a musical instrument, or
to synchronize to a beat. Moreover, because the control group was not separated into bilinguals and monolinguals, any effects conferred by bilingualism alone on task performance cannot be parsed out.
It is of interest to note that the one task that most clearly showed a difference between groups in
performance was the QuickSIN, the task most directly linked to SIs’ expertise. While this
difference may be explained by the covariate of self-rated listening ability (discussed further below), this
finding suggests that while there may be experience-dependent change conferred to the SIs based
on their own experience, this change appears to be limited to the domain of their expertise. This
domain-specific processing advantage at the behavioural level has previously been reported in the literature on language and music expertise (Bialystok & DePape, 2009). The apparent absence of perceptual differences between SIs and non-SIs, especially on tasks that have distinguished musicians from non-musicians (Zendel & Alain, 2012), suggests that simultaneous interpreting may not engage higher-order cognitive structures in a way that would produce the lower-level perceptual attuning predicted by the RHT. This suggestion is also supported by the lack of correlation between performance on the executive function tasks and the sensory auditory tasks, and contrasts with findings linking functional changes in SIs to domain-general executive functions that may modulate auditory processing (Elmer et al., 2011; Hervais-Adelman et al., 2014).
No significant differences were observed on the pitch memory task, the task best positioned to examine the effects of language expertise on pitch perception and most directly connected to the concept of transfer as proposed by Besson and colleagues (2011). Given that performance on neither the WCST nor the n-back task was correlated with performance on the pitch memory task, this study provides no evidence that simultaneous interpreting confers transfer of the kind predicted by the RHT. It is therefore likely that perceptual benefits extending beyond the domain of language are present only when the source language demands that distinction (as in Bidelman et al., 2013, and Marie et al., 2012), and that they are not merely a consequence of bilingualism or simultaneous interpreting in general.
Alternatively, the null findings may simply reflect the tasks themselves, which may not be suited to capturing any changes present in this population of interest. Most of the tasks were chosen both because they allowed observation of shared resources at multiple levels of auditory processing and because musicians had previously shown benefits on them (Zendel & Alain, 2012; Bidelman et al., 2013). If musicianship and language expertise indeed act upon and hone the same set of processing resources, one might expect similar advantages to be conferred. The lack of an effect of simultaneous interpreting on task performance may therefore also indicate that the tasks are not sensitive enough to differences between the two groups tested here.
The lack of correlation in performance between the tasks, as they were grouped, suggests that it
may not be helpful to discuss these tasks in this manner. In particular, the lack of correlation
between the QuickSIN and the pitch memory task raises the possibility that these two tasks reflect two separate higher cognitive auditory processes, or that at this level of processing individuals draw on two different knowledge systems stored in different networks, even if these share online processing (Patel, 2003; Fedorenko et al., 2009); in that case, it would be the knowledge systems that are refined, not the processing resources themselves. With regard to the executive function tasks, several models describe working memory and cognitive flexibility as separate components (Diamond, 2013; Miyake & Friedman, 2012), which may help to explain the lack of correlation between performance on the n-back task and the WCST.
Furthermore, several studies have shown advantages in the neural encoding of speech and music sounds for music and language experts without corresponding improvements in behavioural performance (Chobert et al., 2014; Tierney, Krizman, Skoe, Johnston, & Kraus, 2013; Bidelman, Gandour, & Krishnan, 2011). Functional change without behavioural change does not rule out the possibility that music and language access the same set of resources to process incoming stimuli. Similar functional changes without matching behavioural change have also been observed in SIs on non-verbal auditory tasks, implicating top-down modulatory regions (Elmer et al., 2011). Because the effect sizes observed here are very small, any such changes may simply not be observable at the behavioural level in healthy populations.
4.2 Study limitations
Regarding the visual n-back working memory test, the musicianship literature offers conflicting evidence as to whether visual working memory is actually better in musicians than in those without musical training. While some studies have reported improvements in visual working memory (George & Coch, 2011; Amer et al., 2013), others have shown improvements only in the auditory domain (Cohen et al., 2011). The task chosen here to assess working memory also differs from those typically used in studies reporting a visual working memory advantage, which relied on block or digit span tests.
While it is surprising that the current study could not replicate the WCST results of Yudes et al. (2011) and Macnamara and Conway (2014), there are several differences between those studies and this one that should be considered. Firstly, Yudes et al. (2011) compared SIs with bilinguals who were uniform in their L1 and L2. Macnamara and Conway (2014), on the other hand, used a within-subjects design and compared interpreters in training at the
start of their education and at the end, examining the association with training from a different
perspective. In contrast, the bilinguals and interpreters in the current sample are more
heterogeneous in their language capabilities, as well as their L1 and L2; this added variability
may be obscuring differences between these groups that would otherwise be present. Secondly, the test was conducted differently in this study than in Yudes et al. (2011), although a subsequent analysis following the constraints of their study also found no difference in performance between
SIs and controls. Furthermore, Yudes and colleagues restricted their study to unbalanced-late bilinguals, whereas our sample included both unbalanced-late and balanced bilinguals among the interpreters and non-interpreter controls.
Time constraints made it difficult to realize the initial design of three participant groups, with separate English monolingual and bilingual controls. The current findings are based on aggregating two groups of participants that were originally intended to remain separate in order to dissociate potential effects of simultaneous interpreting from those of bilingualism. Any conclusion based on these results would be premature, but the lack of an observed difference between the interpreter and control groups in this study calls into question the degree to which executive functions 1) are trained in SIs and 2) are able to modulate lower-level auditory processing in this population.
Due to the lack of a separate English monolingual control group, the difference in performance on the QuickSIN is difficult to interpret. SIs generally rated their own English listening fluency higher than non-SI bilinguals did, which was likely a factor in their better performance (as indicated by the significance of the covariate when included in the model). This self-rating of English listening ability may reflect an interpreter's confidence in their job, or it may be that better listening ability is what allows them to become interpreters. For this reason, the English monolingual group is a more suitable control, allowing SIs to be compared to a group with a more similar level of English listening ability. Alternatively, the bilingual group could be more closely matched in listening ability.
4.3 Future directions
Although this study yielded almost no significant results, a first look at the three-group comparison suggests that differences between interpreters and controls may emerge once the controls are properly split into bilinguals and English-speaking monolinguals. The first step will
be to complete this study and re-analyze the data at that stage to examine whether any between-group differences emerge. At the same time, matching the controls' languages as closely as possible to those of the SIs will help to reduce variability.
However, even then, the study risks being underpowered. The effect sizes observed here are much smaller than those in the studies originally used as reference points for the WCST; for example, while Yudes and colleagues (2011) observed a partial eta-squared of 0.18, which converts to a Cohen's f of 0.469, the effect observed here on the same measure corresponded to a Cohen's d of 0.22, which converts to a Cohen's f of 0.11. Detecting effects of the sizes observed here in a one-way ANOVA with three groups would require a much larger sample. Even with one of the larger effect sizes seen in this study (a Cohen's d of 0.73 on the QuickSIN, which converts to a Cohen's f of 0.365), 26 participants per group, for a total of 78, would be needed to detect the effect at α = 0.05 (calculated with G*Power; Faul et al., 2007).
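As a worked check on these conversions, partial eta-squared converts to Cohen's f via f = √(η² / (1 − η²)), and a two-group Cohen's d converts via f = d / 2. A minimal Python sketch reproducing the figures above (the sample-size estimate itself comes from G*Power, as cited, and is not recomputed here):

```python
import math

def eta_sq_to_f(eta_sq):
    """Convert partial eta-squared to Cohen's f: f = sqrt(eta^2 / (1 - eta^2))."""
    return math.sqrt(eta_sq / (1.0 - eta_sq))

def d_to_f(d):
    """Convert a two-group Cohen's d to Cohen's f: f = d / 2."""
    return d / 2.0

# Values reported in the text:
print(round(eta_sq_to_f(0.18), 3))  # Yudes et al. (2011) WCST effect -> 0.469
print(round(d_to_f(0.22), 3))       # WCST effect in this study      -> 0.11
print(round(d_to_f(0.73), 3))       # QuickSIN effect in this study  -> 0.365
```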
Secondly, as stated above, the tasks chosen in this study may not be suited to delineating any experience-dependent advantages conferred on SIs. Since SIs work primarily in the spoken medium, they may show an advantage on an auditory working memory task rather than a visual one, similar to what musicians have demonstrated (Pallesen et al., 2010). When SIs have been assessed for working memory ability, it has been with verbal recall (as reviewed in Köpke & Signorelli, 2011); thus, a task such as the visual n-back may not properly capture working memory effects in SIs. The WCST was originally designed to assess set shifting (Berg, 1948) and later grew into a measure of frontal lobe function (Milner, 1963); a task more specific to switching may therefore be more sensitive to advantages conferred on SIs. For example, Moradzadeh and colleagues (2014) used a Quantity/Identity task, which allowed them to measure switch costs in reaction time. Although bilinguals showed no advantage on this task in that study, it may be more sensitive to performance differences in SIs. The local switch cost in particular, which reflects switches between tasks within mixed-task blocks, may be reduced in SIs, who must regularly switch between languages as part of their job.
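To make the local switch cost concrete: within a mixed-task block, it is the difference between mean reaction time on switch trials (the task differs from the previous trial) and on repeat trials (the task is the same). A minimal sketch with invented task labels and reaction times, purely for illustration:

```python
from statistics import mean

def local_switch_cost(trials):
    """Mean RT on switch trials minus mean RT on repeat trials within a
    mixed-task block. Each trial is a (task_label, rt_ms) pair; the first
    trial has no predecessor and is not classified."""
    switch_rts, repeat_rts = [], []
    for (prev_task, _), (task, rt) in zip(trials, trials[1:]):
        (switch_rts if task != prev_task else repeat_rts).append(rt)
    return mean(switch_rts) - mean(repeat_rts)

# Hypothetical mixed block alternating Quantity (Q) and Identity (I) judgements:
block = [("Q", 520), ("Q", 500), ("I", 610), ("I", 540), ("Q", 630), ("Q", 515)]
cost = local_switch_cost(block)  # switch RTs: 610, 630; repeat RTs: 500, 540, 515
```

A reduced value of this measure in SIs relative to controls would be consistent with the prediction above.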
Finally, other forms of language expertise may demonstrate more perceptual attuning as predicted by the RHT. For example, phoneticians learn to transcribe speech sounds into the International Phonetic Alphabet, a skill acquired in adulthood (see the introduction of Golestani, Price, & Scott, 2011). Such a task, which involves focusing on the individual components of each speech sound, would arguably prompt more perceptual fine-tuning than simultaneous interpretation, in which sounds are assessed more at the word and sentence level. Another candidate group would be individuals who speak several languages with fine perceptual differences among their speech sounds, although assessing this would prove difficult.
Nevertheless, a comparison between phoneticians and interpreters could be framed as a comparison between language experts who have developed a focus on different aspects of language (the former on specific speech sounds, a more perceptual level of processing; the latter on word- and sentence-level processing), and differences in performance on perceptual and higher cognitive tasks could highlight a dissociation in how such processing resources are shared between domains.
5 Conclusion
Simultaneous interpreters have previously been proposed as a training analog to musicianship in the domain of language (Elmer et al., 2010). This study aimed to test this analogy using several tasks on which benefits in auditory perception and executive function have been associated with musicianship, under the hypothesis that if language and music access shared processing resources, expertise in the linguistic domain should confer advantages on domain-general tasks in a manner similar to musical expertise. SIs and non-interpreters were compared on several measures of auditory perception and executive function, and no significant differences were found between them except on one measure, which may be explained by another variable. This study, although incomplete, suggests that the analogy may not be reliable. Further research will be necessary to determine to what extent language and music share processing resources in the brain, and how different forms of expertise may or may not train such common resources.
References
Ahissar, M., Nahum, M., Nelken, I., & Hochstein, S. (2009). Reverse hierarchies and sensory
learning. Philosophical Transactions of the Royal Society B, 364, 285-299.
Asaridou, S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence
for shared domain-general mechanisms. Frontiers in Psychology, 4, 321.
doi:10.3389/fpsyg.2013.00321
Bailey, J. A., Zatorre, R. J., & Penhune, V. B. (2014). Early musical training is linked to gray
matter structure in the ventral premotor cortex and auditory-motor rhythm
synchronization performance. Journal of Cognitive Neuroscience, 26(4), 755-767.
Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn? A taxonomy
for far transfer. Psychological Bulletin, 128(4), 612-637.
Berg, E. A. (1948). A simple objective technique for measuring flexibility in thinking. Journal of
General Psychology, 39, 15-22.
Besson, M., Chobert, J., & Marie, C. (2011). Transfer of training between music and speech:
Common processing, attention, and memory. Frontiers in Psychology, 2, 94.
doi:10.3389/fpsyg.2011.00094
Bialystok, E. (1999). Cognitive complexity and attentional control in the bilingual mind. Child
Development, 70(3), 636-644.
Bialystok, E., Craik, F. I. M., Green, D. W., & Gollan, T. H. (2009). Bilingual minds.
Psychological Science in the Public Interest, 10(3), 89-129.
Bialystok, E., Craik, F. I. M., Klein, R., & Viswanathan, M. (2004). Bilingualism, aging and
cognitive control: Evidence from the Simon task. Psychology and Aging, 19(2), 290-303.
Bialystok, E., Craik, F. I. M., & Ryan, J. (2006). Executive control in a modified antisaccade
task: Effects of aging and bilingualism. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 32(6), 1341-1354.
Bialystok, E., & DePape, A. (2009). Musical expertise, bilingualism, and executive functioning.
Journal of Experimental Psychology: Human Perception and Performance, 35(2), 565-
574.
Bidelman, G. M., Gandour, J. T., & Krishnan, A. (2011). Musicians and tone-language speakers
share enhanced brainstem encoding but not perceptual benefits for musical pitch. Brain
and Cognition, 77(1), 1-10.
Bidelman, G. M., Hutka, S., & Moreno, S. (2013). Tone language speakers and musicians share
enhanced perceptual and cognitive abilities for musical pitch: Evidence for
bidirectionality between the domains of language and music. Plos One, 8(4), e60676.
doi:10.1371/journal.pone.0060676
Bidelman, G. M., Weiss, M. W., Moreno, S., & Alain, C. (2014). Coordinated plasticity in
brainstem and auditory cortex contributes to enhanced categorical speech perception in
musicians. European Journal of Neuroscience, 40, 2662-2673.
Chobert, J., François, C., Velay, J., & Besson, M. (2014). Twelve months of active musical
training in 8- to 10-year-old children enhances the preattentive processing of syllabic
duration and voice onset time. Cerebral Cortex, 24(4), 956-967.
Christoffels, I. K., de Groot, A. M. B., & Kroll, J. F. (2006). Memory and language skills in
simultaneous interpreters: The role of expertise and language proficiency. Journal of
Memory and Language, 54, 324-345.
Christoffels, I. K., de Groot, A. M. B., & Waldorp, L. J. (2003). Basic skills in a complex task: A
graphical model relating memory and lexical retrieval to simultaneous interpreting.
Bilingualism: Language and Cognition, 6(3), 201-211.
Cohen, M. A., Evans, K. K., Horowitz, T. S., & Wolfe, J. M. (2011). Auditory and visual
memory in musicians and nonmusicians. Psychonomic Bulletin and Review, 18(3), 586-
591.
Cooper, A., & Wang, Y. (2012). The influence of linguistic and musical experience on Cantonese word learning. Journal of the Acoustical Society of America, 131(6), 4756-4769.
Corrigall, K. A., Schellenberg, E. G., & Misura, N. M. (2013). Music training, cognition, and
personality. Frontiers in Psychology, 4, 222. doi:10.3389/fpsyg.2013.00222
Elmer, S., Klein, C., Kühnis, J., Liem, F., Meyer, M., & Jäncke, L. (2014). Music and language
expertise influence the categorization of speech and musical sounds: Behavioral and
electrophysiological measurements. Journal of Cognitive Neuroscience, 26(10), 2356-
2369.
Elmer, S., Meyer, M., & Jäncke, L. (2010). Simultaneous interpreters as a model for neuronal
adaptation in the domain of language processing. Brain Research, 1317, 147-156.
Elmer, S., Meyer, M., Marrama, L., & Jäncke, L. (2011). Intensive language training and
attention modulate the involvement of fronto-parietal regions during a non-verbal
auditory discrimination task. European Journal of Neuroscience, 34, 165-175.
Faul, F., Erdfelder, E., Lang, A., & Buchner, A. (2007). G*Power 3: A flexible statistical power
analysis program for the social, behavioral, and biomedical sciences. Behavior Research
Methods, 39(2), 175-191.
Fujioka, T., Ross, B., Kakigi, R., Pantev, C., & Trainor, L. J. (2006). One year of musical
training affects development of auditory cortical-evoked fields in young children. Brain,
129, 2593-2608.
Golestani, N., Price, C. J., & Scott, S. K. (2011). Born with an ear for dialects? Structural
plasticity in the expert phonetician brain. Journal of Neuroscience, 31(11), 4213-4220.
Hervais-Adelman, A., Moser-Mercer, B., Michel, C. M., & Golestani, N. (2014). fMRI of
simultaneous interpretation reveals the neural basis of extreme language control.
Cerebral Cortex. Advance online publication. doi:10.1093/cercor/bhu158
Huss, M., Verney, J. P., Fosker, T., Mead, N., & Goswami, U. (2011). Music, rhythm, rise time
perception and developmental dyslexia: Perception of musical meter predicts reading and
phonology. Cortex, 47, 674-689.
Hyde, K. L., Lerch, J., Norton, A., Forgeard, M., Winner, E., Evans, A. C., & Schlaug, G.
(2009). Musical training shapes structural brain development. Journal of Neuroscience,
29(10), 3019-3025.
Kaviani, H., Mirbaha, H., Pournaseh, M., & Sagan, O. (2014). Can music lessons increase the
performance of preschool children in IQ tests? Cognitive Processing, 15, 77-84.
Köpke, B., & Signorelli, T. M. (2011). Methodological aspects of working memory assessment
in simultaneous interpreters. International Journal of Bilingualism, 16(2), 183-197.
Kovács, Á. M., & Mehler, J. (2009). Flexible learning of multiple speech structures in bilingual
infants. Science, 325, 611-612.
Kraus, N., McGee, T., Carrell, T. D., King, C., Tremblay, K., & Nicol, T. (1995). Central
auditory system plasticity associated with speech discrimination training. Journal of
Cognitive Neuroscience, 7(1), 25-32.
Krizman, J., Marian, V., Shook, A., Skoe, E., & Kraus, N. (2012). Subcortical encoding of sound
is enhanced in bilinguals and relates to executive function advantages. Proceedings of the
National Academy of Sciences, 109(20), 7877-7881.
Loui, P., Wessel, D. L., & Hudson Kam, C. L. (2010). Humans rapidly learn grammatical
structure in a new musical scale. Music Perception, 27(5), 377-388.
Luk, G., Anderson, J. A., Craik, F. I., Grady, C., & Bialystok, E. (2010). Distinct neural
correlates for two types of inhibition in bilinguals: Response inhibition versus
interference suppression. Brain and Cognition, 74, 347-357.
Macnamara, B. N., & Conway, A. R. A. (2014). Novel evidence in support of the bilingual
advantage: Influences of task demands and experience on cognitive control and working
memory. Psychonomic Bulletin and Review, 21, 520-525.
Marie, C., Kujala, T., & Besson, M. (2012). Musical and linguistic expertise influence pre-
attentive and attentive processing of non-speech sounds. Cortex, 48, 447-457.
Milner, B. (1963). Effects of different brain lesions on card sorting. Archives of Neurology, 9(1),
90-100.
Moreno, S., Bialystok, E., Barac, R., Schellenberg, E. G., Cepeda, N. J., & Chau, T. (2011).
Short-term music training enhances verbal intelligence and executive function.
Psychological Science, 22(11), 1425-1433.
Moreno, S., & Bidelman, G. M. (2014). Examining neural plasticity and cognitive benefit
through the unique lens of musical training. Hearing Research, 308, 84-97.
Pallesen, K. J., Brattico, E., Bailey, C. J., Korvenoja, A., Koivisto, J., Gjedde, A., & Carlson, S.
(2010). Cognitive control in auditory working memory is enhanced in musicians. Plos
One, 5(6), e11120. doi:10.1371/journal.pone.0011120
Patel, A. D. (2008). Music, language and the brain. New York: Oxford University Press.
Patel, A. D. (2011). Why would musical training benefit the neural encoding of speech? The
OPERA hypothesis. Frontiers in Psychology, 2, 142. doi:10.3389/fpsyg.2011.00142
Patel, A. D. (2014). Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hearing Research, 308, 98-108.
Peretz, I., & Coltheart, M. (2003). Modularity of music processing. Nature Neuroscience, 6(7),
688-691.
Peretz, I., Gagnon, L., Hébert, S., & Macoir, J. (2004). Singing in the brain: Insights from
cognitive neuropsychology. Music Perception, 21(3), 373-390.
R Core Team. (2015). R: A language and environment for statistical computing. Vienna, Austria:
R Foundation for Statistical Computing.
Rhodes, M. G. (2004). Age-related differences in performance on the Wisconsin Card Sorting Test: A meta-analytic review. Psychology and Aging, 19(3), 482-494.
Rinne, J. O., Tommola, J., Laine, M., Krause, B. J., Schmidt, D., Kaasinen, V., Teräs, M., Sipilä,
H, & Sunnari, M. (2000). The translating brain: Cerebral activation patterns during
simultaneous interpreting. Neuroscience Letters, 294, 85-88.
Rochette, F., Moussard, A., & Bigand, E. (2014). Music lessons improve auditory perceptual and cognitive performance in children. Frontiers in Human Neuroscience, 8, 488. doi:10.3389/fnhum.2014.00488
Rodrigues, A. C., Loureiro, M. A., & Caramelli, P. (2013). Long-term musical training may
improve different forms of visual attention ability. Brain and Cognition, 82, 229-235.
Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants.
Science, 274, 1926-1928.
Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone
sequences by human infants and adults. Cognition, 70, 27-52.
Schellenberg, E. G. (2004). Music lessons enhance IQ. Psychological Science, 15(8), 511-514.
Song, J. H., Skoe, E., Wong, P. C. M., & Kraus, N. (2008). Plasticity in the adult human auditory
brainstem following short-term linguistic training. Journal of Cognitive Neuroscience,
20(10), 1892-1902.
Strait, D. L., O'Connell, S., Parbery-Clark, A., & Kraus, N. (2014). Musicians' enhanced neural
differentiation of speech sounds arises early in life: Developmental evidence from ages 3
to 30. Cerebral Cortex, 24(9), 2512-2521.
Tan, Y. T., McPherson, G. E., Peretz, I., Berkovic, S. F., & Wilson, S. J. (2014). The genetic
basis of music ability. Frontiers in Psychology, 5, 658. doi:10.3389/fpsyg.2014.00658
Tervaniemi, M., Jacobsen, T., Röttger, S., Kujala, T., Widmann, A., Vainio, M., Näätänen, R., &
Schröger, E. (2006). Selective tuning of cortical sound-feature processing by language
experience. European Journal of Neuroscience, 23, 2538-2541.
Tierney, A., Krizman, J., Skoe, E., Johnston, K., & Kraus, N. (2013). High school music classes
enhance the neural processing of speech. Frontiers in Psychology, 4, 855.
doi:10.3389/fpsyg.2013.00855
Tillman, B., & Poulin-Charronnat, B. (2010). Auditory expectations for newly-acquired
structures. The Quarterly Journal of Experimental Psychology, 63(8), 1646-1664.
Tzou, Y., Eslami, Z. R., Chen, H., & Vaid, J. (2011). Effect of language proficiency and degree
of formal training in simultaneous interpreters on working memory and interpreting
performance: Evidence from Mandarin-English speakers. International Journal of
Bilingualism, 16(2), 213-227.
Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nature Neuroscience,
10(4), 420-422.
Yudes, C., Macizo, P., & Bajo, T. (2011). The influence of expertise in simultaneous interpreting
on non-verbal executive processes. Frontiers in Psychology, 2, 309.
doi:10.3389/fpsyg.2011.00309
Zatorre, R. J. (2013). Predispositions and plasticity in music and speech learning: Neural
correlates and implications. Science, 342, 585-589.
Zatorre, R. J., & Baum, S. R. (2012). Musical melody and speech intonation: Singing a different
tune? PLoS Biology, 10(7), e1001372. doi:10.1371/journal.pbio.1001372
Zendel, B. R., & Alain, C. (2012). Musicians experience less age-related decline in central
auditory processing. Psychology and Aging, 27(2), 410-417.
Zuk, J., Benjamin, C., Kenyon, A., & Gaab, N. (2014). Behavioural and neural correlates of
executive functioning in musicians and non-musicians. Plos One, 9(6), e99868.
doi:10.1371/journal.pone.0099868
Appendix
Appendix A: Initial analysis with three groups
To follow the original design of the study, an analysis was run with the participants split into three groups: SIs (N = 17), non-SI bilinguals (N = 14), and English monolinguals (N = 3). These results should be considered preliminary due to the uneven distribution of participants across groups. One-way ANOVAs were conducted, with the between-subjects factor Group having three levels (SIs, bilinguals, and English monolinguals). Because the study is incomplete and severely underpowered statistically, these findings should be interpreted with caution.
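For reference, each omnibus test below is a one-way ANOVA, whose F statistic is the ratio of between-group to within-group mean squares. A minimal pure-Python sketch of the computation (the three groups of scores shown are invented solely to illustrate the formula, not data from this study):

```python
from statistics import mean

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of k samples:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)       # numerator df = k - 1
    ms_within = ss_within / (n_total - k)   # denominator df = N - k
    return ms_between / ms_within

# Hypothetical scores for three groups (e.g., SIs, bilinguals, monolinguals):
f_stat = one_way_anova_f([[5, 6, 7], [6, 7, 8], [9, 10, 11]])
```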
Task Performance
Sensory auditory processing: The omnibus ANOVA for the hearing threshold was not significant (F(2, 31) = 0.806, p = 0.457). No significant results were found with either the omnibus ANOVA for the mistuned harmonic task (F(2, 31) = 1.223, p = 0.308) or the gap detection task (F(2, 31) = 0.927, p = 0.407).
Cognitive auditory processing: The omnibus ANOVA for the QuickSIN was found to be
significant (F(2, 31) = 6.643, p = 0.004). When self-rated English listening ability was included
in the model, the main effect of Group remained significant (F(2, 30) = 7.530, p = 0.002).
No significant differences were found in performance on the pitch memory task (F(2, 31) =
1.148, p = 0.330). A second analysis was conducted after removing 4 native tone-language
speakers from the analysis (all in the bilingual group). Again, no significant differences were
found after these participants were removed (F(2, 27) = 0.332, p = 0.720).
Executive function: The 3 x 3 (Group x Condition) ANOVA revealed a significant main effect of Condition (F(2, 93) = 243.358, p < 0.001) but not of Group (F(2, 93) = 1.140, p = 0.324), and no Group x Condition interaction (F(4, 93) = 0.183, p = 0.947).
No significant differences were found in performance on the WCST, on either the total correct
(F(2, 31) = 0.231, p = 0.795) or number of categories completed (F(2, 31) = 0.279, p = 0.759).
As in the two-group analysis, a separate analysis was conducted using measures derived from
the original data that were similar to those of Yudes and colleagues (2011). Even so, no significant differences were found on the number of cards required to complete six categories (F(2, 31) = 0.590, p = 0.561) or on the number of errors (F(2, 31) = 0.191, p = 0.827).