Research Report

Neural substrates of knowledge of hand postures for object grasping and functional object use: Evidence from fMRI

Laurel J. Buxbaum a,b,⁎, Kathleen M. Kyle a, Kathy Tang c, John A. Detre c

a Moss Rehabilitation Research Institute, Korman 213, 1200 W. Tabor Road, Philadelphia, PA 19141, USA
b Thomas Jefferson University, Philadelphia, PA 19107, USA
c University of Pennsylvania, Philadelphia, PA 19104, USA

Brain Research 1117 (2006) 175–185
doi:10.1016/j.brainres.2006.08.010
© 2006 Elsevier B.V. All rights reserved.

⁎ Corresponding author. Moss Rehabilitation Research Institute, Korman 213, 1200 W. Tabor Rd., Philadelphia, PA 19141, USA. Fax: +1 215 456 5926. E-mail address: [email protected] (L.J. Buxbaum).

ARTICLE INFO
Article history: Accepted 3 August 2006; Available online 7 September 2006
Keywords: Hand posture; Tool; Grasp; Manipulation; Praxis; Gesture; Object knowledge; Tool use; Dorsal stream

ABSTRACT
A number of lines of evidence suggest that computation of hand posture differs for object grasping as compared to functional object use. Hand shaping for grasping appears to rely strongly upon calculations of current object location and volume, whereas hand shaping for object use additionally requires access to stored knowledge about the skilled manipulation specific to a given object. In addition, the particular hand postures employed for functional object use may be either prehensile (clenching, pinching) or non-prehensile (e.g., palming, poking), in contrast to the prehensile postures that are obligatory for grasping. In this fMRI study, we assessed the hypothesis that a left-hemisphere-lateralized system including the inferior parietal lobe is specifically recruited for the computation and recognition of hand postures for functional object use. Fifteen subjects viewed pictures of manipulable objects and determined whether they would be grasped with a pinch or clench (Grasp condition), functionally used with a pinch or clench (Prehensile Use condition), or functionally used with a palm or poke hand posture (Non-prehensile Use condition). Despite the fact that the conditions were equated for behavioral difficulty, significantly greater activations were observed in the left inferior frontal gyrus (IFG), posterior superior temporal gyrus (STG), and inferior parietal lobule (IPL) in Non-prehensile Use trials as compared to Grasp trials. Comparison of Non-prehensile Use and Prehensile Use activations revealed significant differences only in the left IPL. These data confirm the importance of the left IPL in storing knowledge of hand postures for functional object use, and have implications for understanding the interaction of dorsal and ventral visual processing systems.



1. Introduction

Consider the cognitive operations and hand movements entailed in grasping a pocket calculator and using it to perform a mathematical operation. The grasping movement requires calculation of the location of the calculator vis-à-vis the body and hand and the calculator's volume, with the aim of computing an appropriately scaled prehensile posture. The use operation, in contrast, requires knowledge about the precise part of the calculator affording performance of the desired function (e.g., the keys), and the movement of those parts in service of a non-prehensile poking movement of the forefinger.

In daily life, structural and functional information is likely to interact richly in our use of familiar objects. There is evidence, however, that in neuropsychological populations, performance in scaling hand posture for grasping versus functional object use may doubly dissociate. Patient L.L., reported by Sirigu et al. (1995), had a specific impairment of hand posture for using familiar objects, but performed flawlessly with the same objects in a reaching and grasping task. She was also unable to discriminate correct versus incorrect functional hand postures. In describing L.L.'s grasping performance with familiar objects, the authors note, "…the wrist's orientation matched the object's, and the size of the finger grip was highly correlated with the size of the grasped portion of the object (r=0.87). As these last results indicate, L.L.'s incorrect hand posture…seems to be specific to gestures involving the use of objects" (p. 46). The investigators go on to postulate deficits in knowledge "specific to the complex manual configurations associated with the use of tools" (p. 53).

These data are consistent with work from our laboratory (Buxbaum et al., 2003) showing that in patients with left inferior parietal damage, selection of grasp for use was consistently biased toward the posture appropriate for picking up the object. In other words, we observed preservation of grasp posture and disruption of use posture with familiar objects. We have also recently demonstrated that patients with ideomotor apraxia (IM) may be strikingly impaired in recognizing the hand posture component of functional object-related gestures performed by others (Buxbaum et al., 2005). These data indicate that the representations underlying functional hand posture knowledge are deficient in many cases of left inferior parietal IM.

In contrast to this is the pattern of performance observed in optic ataxia, a disorder of visually guided reaching often occurring subsequent to superior parietal and intraparietal sulcus damage. Glover (2004) posits that optic ataxia is a deficit specific to the on-line control of actions. Patients with optic ataxia reported by Perenin and Vighetto (1988), Jeannerod et al. (1994), and Jakobson et al. (1991) have severe problems in grasping a variety of objects, some familiar, while showing no evidence of hand posture deficits on functional object use tasks. Perenin and Vighetto (1988) specifically report apraxia in 4 of 10 optic ataxia patients; functional interaction with objects in the remaining 6 patients can probably be assumed to have been unremarkable, i.e., the patients were apparently able to position their hands appropriately for using objects. Supporting this possibility is a study from Karnath and Perenin (2005) reporting apraxia in 2 of 10 left hemisphere-lesioned and 0 of 6 right hemisphere-lesioned optic ataxics.

Behavioral studies in healthy subjects are also suggestive of differences in the programming of hand posture for use as compared to grasp. There are a number of lines of evidence that programming of hand shape for grasping to lift and move objects has very limited access to stored knowledge. For example, Gordon et al. (1993) demonstrated that when asked to lift objects of unexpected weights, healthy subjects' hand aperture is appropriate even on Trial 1. Thus, current visual information about object structure is used to scale grip aperture. On the other hand, vertical lifting force and grip force do not become appropriate until later trials, reflecting the incorporation of experience in programming force appropriate to lifting the experimental objects. This is consistent with several studies showing that memory-guided but not visually guided grasping of novel objects can be primed by passive viewing or grasping of primes of various shapes and orientations (Cant et al., 2005; Garofeanu et al., 2004). Thus, the visuomotor processes involved in programming hand shape for visually guided grasping appear to rely on "moment to moment" computations (Garofeanu et al., 2004, p. 55). This appears to be computationally efficient given that different exemplars of the same object may differ in size, current location, and orientation with respect to the subject.

Recently, Creem and Proffitt (2001) demonstrated that functional use postures are more likely than grasp postures to rely upon stored semantic information. These investigators showed that when subjects are asked to grasp familiar objects that are oriented away from their bodies (e.g., a pan with its handle far from the subject), they characteristically reach around the object to grasp its functional part. In contrast, when performing a secondary task that uses semantic resources (but, critically, not an equally difficult non-semantic secondary task), subjects more often fail to reach to the functional object part, instead grasping the closest edge. In a related study, Klatzky et al. (1987) demonstrated that healthy subjects could reliably rate which of 4 hand configurations (poke, pinch, palm, or clench) were associated with objects in three classes of functional context: hold/pick up, feel/touch, and use. Moreover, the investigators demonstrated that object structure alone was not sufficient to predict subjects' knowledge of the hand postures associated with use of the objects. For example, a discriminant function based on the shape of novel objects tended to assign certain shallow, flat real objects (e.g., nail, paperclip, zipper) to a "poke" category, although they are used functionally with a "pinch" movement. Thus, functional use knowledge in some cases superseded the hand posture predicted on the basis of object structure alone.

A recent functional magnetic resonance imaging (fMRI) study (Creem-Regehr and Lee, 2005) examined differences in neural activation for imagined grasping of familiar tools as compared to novel shapes, and found that imagined tool grasping activated the left inferior parietal lobe (angular gyrus) and left posterior middle temporal gyrus more than did imagined novel shape grasping. No distinction was made between tools for which grasp and use postures were the same versus different. In this study we sought to extend these findings by providing evidence on the role of neural structures involved in the cognitive representation of functional use hand postures when the task was a judgment not explicitly requiring imagined use. The Use task required a decision about whether a depicted object would be functionally used with a poke, pinch, palm, or clench, and the Grasp task entailed assessment of whether an object would be handed to someone with a pinch or clench. Note that the Use task entailed consideration of both prehensile and non-prehensile postures, whereas the Grasp task, of necessity, assessed only prehensile postures. Thus, the design enabled comparisons between Non-prehensile Use, Prehensile Use, and Grasp.


We had two strong predictions. First, on the assumption that cognitive judgments about hand–object interactions require object recognition and motor imagery, we predicted that both Grasp and Use (both Prehensile and Non-prehensile) should activate an overlapping left hemisphere network involved in (1) tool recognition, including left ventral premotor and posterior temporal gyrus (Chao and Martin, 2000; Gerlach et al., 2002; Grafton et al., 1996; Perani et al., 2001), and (2) planning and imagery for arm and hand movement, including premotor, prefrontal, left posterior parietal, intraparietal sulcus (IPS), and superior parietal lobule (SPL) regions (Creem-Regehr and Lee, 2005; Gerardin et al., 2000; Johnson et al., 2002; Johnson-Frey et al., 2005; Shmuelof and Zohary, 2005). Second, because the actions associated with Non-prehensile Use are unambiguously associated with object use, and not grasp, we expected that the comparison of Non-prehensile Use versus Grasp should reveal greater activation in the left IPL (Boronat et al., 2005; Buxbaum et al., 2003, 2005, in press; Buxbaum and Saffran, 2002; Johnson-Frey et al., 2005).

Several other comparisons afforded by the study design were more exploratory in nature. Based on the assumption that the activations in these tasks reflect imagery of a precise hand posture, one possibility was that the activations in Prehensile Use and Grasp would be highly similar, as both entail imagery of a pinch- or clench-shaped posture (Johnson et al., 2002; Johnson-Frey et al., 2005). On the other hand, based in part on evidence from patients that prehensile use postures may be impaired in the context of intact grasping (see Buxbaum et al., 2005 for discussion), we also considered that imagining a prehensile movement in the context of a Use decision might activate different regions than imagining a prehensile movement in the context of a Grasp decision. In that event, it might be the case that the Prehensile Use condition would more robustly activate ventral stream structures on the left, including the posterior temporal gyrus, as well as the IPL, whereas the Grasp task might reveal relatively more activation in dorsal stream structures including the anterior intraparietal sulcus (AIP) and bilateral superior parietal lobes (Buxbaum and Coslett, 1997, 1998; Culham et al., 2003; Perenin and Vighetto, 1988).

Fig. 1 – Effect sizes in a priori neuroanatomically determined ROIs for the contrasts of Grasp versus baseline and NPU versus baseline for the left hemisphere (top) and right hemisphere (second from top), and the contrasts of PU versus baseline and NPU versus baseline for the left hemisphere (third from top) and right hemisphere (bottom).

2. Results

2.1. Behavioral data

Accuracy and RT data for Prehensile Use (pooled Use/Pinch and Use/Clench trials, hereafter denoted PU), Non-prehensile Use (pooled Use/Poke and Use/Palm trials, hereafter NPU), and Grasp (Give/Pinch and Give/Clench) trials are shown in Table 1. Conditions were equivalent in terms of accuracy, F(2,42)=0.6, p=0.57, and RTs, F(2,42)=0.7, p=0.50. All participants performed above chance levels in all three conditions, as determined by the binomial test (z(96)>1.65, p<0.05).

Table 1 – Mean reaction time (ms), percent correct, and standard deviations (in parentheses) for the experimental conditions

              Prehensile use   Non-prehensile use   Grasp
RT (ms)       945 (102)        899 (105)            918 (109)
% Correct     88.3 (7.1)       90.9 (5.9)           89.2 (7.7)
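The above-chance criterion can be made concrete with the normal approximation to the binomial, the usual basis for a z statistic of this form. The sketch below is ours, not the authors' analysis code; the two-choice chance level of 0.5 and the function name are assumptions. It finds the smallest number of correct responses, out of 96 trials, that satisfies z > 1.65:

```python
import math

def min_correct_above_chance(n_trials: int, p_chance: float = 0.5,
                             z_crit: float = 1.65) -> int:
    """Smallest k with z = (k - n*p) / sqrt(n*p*(1-p)) > z_crit,
    using the normal approximation to the binomial distribution."""
    mu = n_trials * p_chance                       # expected correct by chance
    sigma = math.sqrt(n_trials * p_chance * (1.0 - p_chance))
    for k in range(n_trials + 1):
        if (k - mu) / sigma > z_crit:
            return k
    raise ValueError("criterion never exceeded")

print(min_correct_above_chance(96))  # → 57
```

With these parameters the criterion works out to 57 of 96 trials (about 59% correct), so mean accuracies of 88–91% in all three conditions are comfortably above chance.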

2.2. fMRI activation data

2.2.1. Individual neuroanatomic ROI analyses
Fig. 1 shows the mean t values (effect sizes) for each of the 10 left and right hemisphere individually drawn ROIs for the NPU–G contrast and the NPU–PU contrast.

This analysis revealed left hemisphere activation and right hemisphere deactivation in several regions. In the comparison of Grasp and NPU, three left hemisphere ROIs showed significantly greater activation in the NPU condition: IFG/MFG, STG/MTG, and IPL. In addition, right STG/MTG was significantly more deactivated in the Grasp condition. In the comparison of PU and NPU, significantly greater activation in the NPU condition was observed only in the left IPL. No other differences between any of the conditions were significant. Table 2 shows the Talairach coordinates of peak activations for the 5 left hemisphere ROIs.

2.2.2. Group analyses of within-condition contrasts
The second set of fMRI analyses was based on examination of whole-brain group map images, which showed activation for specific contrasts on a normalized brain. Table 3 shows the Talairach coordinates of the peak activation and the geometric center of activation in each condition.

2.2.2.1. Grasp–baseline. Two large areas of activation were observed in the left hemisphere: one in an area including the inferior frontal gyrus (IFG) and middle frontal gyrus (MFG), and a second including the occipital lobe, angular gyrus (AG) of the inferior parietal lobe, and intraparietal sulcus (IPS). There was also a significant right hemisphere area of activation in the MFG.

2.2.2.2. NPU–baseline. The pattern of activation was similar to that observed in the previous condition. However, the left occipito-parietal activation was even larger, comprising more than 21 ml. As in the previous analysis, there was a second large area of activation in the left IFG/MFG. There were no significant right hemisphere activations.

2.2.2.3. PU–baseline. In addition to the large left IFG/MFG and occipital–parietal activations seen in the previous analyses, there were smaller areas of activation in the thalamus, right AG, and right MFG.
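Each cluster in these whole-brain maps is summarized by two numbers in Table 3: the location of its peak (the voxel with the maximum t value) and its geometric center (the mean coordinate of the supra-threshold voxels). As an illustration only (hypothetical code, not the authors' pipeline; a real analysis operates on image volumes and converts voxel indices to Talairach coordinates via the image's affine transform), these summaries can be computed from a thresholded t map represented as a coordinate-to-value mapping:

```python
def cluster_summary(t_map, threshold):
    """Peak coordinate, max t, geometric center, and mean t of the
    coordinates whose t value exceeds `threshold`.

    `t_map` maps (x, y, z) coordinate tuples to t values."""
    above = {coord: t for coord, t in t_map.items() if t > threshold}
    peak = max(above, key=above.get)                 # coordinate of maximum t
    n = len(above)
    center = tuple(sum(c[i] for c in above) / n      # mean of each axis
                   for i in range(3))
    return peak, above[peak], center, sum(above.values()) / n
```

For example, a two-voxel cluster with t values 5.0 at (1, 1, 1) and 3.0 at (1, 2, 1) has its peak at (1, 1, 1) and its geometric center at (1.0, 1.5, 1.0).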

2.2.3. Functional ROI analysis of between-condition contrasts
The third set of analyses used the omnibus activations from the beta map for all conditions minus baseline (i.e., (PU+NPU+G)−B) to define functional ROIs. Fig. 2 shows the ROIs so identified, and Table 3 (bottom) shows statistically significant results of the between-condition contrasts.

Table 2 – Mean peak activations (Talairach coordinates) in left hemisphere ROIs drawn on individual brains

Condition            ROI         x     y    z   BA
Non-prehensile use   IFG/MFG    40    −3   31    6
                     IPL        41   −63   34   39
                     STG/MTG    42   −72   −6   37
                     IPS        35   −66   37    7
                     SPL        31   −66   38    7
Prehensile use       IFG/MFG    38    −1   34    6
                     IPL        42   −58   38   39
                     STG/MTG    42   −82   −5   37
                     IPS        37   −63   38    7
                     SPL        33   −63   39    7
Grasp                IFG/MFG    42    −1   31    6
                     IPL        44   −57   38   39
                     STG/MTG    44   −75   −6   37
                     IPS        39   −62   39    7
                     SPL        33   −65   41    7
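The logic of a functional ROI analysis is to pick voxels from an independent omnibus contrast and then compare conditions only within those voxels. A minimal sketch of that logic (our illustration, not the authors' pipeline; the data representation, names, and threshold are assumptions):

```python
def functional_roi_means(omnibus_t, condition_t_maps, threshold):
    """Define the ROI as the coordinates where the omnibus contrast
    (all conditions minus baseline) exceeds `threshold`, then return
    each condition's mean t value within that ROI.

    All maps are dicts from (x, y, z) coordinate tuples to t values."""
    roi = [coord for coord, t in omnibus_t.items() if t > threshold]
    return {name: sum(t_map[c] for c in roi) / len(roi)
            for name, t_map in condition_t_maps.items()}
```

Because the ROI is selected from the pooled (all-conditions) contrast rather than from any single condition, the subsequent between-condition comparison is not biased toward any one condition.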

2.2.3.1. NPU versus PU. Significantly greater activation in the NPU condition was observed in the left inferior and middle temporal gyri (ITG/MTG), the left IFG/MFG, and the right ITG/MTG. There was a trend toward greater NPU activation in the left AG and IPS. There were no regions with greater activation in the PU condition.

2.2.3.2. NPU versus Grasp. A trend (p=0.07) toward greater activation in the NPU condition was observed in the left AG and IPS. No regions were significantly more active in the Grasp condition.

3. General discussion

Performing cognitive judgments about hand postures appropriate for two different tasks – object grasping and functional object use – activates an overlapping network of regions in frontal, temporal, and parietal areas in the left hemisphere. These activations are consistent with those observed in numerous other neuroimaging studies of the recognition of tool stimuli (Binkofski et al., 1999; Boronat et al., 2005; Choi et al., 2001; Gerlach et al., 2002; Grafton et al., 1997; Perani et al., 1995, 1999; and see Creem-Regehr and Lee, 2005 for review), complex gesture planning/programming (Creem-Regehr and Lee, 2005; Grezes and Decety, 2002; Johnson-Frey et al., 2005; Muhlau et al., 2005; Rumiati et al., 2005; Tanaka and Inui, 2002; see Muhlau et al., 2005 for review), and imagined hand movements (Deiber et al., 1998; Gerardin et al., 2000; Moll et al., 2000; Stephan et al., 1995). Importantly, despite equation of the grasp and use judgment tasks for behavioral difficulty, two different analyses – one based on functional activation and the other on a priori neuroanatomic ROIs – revealed that judgments about non-prehensile functional object use (NPU) produce relatively greater activation than both prehensile use (PU) and Grasp judgments in several left hemisphere regions, including the inferior, superior, and middle temporal lobe, the inferior and middle frontal gyri, and the inferior parietal lobe/intraparietal sulcus. In both analyses, the left IPL emerged as a focus of activation for NPU judgments. In the analysis based on neuroanatomic ROIs, significant differences between NPU and PU were observed only in the left IPL. There was also greater right ITG activation in the NPU than the PU condition; the reasons for this are unclear, but it may reflect differences in the stimuli used in the two conditions.

In the comparisons of grasping and non-prehensile use, there were no regions uniquely activated for judgments about grasping. There are three possible explanations for this pattern of results. The first is that the representations used to calculate hand posture for grasping are a subset of those employed for object use. This seems implausible from an evolutionary perspective, given that a large number of vertebrate animals are capable of prehension (Bermejo and Zeigler, 1989; Martin et al., 1995; Metz and Io, 2000), but few are capable of skilled object use. Additionally, this would be inconsistent with the data from patients, reviewed earlier, showing that grasping of familiar objects and positioning the hand for functional use of familiar objects may doubly dissociate. However, this possibility cannot be ruled out at present. The second possibility is that computations for object grasping are automatically activated even in the Use task. The evidence with respect to the automaticity of grasp activations is mixed. Tucker and Ellis (1998), for example, demonstrated that response latencies on an object classification task were shorter when the objects' handles (irrelevant to the classification task) were pointing toward the responding hand. This demonstrates that laterality of hand choice is influenced by object structure, but does not speak to whether hand posture is automatically influenced by object structure. We (Pavese and Buxbaum, 2002) showed that distractor objects affording the same hand posture as a target object slowed responses to the target, but only when a reaching movement was planned and programmed, and not when the response was a button-press on the table-top. Additional studies will be required to further clarify the role of automatic activation of grasp representations on tasks such as the one used in the present study. The third possible reason for the absence of unique grasp activations is that the cognitive judgment task we employed did not fully activate the prehensile grasp system, but instead required participation of only a portion of that system, specifically, imagery and/or planning components that may be common to grasp and use tasks. We will expand upon this possibility below.

Table 3 – Clusters of activation for judgments about object grasping and functional use. Peak activation and geometric center are given in Talairach coordinates (x, y, z), with maximum and mean t values in parentheses.

Contrast / region                                            #Voxels   Peak x, y, z (max t)     Center x, y, z (mean t)

Grasp > baseline
  Bilateral occipital, left angular gyrus, left IPS          5168      36, −54, −21 (13.75)     3, −69, −1 (8.32)
  Left inferior and middle frontal gyri                      1066      48, 3, 30 (11.24)        38, 9, 29 (7.47)
  Right middle frontal gyrus                                 23        −33, 0, 27 (8.12)        −33, −1, 30 (6.72)

Baseline > grasp (deactivations)
  Bilateral mesial orbital frontal, right superior frontal   1405      −15, 36, 33 (−5.91)      −4, 45, 11 (−7.75)
  Right middle and superior temporal gyri                    186       −60, −9, −21 (−5.94)     −61, −25, −12 (−7.28)
  Left middle temporal gyrus                                 43        63, −27, 0 (−5.92)       61, −26, −9 (−6.29)
  Posterior cingulate gyrus                                  645       −3, −30, 48 (−5.90)      −1, −38, 34 (−6.79)
  Left superior frontal gyrus                                74        27, 24, 45 (−5.92)       22, 32, 45 (−6.42)
  Right angular gyrus                                        108       −63, −48, 21 (−5.91)     −59, −47, 30 (−7.06)
  Left angular gyrus                                         38        54, −54, 30 (−5.94)      56, −58, 29 (−6.45)

Non-prehensile use > baseline
  Bilateral occipital, left angular gyrus, left IPS          7743      36, −48, −24 (12.82)     1, −64, −3 (8.05)
  Left inferior and middle frontal gyri                      813       45, 3, 33 (9.74)         40, 9, 30 (6.92)

Baseline > non-prehensile use (deactivations)
  Bilateral mesial orbital frontal                           674       0, 48, 30 (−5.85)        −4, 45, 3 (−7.27)
  Right angular gyrus                                        26        −60, −45, 33 (−5.89)     −60, −50, 32 (−6.44)

Prehensile use > baseline
  Bilateral occipital, left angular gyrus, left IPS          5503      57, −30, 39 (11.49)      3, −68, 1 (7.11)
  Left inferior and middle frontal gyri                      1326      48, 0, 33 (11.89)        30, 7, 36 (6.79)
  Thalamus                                                   34        6, −24, −9 (6.06)        5, −26, −9 (5.71)
  Right angular gyrus                                        125       −27, −63, 57 (8.33)      −27, −61, 49 (6.48)
  Right middle frontal gyrus                                 98        −36, −12, 57 (7.78)      −36, −12, 55 (6.23)

Baseline > prehensile use (deactivations)
  Right superior temporal and supramarginal gyri             73        −57, −45, 30 (−5.49)     −58, −48, 34 (−6.17)
  Left supramarginal and angular gyri                        45        51, −66, 30 (−5.50)      55, −58, 30 (−6.25)
  Left middle temporal gyrus                                 26        60, −18, −12 (−5.49)     59, −23, −13 (−5.60)

Non-prehensile use > prehensile use
  Left angular gyrus and IPS (p=0.09)                        88        27, −63, 42 (2.55)       27, −63, 45 (1.37)
  Left inferior/middle temporal gyri                         786       51, −63, −9 (6.43)       36, −63, −15 (2.46)
  Left inferior and middle frontal gyri                      55        42, 3, 30 (4.22)         48, 3, 33 (2.91)
  Right inferior temporal gyrus                              764       −42, −57, −9 (4.37)      −36, −66, −15 (1.99)

Non-prehensile use > grasp
  Left angular gyrus and IPS (p=0.07)                        88        30, −60, 45 (3.04)       27, −63, 45 (1.36)

These data extend previous findings in several ways. First, they suggest that sensory and motor-related regions of cortex are activated not only when subjects explicitly imagine prehension of graspable objects based on object structure (e.g., Creem-Regehr and Lee, 2005), but even when cognitive judgments about grasp postures are required. Second, they permit a similar extension with respect to the computation of hand posture for skilled manipulation. Similar regions of the left angular gyrus were activated in several recent fMRI studies involving judgments or imagery of tool manipulation. Boronat et al. (2005) asked subjects to decide whether pictures of two objects (e.g., piano and computer keyboard) had the same manipulation. The angular gyrus site was more activated in such judgments than in judgments of the purpose (function) of objects. Johnson-Frey et al. (2005) also observed very similar left angular gyrus activations in a task involving planning of tool use movements. The similarity in activation loci across these studies suggests that the same underlying representations may be involved in cognitive judgments and in motor planning for skilled object use. This in turn indicates that the present tasks were probably solved through the use of motor imagery.

Fig. 2 – Top: three-dimensional rendering of significant cortical activation (red/yellow) and deactivation (blue/green) for the random effects group analyses for the subtraction of (PU+NPU+G)−B, p<0.005. Bottom: two-dimensional multi-slice view of all significant activation and deactivation for the same analysis.

This raises the question of the relationship of the judgment tasks we employed to real-life grasping and skilled use actions requiring motor output. Motor imagery has been associated with planning stages of motor production, and in particular, with internal models that predict the sensory consequences of motor commands and specify the motor commands required to achieve a given outcome. These models are modified on the basis of practice and experience, and are requisite for motor learning and for the generation of skilled actions (Rosenbaum et al., 1993; Wolpert, 1997; Wolpert et al., 2001). Execution stages of movement, in contrast, emphasize on-line control that is sensitive to current environmental conditions. In motor execution stages of action, sensory and proprioceptive information about the locations of effectors and targets is compared to internal models of action. We propose that the hand posture judgment tasks we employed engaged the movement planning system. It is possible that additional functional–neuroanatomic differences between use and grasping actions might emerge in tasks requiring execution.

The present data also provide support for previous neuropsychological evidence that patients with left IPL damage have particular deficits in tasks measuring access to (or the integrity of) NPU postures (e.g., poking a calculator, palming a bongo drum) (e.g., Buxbaum et al., 2003). The left IPL, we have suggested, is well positioned to mediate between representations of object identity calculated by the ventral stream and representations for action based on structural object properties calculated by the dorsal stream (Milner and Goodale, 1995). By conveying information about object identity to the dorsal stream, the left IPL allows selection of movements that are appropriate to an object's category membership, rather than merely to its structural attributes.

We may speculate that the skilled object-related action representations computed in the left IPL may be an elaboration of representations of hand–object prehensile interactions encoded by evolutionarily more primitive systems in the dorsal stream, including the anterior portion of the IPS (the anterior intraparietal sulcus, AIP) and the inferior prefrontal lobe (F5 in monkeys, with a possible homologue in areas 44 and 45 in humans) (Fagg and Arbib, 1998; Gallese et al., 1997; Murata et al., 1996, 1997, 2000). In the monkey, this circuit is active in response to the sight of manipulable objects, to views of others (conspecifics or humans) manipulating objects, and when the monkey itself manipulates objects (Buccino et al., 2004; Buxbaum et al., 2005; Fogassi et al., 1998; Rizzolatti and Fadiga, 1998; Rizzolatti et al., 2002; Rizzolatti and Luppino, 2001). Critically, the observed activation is not invariably tied to objects' meaningful function, but has been observed even with novel shapes. Unlike the grasp system, which may be preferentially engaged by the presence of objects, the IPL-based functional use system may store abstracted representations of hand–object interactions that can be used in pantomime, motor imagery, and delayed action tasks (Buxbaum et al., 2005; Milner et al., 2001).

We have previously suggested that IM patients with IPL lesions exhibit a pathological over-reliance on grasp actions afforded by structural object properties (Buxbaum et al., 2003). Recent evidence from healthy subjects suggests that use and grasp actions are triggered concomitantly in tasks requiring an object-related response, and that each type of action may interfere with the other (Bub et al., 2003; Bub et al., submitted for publication). Bub and colleagues have suggested that in the case of objects for which use and grasp hand postures are in conflict (i.e., NPU objects), skilled use may require the selection of use actions over the grasp and pinch actions associated with prehension (Bub et al., submitted for publication). The IPL may play a critical role in this selection process. Future investigations in patients with IPL damage explicitly pitting use and grasp actions against one another may be useful in assessing this hypothesis.

Fig. 3 – Examples of the photographs used in the experiment. On Prehensile Use (PU) trials, use and grasp of the pictured object were both associated with the same prehensile hand posture (e.g., hammer), whereas on Non-prehensile Use (NPU) trials, use of the pictured object was associated with a non-prehensile posture and grasp with a prehensile posture (e.g., computer keyboard).

4. Experimental procedure

4.1. Participants

Fifteen healthy participants from the university community participated in the study (9 males, 6 females; mean age = 22.27 years, range: 18–31). All participants were right handed and had normal or corrected-to-normal vision. None had a history of neurologic or psychiatric symptoms. All gave informed consent to participate in accordance with the guidelines of the Albert Einstein Healthcare Network and the University of Pennsylvania. Participants were paid for their participation.

4.2. Stimulus materials and procedure

4.2.1. Piloting of materials

We performed a pilot study to select stimuli for the fMRI task. Twelve paid volunteers participated (9 females, 3 males; mean age, 22 years, range 21–27 years; mean education, 17 years, range 16–21 years; all right handed). Participants saw color photographic images of manipulable objects presented singly on a computer screen and made decisions about how the objects were grasped or used. There were thirty-six blocks of trials, presented in 6 runs of 6 blocks each, with 20 trials in a block, for a total of 720 trials.

Grasp and Use conditions were each comprised of several block types. In half the Grasp blocks, subjects indicated whether the viewed object would be handed to someone with a pinch (example “yes” item: pencil), and in the remaining Grasp blocks, whether the object would be handed to someone with a clench (example “yes” item: drinking glass). In the four types of Use blocks, subjects indicated either whether the object would be used with a clench (example “yes” item: clothes iron), used with a pinch (example “yes” item: nail clipper), used with a palm (example “yes” item: bongo drum), or used with a poke (example “yes” item: doorbell) (see Fig. 3). Each object appeared 4 times over the course of the experiment's 6 runs, twice as a “Yes” item and twice as a “No” item. Each block began with a 1000-ms cue designating the decision to be made (e.g., Would you use this object with a poke?). Trials began with a 2000-ms fixation cross, followed by stimulus presentation with a maximum 10,000-ms duration; subject responses before the end of the 10,000 ms served to advance the experiment to the next trial. Participants were instructed to push the right button to indicate a “Yes” response, and the left button to indicate a “No” response. Throughout each trial, the cue question (e.g., “USE PINCH?”) appeared to the left of each pictured object as a reminder. Prior to the beginning of the experiment, participants received both oral and written instructions, and there were 4 practice trials at the beginning of each run.

4.2.2. Imaging task

Reaction time and accuracy data from the pilot study were used to select items for the imaging task. The goal of the selection process was to equate the experimental conditions in terms of accuracy and response time. There were 96 trials each in GRASP/pinch, GRASP/clench, USE/pinch, USE/clench, USE/palm, and USE/poke blocks (48 Yes; 48 No). There were also 224 baseline trials consisting of the cue “LEFT” or “RIGHT” followed by a scrambled picture constructed by digitizing the original drawings, segmenting them into 1-cm squares, and randomly displacing the squares within the boundaries of the original drawing.
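The square-scrambling construction for the baseline pictures can be sketched as follows. This is our illustrative reconstruction, not the authors' code: the function name and numpy image representation are assumptions, and the block edge length in pixels stands in for the 1-cm squares described above.

```python
import numpy as np

def scramble_image(img: np.ndarray, block: int, rng=None) -> np.ndarray:
    """Scramble an image by cutting it into square blocks and randomly
    permuting the blocks within the original image boundaries."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    # Crop to a multiple of the block size for simplicity.
    h, w = (h // block) * block, (w // block) * block
    img = img[:h, :w]
    rows, cols = h // block, w // block
    # Cut into blocks, permute the block order, and reassemble.
    blocks = [img[r*block:(r+1)*block, c*block:(c+1)*block]
              for r in range(rows) for c in range(cols)]
    order = rng.permutation(len(blocks))
    out = np.empty_like(img)
    for i, j in enumerate(order):
        r, c = divmod(i, cols)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = blocks[j]
    return out
```

Because the blocks are only displaced, the scrambled image preserves the original's low-level pixel statistics while destroying its object identity.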

There were 40 eight-trial experimental blocks for a total of 320 experimental trials, and 28 eight-item baseline blocks, completed over 4 runs. Order of blocks was counterbalanced within a run.

Fig. 4 provides a schematic of the sequence of events in the experimental trials. During each block, 8 stimuli were presented (300-ms fixation cross, 1600-ms stimulus). Each block began with a cue screen (1000-ms duration) indicating the question to which the participant had to respond. The cue also appeared on the left-hand side of the object throughout the trial to serve as a reminder to the participant. In version A, participants pushed the right button if the answer was “yes” and the left button if the answer was “no”. In version B, this mapping was reversed. The 2 versions were randomized across subjects. On baseline trials subjects pressed the left or right response key depending on the instruction.
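For concreteness, the timing of one experimental block (a 1000-ms cue followed by eight trials of a 300-ms fixation cross and a 1600-ms stimulus) can be laid out as a simple event list. This sketch is ours, not the original Psyscope script; the function and event names are assumptions.

```python
def block_schedule(cue="USE PINCH?", n_trials=8,
                   cue_ms=1000, fix_ms=300, stim_ms=1600):
    """Return (events, total_ms) for one imaging-task block.
    Each event is (label, onset_ms, duration_ms)."""
    events, t = [("cue", 0, cue_ms)], cue_ms
    for i in range(n_trials):
        events.append(("fixation", t, fix_ms))
        t += fix_ms
        events.append((f"stimulus_{i}", t, stim_ms))
        t += stim_ms
    return events, t
```

With the default parameters a block lasts 1000 + 8 × (300 + 1600) = 16,200 ms.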

Participants received the same oral and written instructions as pilot participants and completed 4 practice trials for each experimental condition and 8 practice trials for the baseline condition before beginning the scanning portion of the experiment.

4.3. fMRI acquisition

Images were acquired with a Siemens Trio 3 T MRI scanner using the product head coil. High-resolution 3D T1-weighted MPRAGE images were acquired for anatomical localization. Functional images were acquired using a gradient-echo echoplanar imaging pulse sequence with blood oxygen level-dependent (BOLD) contrast using the following parameters: TR/TE 3000 ms/30 ms, 64×64 matrix size in a 192×192 FOV, flip angle of 90°, and in-plane resolution of 3 mm, for 21 contiguous 3-mm axial slices. To approach brain tissue steady-state magnetization, 4 dummy volumes (12 s) were acquired before functional scanning began. These data were not included in the analysis. Each participant had four runs of 142 TRs (7 min, 5 s) apiece. Across all runs, participants had 568 data points per voxel.

Fig. 4 – Experimental sequence of events.

Stimulus presentation was generated using a Macintosh G3 computer and programmed in Psyscope v.1.0.2. Stimuli were rear projected from an Epson 8100 3-LCD projector to a screen behind the magnet. The participant viewed this screen through an angled mirror attached to the head coil. Foam padding attached to the head coil was used to restrict head movement. The lights in the scanning room were turned off. A fiber optic response pad (fORP, Current Designs) was used to record participant responses and reaction times.

4.4. Data analysis

Functional data processing and analysis were performed on Linux workstations using the VoxBo fMRI data analysis package (www.voxbo.org). BOLD volumes from each participant were realigned using rigid body coordinate transformation and sinc interpolation to bring each volume into registration with that participant's first functional volume (Friston et al., 1995). Movement parameter estimates from the realignment procedure were under 3-mm translation and 2° rotation for all participants. Data were smoothed with a 3D Gaussian filter (4×4×4-mm FWHM) and normalized to the MNI template (Collins, 1994).
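As a side note on the smoothing parameters, a 4-mm FWHM kernel on 3-mm voxels corresponds to a Gaussian sigma of roughly 0.57 voxels, via the standard relation FWHM = 2·sqrt(2·ln 2)·sigma. A one-line helper (our illustration, not VoxBo code; the function name is an assumption):

```python
import numpy as np

def fwhm_to_sigma_vox(fwhm_mm: float, voxel_mm: float) -> float:
    # FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian kernel;
    # divide by voxel size to express sigma in voxel units.
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
```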

Time series data were filtered in the frequency domain to remove the single highest frequency and low frequencies up to but not including the fundamental task frequency, and smoothed using an empirically derived estimate of the hemodynamic response (Aguirre et al., 1998). Individual participant data were modeled using the modified General Linear Model (mGLM) (Worsley and Friston, 1995) and a 1/f estimate of intrinsic autocorrelation (Aguirre et al., 1997). The model included covariates for Grasp, Prehensile Use, and Non-prehensile Use.
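The frequency-domain filtering step can be sketched as follows. This is a generic illustration in the spirit of the description above, not the VoxBo implementation; the function name and the choice to keep the signal mean (DC) are our assumptions.

```python
import numpy as np

def filter_timeseries(ts, tr_s, task_period_s):
    """Zero out low frequencies up to (but not including) the fundamental
    task frequency, plus the single highest frequency, then invert."""
    n = len(ts)
    freqs = np.fft.rfftfreq(n, d=tr_s)
    spec = np.fft.rfft(ts)
    f_task = 1.0 / task_period_s
    # Keep DC and everything at or above the task frequency...
    keep = (freqs >= f_task - 1e-12) | (freqs == 0.0)
    keep[-1] = False  # ...but drop the single highest frequency.
    spec[~keep] = 0.0
    return np.fft.irfft(spec, n)
```

Slow scanner drift (below the task frequency) is removed while the task-locked signal passes through unchanged.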



We drew ten a priori neuroanatomically defined regions of interest (ROIs; 5 in each hemisphere) (see Table 2) on individual participant brains. These neuroanatomic ROIs were separately overlaid on non-thresholded beta maps for PU-B, NPU-B, and G-B contrasts for each participant, and mean effect sizes (t values) were calculated for each ROI. Group analyses of the NPU-PU and NPU-G contrasts of interest were conducted by performing paired t-tests on the effect sizes. (For similar methods using effect sizes, see Kable et al., 2002; Postle et al., 2000; Thompson-Schill et al., 1999.) One advantage of this analysis is that individual brains need not be normalized to a common template, thus increasing neuroanatomic accuracy for each subject. One drawback is the loss of spatial resolution that results from averaging across all voxels within the ROI. Because of the complementary strengths and weaknesses of the two approaches, we performed both.

We also performed whole-brain analyses as follows. On a mapwise level, beta maps were constructed for Prehensile Use–Baseline (PU-B), Non-prehensile Use–Baseline (NPU-B), Grasp–Baseline (G-B), PU-G, NPU-G, and PU-NPU contrasts. Group maps were constructed by second-tier analysis of beta maps, comparing the mean beta coefficient across participants to 0 by t-test, for each voxel. Group maps were thresholded by permutation analysis (Nichols and Holmes, 2002), using 1000 sign permutations of the participant beta maps to generate a null hypothesis distribution of peak t values. The 95th percentile peak t value was used to threshold the group maps for a mapwise false positive rate of 0.05.
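The sign-permutation thresholding procedure (Nichols and Holmes, 2002) can be sketched in a few lines. This is a generic illustration of the max-statistic approach, not the actual analysis code; the function name and the subjects × voxels array layout are assumptions.

```python
import numpy as np

def signflip_threshold(betas, n_perm=1000, alpha=0.05, rng=None):
    """betas: subjects x voxels array of beta estimates. Under H0 each
    subject's map may have its sign flipped; the (1 - alpha) quantile of
    the peak |t| over permutations gives a mapwise threshold."""
    rng = np.random.default_rng() if rng is None else rng
    n_sub = betas.shape[0]

    def tmap(data):
        # One-sample t statistic against 0, per voxel.
        m = data.mean(axis=0)
        se = data.std(axis=0, ddof=1) / np.sqrt(n_sub)
        return m / se

    peaks = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        peaks[i] = np.abs(tmap(signs * betas)).max()
    return np.quantile(peaks, 1.0 - alpha)
```

Because the threshold is derived from the peak statistic over the whole map, voxels exceeding it are controlled for familywise error at the mapwise alpha level.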

In the whole-brain analysis just described, no voxels survived correction for multiple comparisons in the between-condition contrasts (PU-G, NPU-G, and PU-NPU). To increase sensitivity, we also conducted a data-based ROI analysis in which we used the omnibus activations from the beta map for all conditions minus baseline, (PU+NPU+G)-B, arbitrarily thresholded to a mapwise criterion of 0.005, to define functional ROIs.1 For each subject, effect sizes were estimated for each between-condition contrast (e.g., PU-NPU), independently for each voxel. Between-condition differences were assessed by averaging effect sizes across the region separately for each subject, and testing the difference between the two conditions with paired t-tests across subjects.
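The ROI comparison just described — average each subject's voxelwise effect sizes over the region for two conditions, then compare conditions with a paired t-test across subjects — can be sketched as follows. This is our illustration under assumed subjects × voxels arrays; the function name is hypothetical.

```python
import numpy as np

def roi_paired_t(effect_a, effect_b, roi_mask):
    """effect_a, effect_b: subjects x voxels effect-size arrays for two
    conditions; roi_mask: boolean voxel mask defining the ROI.
    Returns the paired t statistic across subjects."""
    a = effect_a[:, roi_mask].mean(axis=1)  # one ROI mean per subject
    b = effect_b[:, roi_mask].mean(axis=1)
    d = a - b
    n = len(d)
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

Averaging within the ROI before testing trades spatial resolution for sensitivity, which is exactly the tradeoff noted for the a priori ROI analysis above.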

Acknowledgments

Supported by NIH R01-NS36387, R01-NS045839, the Northeast Cognitive Rehabilitation Research Network (NCRRN), and the Albert Einstein Society of Albert Einstein Healthcare Network, Philadelphia, PA.

R E F E R E N C E S

1 The threshold used to determine functional ROIs is arbitrary. We also performed the analysis by defining ROIs at a threshold of 0.01, and the results were highly similar.

Aguirre, G., Zarahn, E., D'Esposito, M., 1997. Empirical analysis of BOLD fMRI statistics: II. Spatially smoothed data collected under the null-hypothesis and experimental conditions. NeuroImage 5, 199–212.

Aguirre, G.K., Zarahn, E., D'Esposito, M., 1998. The variability of human BOLD hemodynamic responses. NeuroImage 8, 360–369.

Bermejo, R., Zeigler, H., 1989. Prehension in the pigeon: II. Kinematic analysis. Exp. Brain Res. 75, 577–585.

Binkofski, F., Buccino, G., Posse, S., Seitz, R.J., Rizzolatti, G., Freund, H.-J., 1999. A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur. J. Neurosci. 11, 3276–3286.

Boronat, C., Buxbaum, L., Coslett, H., Tang, K., Saffran, E., Kimberg, D., Detre, J., 2005. Distinctions between manipulation and function knowledge of objects: evidence from functional magnetic resonance imaging. Cogn. Brain Res. 23, 361–373.

Bub, D., Masson, M., Bukach, C., 2003. Gesturing and naming: the use of functional knowledge in object identification. Psychol. Sci. 14, 467–472.

Bub, D., Masson, M., Cree, G., submitted for publication. Evocation of functional and volumetric gestural knowledge by objects and words.

Buccino, G., Binkofski, F., Riggio, L., 2004. The mirror neuron system and action recognition. Brain Lang. 89, 370–376.

Buxbaum, L.J., Coslett, H.B., 1997. Subtypes of optic ataxia: reframing the disconnection account. Neurocase 3, 159–166.

Buxbaum, L.J., Coslett, H.B., 1998. Spatio-motor representations in reaching: evidence for subtypes of optic ataxia. Cogn. Neuropsychol. 15, 279–312.

Buxbaum, L.J., Saffran, E.M., 2002. Knowledge of object manipulation and object function: dissociations in apraxic and non-apraxic subjects. Brain Lang. 82, 179–199.

Buxbaum, L.J., Sirigu, A., Schwartz, M.F., Klatzky, R.L., 2003. Cognitive representations of hand posture in ideomotor apraxia. Neuropsychologia 41, 1091–1113.

Buxbaum, L., Kyle, K., Menon, R., 2005. On beyond mirror neurons: internal representations subserving production and recognition of skilled object-related actions in humans. Cogn. Brain Res. 25.

Buxbaum, L.J., Kyle, K., Grossman, M., Coslett, H.B., in press. Left inferior parietal representations for skilled hand–object interactions: evidence from stroke and corticobasal degeneration. Cortex.

Cant, J.S., Westwood, D.A., Valyear, K.F., Goodale, M.A., 2005. No evidence for visuomotor priming in a visually guided action task. Neuropsychologia 43, 216–226.

Chao, L.L., Martin, A., 2000. Representation of manipulable man-made objects in the dorsal stream. NeuroImage 12, 478–484.

Choi, S., Na, D., Kang, E., Lee, K., Lee, S., Na, D., 2001. Functional magnetic resonance imaging during pantomiming tool-use gestures. Exp. Brain Res. 139, 311–317.

Collins, D.L., 1994. 3D Model-based Segmentation of Individual Brain Structures from Magnetic Resonance Imaging Data. McGill University, Montreal, Canada.

Creem, S.H., Proffitt, D.R., 2001. Grasping objects by their handles: a necessary interaction between cognition and action. J. Exp. Psychol. Hum. Percept. Perform. 27, 218–228.

Creem-Regehr, S., Lee, J., 2005. Neural representations of graspable objects: are tools special? Cogn. Brain Res. 22, 457–469.

Culham, J., Danckert, S., De Souza, J., Gati, J., Menon, R., Goodale, M., 2003. Visually guided grasping produces fMRI activation in dorsal but not ventral stream areas. Exp. Brain Res. 153.

Deiber, M.P., Ibanez, V., Honda, M., Sadato, N., Raman, R., Hallett, M., 1998. Cerebral processes related to visuomotor imagery and generation of simple finger movements studied with positron emission tomography. NeuroImage 7, 73–85.

Fagg, A., Arbib, M.A., 1998. Modeling parietal–premotor interactions in primate control of grasping. Neural Netw. 11, 1277–1303.

Fogassi, L., Gallese, V., Fadiga, L., Rizzolatti, G., 1998. Neurons responding to the sight of goal-directed hand/arm actions in the parietal area PF (7b) of the macaque monkey (abstract). Abstr.-Soc. Neurosci. 24, 654.

Friston, K., Ashburner, J., Poline, J., Frith, C., Heather, J., Frackowiak, R., 1995. Spatial registration and normalization of images. Hum. Brain Mapp. 2, 165–189.

Gallese, V., Fadiga, L., Fogassi, L., Luppino, G., Murata, A., 1997. A parietal–frontal circuit for hand grasping movements in the monkey: evidence from reversible inactivation experiments. In: Thier, P., Karnath, H.-O. (Eds.), Parietal Lobe Contributions to Orientation in 3-D Space. Springer-Verlag, Berlin.

Garofeanu, C., Kroliczak, G., Goodale, M.A., Humphrey, G.K., 2004. Naming and grasping common objects: a priming study. Exp. Brain Res. 159, 55–64.

Gerardin, E., Sirigu, A., Lehericy, S., Poline, J., Gaymard, B., Agid, Y., Le Bihan, D., 2000. Partially overlapping neural networks for real and imagined hand movements. Cereb. Cortex 10, 1093–1094.

Gerlach, C., Law, I., Paulson, O., 2002. When action turns into words. Activation of motor-based knowledge during categorization of manipulable objects. J. Cogn. Neurosci. 14, 1230–1239.

Glover, S., 2004. Separate visual representations in the planning and control of action. Behav. Brain Sci. 27, 3–24 (discussion 24–78).

Gordon, A.M., Westling, G., Cole, K.J., Johansson, R.S., 1993. Memory representations underlying motor commands used during manipulation of common and novel objects. J. Neurophysiol. 69, 1789–1796.

Grafton, S., Arbib, M.A., Fadiga, L., Rizzolatti, G., 1996. Localization of grasp representations in humans by positron emission tomography: II. Observation compared with imagination. Exp. Brain Res. 112, 103–111.

Grafton, S.T., Fadiga, L., Arbib, M.A., Rizzolatti, G., 1997. Premotor cortex activation during observation and naming of familiar tools. NeuroImage 6, 231–236.

Grezes, J., Decety, J., 2002. Does visual perception of objects afford action: evidence from a neuroimaging study. Neuropsychologia 40, 212–222.

Jakobson, L.S., Archibald, Y.M., Carey, D.P., Goodale, M.A., 1991. A kinematic analysis of reaching and grasping movements in a patient recovering from optic ataxia. Neuropsychologia 29, 803–809.

Jeannerod, M., Decety, J., Michel, F., 1994. Impairment of grasping movements following a bilateral posterior parietal lesion. Neuropsychologia 32, 369–380.

Johnson, S.H., Rotte, M., Grafton, S.T., Hinrichs, H., Gazzaniga, M.S., Heinze, H.-J., 2002. Selective activation of a parieto-frontal circuit during implicitly imagined prehension. NeuroImage 17, 1693–1704.

Johnson-Frey, S., Newman-Norlund, R., Grafton, S., 2005. A distributed network in the left cerebral hemisphere for planning everyday tool use skills. Cereb. Cortex 15, 681–695.

Kable, J., Lease-Spellmeyer, J., Chatterjee, A., 2002. Neural substrates of action event knowledge. J. Cogn. Neurosci. 14, 795–805.

Karnath, H.O., Perenin, M.T., 2005. Cortical control of visually guided reaching: evidence from patients with optic ataxia. Cereb. Cortex 15, 1561–1569.

Klatzky, R.L., McCloskey, B., Doherty, S., Pellegrino, J.E.A., 1987. Knowledge about hand shaping and knowledge about objects. J. Mot. Behav. 19 (2), 187–213.

Martin, J., Cooper, S., Ghez, C., 1995. Kinematic analysis of reaching in the cat. Exp. Brain Res. 102, 379–392.

Metz, G., Whishaw, I., 2000. Skilled reaching an action pattern: stability in rat (Rattus norvegicus) grasping movements as a function of changing food pellet size. Behav. Brain Res. 116, 111–122.

Milner, A.D., Goodale, M.A., 1995. The Visual Brain in Action. Oxford Univ. Press, Oxford.

Milner, A.D., Dijkerman, H.C., Pisella, L., McIntosh, R.D., Tilikete, C., Vighetto, A., Rossetti, Y., 2001. Grasping the past: delay can improve visuomotor performance. Curr. Biol. 11, 1896–1901.

Moll, J., De Oliveira-Souza, R., Passman, L.J., Cunha, F.C., Souza-Lima, F., Andreiuolo, P.A., 2000. Functional MRI correlates of real and imagined tool-use pantomimes. Neurology 54, 1331–1336.

Muhlau, M., Hermsdorfer, J., Goldenberg, G., Wohlschlager, A.M., Castrop, F., Stahl, R., Rottinger, M., Erhard, P., Haslinger, B., Ceballos-Baumann, A., Conrad, B., Boecker, H., 2005. Left inferior parietal dominance in gesture imitation: an fMRI study. Neuropsychologia 43, 1086–1098.

Murata, A., Gallese, V., Kaseda, M., Sakata, H., 1996. Parietal neurons related to memory-guided hand manipulation. J. Neurophysiol. 75, 2180–2186.

Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V., Rizzolatti, G., 1997. Object representation in the ventral premotor cortex (area F5) of the monkey. J. Neurophysiol. 78, 2226–2230.

Murata, A., Gallese, V., Luppino, G., Kaseda, M., Sakata, H., 2000. Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J. Neurophysiol. 83, 2580–2601.

Nichols, T.E., Holmes, A.P., 2002. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum. Brain Mapp. 15, 1–25.

Pavese, A., Buxbaum, L., 2002. Action matters: the role of action plans and object affordances in selection for action. Vis. Cogn. 9, 559–590.

Perenin, M.T., Vighetto, A., 1988. Optic ataxia: a specific disruption in visuomotor mechanisms. Brain 111, 643–674.

Perani, D., Cappa, S.F., Bettinardi, V., Bresi, S., Gorno-Tempini, M., Matarrese, M., Fazio, F., 1995. Different neural systems for the recognition of animals and man-made tools. NeuroReport 6, 1637–1641.

Perani, D., Schnur, T., Tettamanti, M., Gorno-Tempini, M., Cappa, S.F., Fazio, F., 1999. Word and picture matching: a PET study of semantic category effects. Neuropsychologia 37, 293–306.

Perani, D., Fazio, F., Borghese, N.A., Tettamanti, M., Ferrari, S., Decety, J., Gilardi, M.C., 2001. Different brain correlates for watching real and virtual hand actions. NeuroImage 14, 749–758.

Postle, B., Zarahn, E., D'Esposito, M., 2000. Using event-related fMRI to assess delay-period activity during performance of spatial and nonspatial working memory tasks. Brain Res. Protoc. 5 (1), 57–66.

Rizzolatti, G., Fadiga, L., 1998. Grasping objects and grasping action meanings: the dual role of monkey rostroventral premotor cortex (area F5). Sensory Guidance of Movement.

Rizzolatti, G., Luppino, G., 2001. The cortical motor system. Neuron 31, 889–901.

Rizzolatti, G., Fogassi, L., Gallese, V., 2002. Motor and cognitive functions of the ventral premotor cortex. Curr. Opin. Neurobiol. 12, 149–154.

Rosenbaum, D., Engelbrecht, S., Bushe, M., Loukopoulos, L., 1993. Knowledge model for selecting and producing reaching movements. J. Mot. Behav. 25, 217–227.

Rumiati, R., Weiss, P., Tessari, A., Assmus, A., Zilles, K., Herzog, H., Fink, G., 2005. Common and differential neural mechanisms supporting imitation of meaningful and meaningless actions. J. Cogn. Neurosci. 17, 1420–1431.

Shmuelof, L., Zohary, E., 2005. Dissociation between ventral and dorsal fMRI activation during object and action recognition. Neuron 47, 457–470.

Sirigu, A., Cohen, L., Duhamel, J.R., Pillon, B., Dubois, B., Agid, Y., 1995. A selective impairment of hand posture for object utilization in apraxia. Cortex 31, 41–55.

Stephan, K.M., Fink, G.R., Passingham, R.E., Silbersweig, D., Ceballos-Baumann, A.O., Frith, C.D., Frackowiak, R.S., 1995. Functional anatomy of the mental representation of upper extremity movements in healthy subjects. J. Neurophysiol. 73, 373–386.

Tanaka, S., Inui, T., 2002. Cortical involvement for action imitation of hand/arm postures versus finger configurations: an fMRI study. NeuroReport 13, 1599–1602.

Thompson-Schill, S., Aguirre, G., D'Esposito, M., Farah, M., 1999. A neural basis for category and modality specificity of semantic knowledge. Neuropsychologia 37, 671–676.

Tucker, M., Ellis, R., 1998. On the relations between seen objects and components of potential actions. J. Exp. Psychol. Hum. Percept. Perform. 24, 830–846.

Wolpert, D.M., 1997. Computational approaches to motor control. Trends Cogn. Sci. 1, 209–216.

Wolpert, D.M., Ghahramani, Z., Flanagan, J., 2001. Perspectives and problems in motor learning. Trends Cogn. Sci. 5, 487–494.

Worsley, K., Friston, K., 1995. Analysis of fMRI time-series revisited—again. NeuroImage 2, 173–182.