Intel Serv Robotics (2008) 1:135–141 · DOI 10.1007/s11370-007-0013-0

SPECIAL ISSUE

Sparse control for high-DOF assistive robots

Odest Chadwicke Jenkins

Received: 20 May 2007 / Accepted: 6 December 2007 / Published online: 8 January 2008
© Springer-Verlag 2008

Abstract Human control of high degree-of-freedom robotic systems can often be difficult due to a sparse control problem. This problem relates to the disparity between the amount of information required by the robot's control variables and the relatively small volume of information a human can specify. To address the sparse control problem, we propose the use of subspaces embedded within the pose space of a robotic system. Driven by human motion, our approach is to uncover 2D subspaces using dimension reduction techniques that allow cursor control, or eventually decoding of neural activity, to drive a robotic hand. We investigate the use of five dimension reduction and manifold learning techniques to estimate a 2D subspace of hand poses for the purpose of generating motion. The use of shape descriptors for representing hand pose is additionally explored for dealing with occluded parts of the hand during data collection. We demonstrate the use of 2D subspaces of power and precision grasps for driving a physically simulated hand from 2D mouse input.

Keywords Assistive robotics · Robot manipulation · Manifold learning · Motion capture

1 Introduction

Developing human interfaces for controlling complex robotic systems, such as humanoid robots and mechanical prosthetic

This work was supported in part by ONR Award N000140710141, ONR DURIP “Instrumentation for Modeling Dexterous Manipulation”, and NSF Award IIS-0534858.

O. C. Jenkins (B)
Department of Computer Science, Brown University,
115 Waterman St., Providence, RI 02912-1910, USA
e-mail: [email protected]

arms, presents an underdetermined problem. Specifically, the amount of information a human can reasonably specify within a sufficiently small update interval is often far less than a robot's degrees-of-freedom (DOFs). Consequently, basic control tasks for humans, such as reaching and grasping, are often onerous for human teleoperators of robot systems, requiring either a heavy cognitive burden or overly slow execution. Such teleoperation problems persist even for able-bodied human teleoperators given state-of-the-art sensing and actuation platforms.

The problem of teleoperation becomes magnified for applications to assistive robotics and, more generally, control of devices by users with physical disabilities. In such applications, feasible sensing technologies, such as electroencephalogram (EEG) [18], electromyography (EMG) [4,6,12,27], and cortical neural implants [8,22], provide a limited channel for user input due to the sparsity of information as well as the noise contained in the sensed signals. Specifically for neural decoding, efforts to produce control signals from neural activity have demonstrated success limited to 2–3 DOFs with bandwidth (estimated liberally) at around 10 bits/s [19].

With such limited bandwidth, control applications have focused on low-DOF systems, such as 2D cursor control [21], planar mobile robots [18], 4 DOF finger motion [1], and discrete control of 4 DOF robot arms [6,11]. In scaling to higher-DOF systems, previous work can roughly be cast into two categories: shared control and supervised action selection from neural decoding. As described by Kim et al. [14], shared control can be seen as a behavior-based system [2] where the human provides high-level control and supervision that is refined and executed by autonomous low-level motion control procedures. Alternatively, current action (or pose) selection techniques divide the space of user input for supervision into discrete decision regions, each mapped to a particular robot action or pose. Bitzer and van der Smagt [4],

using kernel-based classification techniques, have interactively controlled a high-DOF robot hand by classifying EMG data into a discrete set of poses. Work by Hochberg et al. [11] takes a similar approach by associating manually-chosen regions on a screen with robot actions (e.g., grasp object) for cursor-driven robot control of a 4 DOF robot arm and gripper. It is clear that such manually-crafted mappings between user input and robot pose spaces are a viable approach to supervised action selection. However, such mappings do not necessarily allow for scaling towards a wide variety of robot behavior. For example, the majority of existing work focuses on specific types of power grasps for objects, which cover only a small fraction of the Cutkosky grasp taxonomy [7]. If users wish to perform more dexterous manipulation of objects, such as precision grasping of a fork or pencil, it is uncertain how they would select such actions for a control mapping suited to power grasping.

The question we pose is how to scale supervised action selection mappings to increasingly diverse types of behavior, which we address through “unsupervised” dimension reduction techniques.1 Robotic systems geared for general functionality or human anthropomorphism will have significantly more than 2–3 DOFs, posing a sparse control problem. For instance, a prosthetic arm and hand could have around 30 DOF. While this mismatch in input and control dimensionality is problematic, it is clear that the space of valid human arm/hand poses does not fully span the space of DOFs. It is likely that plausible hand configurations exist in a significantly lower-dimensional subspace, as suggested by biomechanical redundancy and statistical studies of human movement [5,15,16,24]. In general, uncovering the intrinsic dimensionality of this subspace is crucial for bridging the divide between decoded user input and the production of robot control commands.

In addressing the sparse control problem, our objective is to discover 2D subspaces of hand poses suitable for interactive control of a high-DOF robot hand, with the longer-term goal of sparse control through 2D cursor-based neural decoding systems. We posit that viable sparse control subspaces should be scalable (not specific to certain types of motion), consistent (two dissimilar poses are not proximal in the subspace), and continuity-preserving (poses near in sequence remain proximal in the subspace). To uncover control subspaces, we follow a data-driven approach, applying manifold learning (i.e., dimension reduction) to hand motion data captured from real human subjects.

In this short paper, we investigate the use of various dimension reduction techniques to estimate a 2D subspace of hand poses for the purpose of sparse human control of robot hands.

1 “Unsupervised” in the context of machine learning refers to estimating the outputs of an unknown function given only its inputs.

Our aim is to uncover a 2D parameterization that transforms sparse user input trajectories into desired hand movements. Our current work focuses on data-driven methods for analyzing power and precision grasps obtained through optical motion capture. In our initial and ongoing experimentation, we apply five dimension reduction techniques for embedding this data in a 2D coordinate space: Principal Components Analysis (PCA) [10], Hessian LLE [9], Isomap [23], Windowed Isomap, and Spatio-temporal Isomap [13]. The use of shape descriptors for representing hand pose is additionally explored for dealing with occluded parts of the hand during data collection. We demonstrate the use of uncovered 2D hand pose parameterizations of power and precision grasps to drive a physically simulated hand and a 13 DOF robot hand from 2D mouse input.

2 The sparse control problem

Our objective is to estimate a sparse control mapping s : Z → X that maps 2-dimensional user input z ∈ ℝ^d (d = 2) into the space of hand poses x ∈ ℝ^D, where d and D are the number of DOFs expressing human user input and robot hand pose, respectively. Shown in Fig. 1, the sparse control mapping sits within a feedback control loop of a human-robot team, serving as a control intermediary between a human decision maker and a robot platform. In this scenario, we assume an environment described at time t with state x_t whose physical integration is described by a (not necessarily known) function f. The robot is also assumed to have a motion control routine that produces appropriate motor commands given a desired state x^g_t (or goal robot pose) from a decision making routine. The human in the loop is responsible for making decisions to produce x^g_t, based on an internal policy π and intentions G, from their perception of state x_t via partial observations y. The user interface is the medium through which the human user specifies the desired pose x^g_t to the robot platform.

The problem addressed in sparse control arises when the human cannot specify, or is overly burdened in specifying, the desired pose x^g_t for the robot. Specifically, we assume the number of variables D ≫ d*, where x^g_t ∈ ℝ^D, z*_t ∈ ℝ^{d*}, and z*_t is the optimal amount of information a given human can communicate to a robot. Obviously, z*_t is a moving limit that varies for each individual human user. However, given that most people can function with a 2D mouse input device, we assume d* ≥ 2 and seek a functional mapping x^g_t = s(z_t) where z_t ∈ ℝ^2. Further, efforts in neural decoding [11,20,22] have shown that disabled individuals physically incapable of moving a 2D mouse can drive a 2D cursor from neural activity.

Fig. 1 Left A diagram of the feedback control loop consisting of a human decision maker perceiving robot state and generating action selection commands. Highlighted in blue, sparse control addresses the situation when the human has limited (sparse) communication with the robot, significantly less than the robot's DOFs. Right Snapshot of a sparse control system, from work by Tsoli and Jenkins [25], with the DLR robot hand driven by a human to grasp a water bottle

Our estimation of the mapping s is founded upon the assumption that the space of plausible hand poses is intrinsically parameterized by a low-dimensional manifold subspace. We assume each hand pose achieved by a human is an example generated within this manifold subspace. It is given that the true manifold subspace of hand poses is likely to have dimensionality greater than two. With an appropriate dimension reduction technique, however, we can preserve as much of the intrinsic variance as possible. As improvements in decoding occur, the dimensionality and fidelity of the input signal will increase, but the same control mapping infrastructure can be leveraged without modification.

We create a control mapping by taking as input a set of observed training hand poses x_i ∈ ℝ^D, embedding this data into latent user control space coordinates z_i ∈ ℝ^2, and generalizing to new poses through out-of-sample projections. Although 2D locations could be manually assigned to hand poses, as an approximation to the latent inverse mapping z_i = s^{-1}(x_i), we assume such procedures will lack scalability as new functionality becomes desired. In contrast, we take a data-driven approach using dimension reduction, namely manifold learning techniques. Dimension reduction estimates the latent coordinates z such that distances between data pairs preserve some criterion of similarity. Each dimension reduction method has a different notion of pairwise similarity and, thus, a unique view of the intrinsic structure of the data. Once embedded, the input-control pairs (x_i, z_i) are generalized through interpolation to allow new (out-of-sample) points [3] to be mapped between user input and pose spaces.
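As a concrete illustration of this out-of-sample step, the sketch below maps a new 2D control point to a hand pose by blending the poses of its k nearest embedded neighbors. The function name and the inverse-distance weighting scheme are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def out_of_sample_pose(z_query, Z, X, k=4, eps=1e-9):
    """Map a 2D control point to a D-dim hand pose by inverse-distance
    interpolation over the k nearest embedded training points.

    Z : (N, 2) latent coordinates z_i;  X : (N, D) hand poses x_i.
    """
    d = np.linalg.norm(Z - z_query, axis=1)   # distances in control space
    nn = np.argsort(d)[:k]                    # k nearest embedded points
    w = 1.0 / (d[nn] + eps)                   # inverse-distance weights
    w /= w.sum()
    return w @ X[nn]                          # convex blend of poses
```

Querying exactly at a training coordinate returns (essentially) that training pose, and queries between embedded points blend smoothly, which is the interpolation behavior the control mapping needs.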

2.1 Dimension reduction overview

Our method relies upon a suitable notion of pairwise similarity for performing dimension reduction. We explored the use of five dimension reduction methods for constructing control mappings: Principal Components Analysis (PCA) [10], geodesic Multidimensional Scaling (Isomap) [23], Hessian LLE (HLLE) [9], temporally-windowed Isomap (Windowed Isomap), and Spatio-temporal Isomap (ST-Isomap) [13]. The first method, PCA, constructs a rank-2 linear transform A about the mean of the data µ:

z = s⁻¹(x) = A(x − µ)   (1)
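Equation (1) amounts to a standard rank-2 PCA fit, which can be sketched in a few lines of NumPy (the function name is our own):

```python
import numpy as np

def fit_pca2(X):
    """Fit the rank-2 PCA map of Eq. (1): z = A (x - mu).

    X : (N, D) matrix of training hand poses. Returns (A, mu), where A is
    (2, D) and its rows are the top-2 principal directions of the data.
    """
    mu = X.mean(axis=0)
    # SVD of the centered data; right singular vectors are the PCs
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    A = Vt[:2]
    return A, mu

# z = A @ (x - mu) embeds a pose; A.T @ z + mu approximately inverts it
```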

PCA is equivalent to performing multidimensional scaling (MDS) [10] with pairwise Euclidean distance. MDS is a procedure that takes a full matrix M, where each element M_{x,x′} specifies the distance between a data pair, and produces an embedding that attempts to preserve distances between all pairs. MDS is an optimization that minimizes the “stress” E of the pairwise distances:

E = √( Σ_x Σ_{x′} ( ‖f(x) − f(x′)‖ − M_{x,x′} )² )   (2)

The second method, Isomap, is MDS where shortest-path distances are contained in M as an approximation of geodesic distance:

M_{x,x′} = min_p Σ_i M′(p_i, p_{i+1})   (3)

where M′ is a sparse graph of local distances between nearest neighbors and p is a sequence of points through M′ indicating the shortest path. Isomap performs nonlinear dimension reduction by transforming the data such that geodesic distance along the underlying manifold becomes Euclidean distance in the embedding. A canonical example of this transformation is the “Swiss roll”, where input data generated by a 2D manifold are contorted into a roll in 3D. Given a sufficient density of samples, Isomap is able to flatten this Swiss roll data into its original 2D structure, up to an affine transformation.
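This pipeline (k-nearest-neighbor graph, shortest paths for Eq. (3), then classical MDS) can be sketched minimally as follows, assuming SciPy's dense-graph convention that infinities mark missing edges and that the neighborhood graph is connected; the function name and defaults are our own:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap2(X, k=10):
    """Minimal Isomap sketch: embed rows of X into 2D by classical MDS on
    shortest-path (approximate geodesic) distances over a kNN graph."""
    D = squareform(pdist(X))                  # pairwise Euclidean distances
    N = len(X)
    # keep only each point's k nearest neighbors (the sparse graph M')
    W = np.full_like(D, np.inf)
    for i in range(N):
        nn = np.argsort(D[i])[1:k + 1]
        W[i, nn] = D[i, nn]
        W[nn, i] = D[nn, i]
    M = shortest_path(W, method="D", directed=False)   # geodesics, Eq. (3)
    # classical MDS: double-center squared distances, take top-2 eigenpairs
    J = np.eye(N) - np.ones((N, N)) / N
    B = -0.5 * J @ (M ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```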

Fig. 2 Top Snapshot of a hand instrumented with reflective markers for optical motion capture. Bottom Our physically simulated hand attempting to achieve the captured hand configuration above

PCA and Isomap assume input data are unordered (independent, identically distributed) samples from the underlying manifold, ignoring the temporal dependencies between successive data points. However, human hand motion has a sequential order due to the temporal nature of acting in the real world. Windowed Isomap accounts for these temporal dependencies in a straightforward manner by windowing the input data, x̄_i = [x_i · · · x_{i+w}], over a horizon of length w. This augmentation of the input data effectively computes the local distances M′ between trajectories starting from each input point rather than between individual data points.
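The windowing step itself is simple to sketch: stack each run of w consecutive frames into a single feature vector, so that Euclidean distances compare short trajectories rather than single poses (the function name and default are our own):

```python
import numpy as np

def window_data(X, w=10):
    """Temporal windowing used by windowed Isomap: replace each frame x_i
    with the stacked trajectory of the next w frames.

    X : (N, D) frames.  Returns (N - w + 1, w * D) windowed features.
    """
    N, D = X.shape
    return np.stack([X[i:i + w].ravel() for i in range(N - w + 1)])
```

The output can then be fed to any distance-based embedding method in place of the raw frames.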

ST-Isomap further augments the local distances between the nearest neighbors of windowed Isomap. ST-Isomap reduces the distance between local neighbors that are spatio-temporal correspondences, i.e., representative of the k best matching trajectories, to each data point. Distances between a point and its spatio-temporal correspondences are reduced by some factor and propagated into global correspondences by the shortest-path computation. The resulting embedding attempts to register these correspondences into proximity. In this respect, ST-Isomap is related to time-series registration using Dynamic Time Warping [17]. The distinctions are that ST-Isomap does not require segmented or labeled time-series and uses a shortest-path algorithm for dynamic programming.

All of these methods assume the structure of the manifold subspace is convex. Hessian LLE [9] avoids this convexity limitation by preserving local shape. Local shape about every data point is defined by an estimate of the Hessian H_{x,x′} tangent to every point. H_{x,x′} specifies the similarity of x to neighboring points x′; non-neighboring points have no explicit relationship to x. This sparse similarity matrix H_{x,x′} is embedded in a manner similar to MDS, where global pairwise similarity is implicit in the local neighborhood connectivity.

3 Initial results

The dimension reduction methods mentioned above were applied to human hand movement collected through optical motion capture. The performer's hand was instrumented with 25 reflective markers, approximately 0.5 cm in width, as shown in Fig. 2. These markers were placed at critical locations on the top of the hand: 4 markers for each digit, 2 for the base of the hand, and 3 on the forearm. A Vicon motion capture system localized each of these markers over time.

The human performer executed several power and precision grips with different orientations of the hands. Power grasps involved the human closing all their fingers and thumb around a hypothetical cylindrical object with a diameter of 1.5 in. Precision grasps consisted of the thumb making contact with each finger individually. The resulting dataset consisted of approximately 3,000 frames of 75 DOF data (25 markers in 3D). Markers that dropped out (i.e., were occluded such that their location could not be estimated) were assigned a default location during intervals of occlusion.

Embeddings produced by each of the dimension reduction methods are shown in Fig. 3, with neighborhood sizes k = 5 for HLLE, k = 50 (Isomap), k = 10 (windowed Isomap), and k = 3 (ST-Isomap). Windowing occurred over a 10-frame horizon. ST-Isomap reduced distances by a factor of 10 between points that are the k = 3 best local correspondences.

The quality of each embedding was assessed based on the properties of consistency (avoiding embedding locations that merge disparate hand configurations) and temporal continuity (preserving the proximity of poses that are sequentially adjacent). Inconsistency is an undesired artifact that is related to residual variance (or stress) not accounted for in the embedding. The effect of residual variance differs across methods and is only of consequence with respect to inconsistency. We were unable to find a reasonable embedding for HLLE, suggesting the space of marker configurations does not have continuous local shape. The PCA embedding best preserves the consistency among the precision grasps. However, this embedding has many temporal discontinuities due to marker dropout, and inconsistencies for power grasps. The Isomap embedding suffers from the same discontinuities, with fewer inconsistencies of the power grasp and greater inconsistencies of the

[Fig. 3 panels: PCA embedding, HLLE embedding, Isomap embedding, Windowed MDS embedding, ST-Isomap embedding]

Fig. 3 Embedding of the hand marker positions using (in order) PCA, Hessian LLE, Isomap, windowed Isomap, and ST-Isomap. Power and precision grasp configurations are shown in blue and red, respectively. A black line is drawn between temporally adjacent configurations

Fig. 4 A visualization of controlling a power grasp for a physically simulated hand using 2D mouse control. A mouse trajectory in the windowed Isomap embedding (shown by the cursor position) is mapped into hand configurations. This trajectory shows an initial idle configuration, preshaping, grasp closure, and rotation

precision grasp. The windowed Isomap embedding removes the discontinuities of the Isomap embedding by effectively performing a square-wave low-pass filter on the input before the shortest-path computation. The ST-Isomap embedding was prone to overregistration of the data, which we attribute to a lack of adaptivity in the number of correspondences across local neighborhoods.

The quality of each embedding with respect to sparse robot control was assessed by driving a physically simulated robot hand. The simulated hand consisted of 27 DOFs: 4 DOFs per finger, 7 thumb DOFs, and 6 wrist DOFs for global position and orientation. The kinematics of the hand were created manually using the marker positions from a single frame in the data set. Cylindrical geometries were used to represent the hand. Our simulation engine, described in [26], used the Open Dynamics Engine to integrate physics over time. The desired configuration of the hand is set by the nearest neighbor, in the 2D embedding, of the current mouse position. Motion control of the hand was performed using our implementation of Zordan and van der Horst's optical motion capture control procedure [28]. This motion controller takes a desired pose as the 3D locations of a set of uniquely identified markers that have corresponding locations on the hand. Springs of critical damping are attached between each desired marker location and the corresponding location on the hand. The equilibrium of these springs brings the hand to rest at the desired hand pose.
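The control loop just described (nearest-neighbor pose selection in the embedding, followed by critically damped marker springs) can be sketched as follows; the function names, gains, and per-marker treatment are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def desired_pose(mouse_z, Z, X):
    """Select the desired hand pose: the training pose whose 2D embedding
    coordinate is nearest the current mouse position."""
    i = np.argmin(np.linalg.norm(Z - mouse_z, axis=1))
    return X[i]

def spring_forces(p, v, p_des, k=200.0, m=1.0):
    """Critically damped springs pulling each marker p (with velocity v)
    toward its desired location p_des; c = 2*sqrt(k*m) avoids overshoot."""
    c = 2.0 * np.sqrt(k * m)
    return k * (p_des - p) - c * v
```

At equilibrium (markers at their targets, zero velocity) the forces vanish, bringing the simulated hand to rest at the desired pose.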

Shown in Fig. 4, a human user is able to move a cursor around the embedding to perform simple power grasping with the windowed Isomap embedding. Although we found the windowed Isomap embedding to be the most promising, none of these methods appeared to be directly applicable to yield reasonable input spaces for sparse control. The problem stems mainly from how precision grasps are embedded into 2D. Our intuition is that proper estimation of nearest neighbors is difficult to achieve. As a result, “short circuits” are created in the shortest path and cause inconsistency in the embedding. Consequently, control over precision grasps often causes the system to be unstable due to violent changes in the desired hand pose from a small mouse movement in certain areas of the embedding. As future work, we are exploring the use of graph denoising techniques, such as through belief propagation algorithms [25], to allow for embedding of a larger number of types of hand motion while preserving consistency and continuity.

Fig. 5 Embedding of the centralized moments for hand marker positions using (left-to-right) PCA, Isomap, and windowed Isomap. Power and precision grasp configurations are shown in blue and red, respectively. A black line is drawn between temporally adjacent configurations

Marker dropout from occluded parts of the hand was also a large factor in the embedding inconsistency and discontinuity. To account for marker dropout, statistical shape descriptors were used to recast the input space with features less sensitive to specific marker positions. Centralized moments [10] up through the 16th order were computed for the non-occluded markers at every frame of the motion. The resulting data set contained 3,000 data points with 4,096 dimensions and was used as input to PCA, Isomap (k = 20), and windowed Isomap (k = 5). As shown in Fig. 5, all three of these methods produced embeddings that have three articulations extending from a central concentrated mass of poses, have better consistency between power and precision grasps, and exhibit no significant discontinuities. From our preliminary control tasks, the PCA embedding provides the best spread across the 2D space with comparable levels of consistency to windowed Isomap.
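One plausible reading of "up through the 16th order" that yields a 4,096-dimensional descriptor is to take all exponent triples p, q, r ∈ {0, …, 15}, since 16³ = 4096. The sketch below makes that assumption (the function name is ours):

```python
import numpy as np

def centralized_moments(P, max_exp=16):
    """Centralized moments mu_pqr = sum_i (x_i-xbar)^p (y_i-ybar)^q (z_i-zbar)^r
    of a 3D point set P (N, 3), for all exponents 0 <= p, q, r < max_exp.
    max_exp=16 yields a 16^3 = 4096-dimensional descriptor per frame."""
    Q = P - P.mean(axis=0)                     # center the marker cloud
    # powers[d][e, i] = Q[i, d] ** e, precomputed per axis
    powers = [np.stack([Q[:, d] ** e for e in range(max_exp)]) for d in range(3)]
    # sum over points i of x^p * y^q * z^r for every (p, q, r)
    feats = np.einsum('pn,qn,rn->pqr', powers[0], powers[1], powers[2])
    return feats.ravel()
```

Because the moments are computed over whichever markers are visible in a frame, the descriptor degrades gracefully under dropout instead of depending on any single marker's coordinates.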

4 Conclusion

We have presented the sparse control problem in human control of high-DOF robotic and assistive devices. Viewing control as a human-robot collaboration, we consider the case where a human user cannot specify, through a user interface, the number of variables needed to express a desired robot pose. We have explored the use of dimension reduction for estimating low-dimensional 2D subspaces in such high-DOF control scenarios. Our aim in this work has been to provide a means for controlling high-dimensional systems with sparse 2D input by leveraging current decoding capabilities. We have taken a data-driven approach to this problem by using dimension reduction to embed motion performed by humans into 2D. Five dimension reduction methods were applied to motion capture data of human-performed power and precision grasps. Centralized moments were utilized towards embedding with greater consistency and utilization of the embedding space. Using our embeddings, successful control of power grasps was achieved, but execution of precision grasping and other forms of hand movement requires further study.

We expect that further exploration of neighborhood graph denoising and of shape descriptors suitable for embedding hand poses offers an avenue for addressing these issues toward functional sparse control for object manipulation.

Acknowledgments The author thanks Aggeliki Tsoli, Morgan McGuire, Michael Black, Greg Shaknarovich, and Mijail Serruya for simulation, motion data assistance, and helpful discussions.

References

1. Afshar P, Matsuoka Y (2004) Neural-based control of a robotic hand: evidence for distinct muscle strategies. In: IEEE international conference on robotics and automation
2. Arkin RC (1998) Behavior-based robotics. MIT Press, Cambridge
3. Bengio Y, Paiement JF, Vincent P (2003) Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. In: Advances in neural information processing systems, vol 16 (NIPS 2003), Vancouver, British Columbia
4. Bitzer S, van der Smagt P (2006) Learning EMG control of a robotic hand: towards active prostheses. In: IEEE international conference on robotics and automation
5. Ciocarlie M, Goldfeder C, Allen P (2007) Dimensionality reduction for hand-independent dexterous robotic grasping. In: IEEE international conference on intelligent robots and systems
6. Crawford B, Miller K, Shenoy P, Rao R (2005) Real-time classification of electromyographic signals for robotic control. In: National conference on artificial intelligence (AAAI 2005)
7. Cutkosky M (1989) On grasp choice, grasp models, and the design of hands for manufacturing tasks. IEEE Trans Robot Autom 5(3):269–279
8. Donoghue J, Nurmikko A, Friehs G, Black M (2004) Development of neural motor prostheses for humans. Adv Clin Neurophysiol (Supplements to Clinical Neurophysiology) 57
9. Donoho D, Grimes C (2003) Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci 100(10):5591–5596
10. Duda RO, Hart PE, Stork DG (2000) Pattern classification, 2nd edn. Wiley-Interscience
11. Hochberg L, Serruya M, Friehs G, Mukand J, Saleh M, Caplan A, Branner A, Chen D, Penn R, Donoghue J (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164–171
12. Iberall T, Sukhatme GS, Beattie D, Bekey GA (1994) On the development of EMG control for a prosthesis using a robotic hand. In: IEEE international conference on robotics and automation, San Diego, pp 1753–1758
13. Jenkins OC, Mataric MJ (2004) A spatio-temporal extension to Isomap nonlinear dimension reduction. In: The international conference on machine learning (ICML 2004), Banff, pp 441–448
14. Kim H, Biggs S, Schloerb D, Carmena J, Lebedev M, Nicolelis M, Srinivasan M (2006) Continuous shared control stabilizes reach and grasping with brain-machine interfaces. IEEE Trans Biomed Eng 53(6):1164–1173
15. Lin J, Wu Y, Huang T (2000) Modeling the constraints of human hand motion. In: IEEE workshop on human motion
16. Mason CR, Gomez JE, Ebner TJ (2001) Hand synergies during reach-to-grasp. J Neurophysiol 86(6):2896–2910
17. Myers CS, Rabiner LR (1981) A comparative study of several dynamic time-warping algorithms for connected word recognition. Bell System Tech J 60(7):1389–1409
18. del R. Millan J, Renkens F, Mourino J, Gerstner W (2004) Brain-actuated interaction. Artif Intell 159(1–2):241–259. doi:10.1016/j.artint.2004.05.008
19. Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV (2006) A high-performance brain–computer interface. Nature 442:195–198
20. Serruya M, Caplan A, Saleh M, Morris D, Donoghue J (2004) The BrainGate pilot trial: building and testing a novel direct neural output for patients with severe motor impairment. In: Soc for Neurosci Abstr
21. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP (2002) Brain–machine interface: instant neural control of a movement signal. Nature 416:141–142
22. Taylor D, Tillery SH, Schwartz A (2003) Information conveyed through brain control: cursor versus robot. IEEE Trans Neural Syst Rehabil Eng 11(2):195–199
23. Tenenbaum JB, de Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319–2323
24. Todorov E, Ghahramani Z (2004) Analysis of the synergies underlying complex hand manipulation. In: Intl conference of the IEEE engineering in medicine and biology society
25. Tsoli A, Jenkins OC (2007) 2D subspaces for user-driven robot grasping. In: Robotics: science and systems workshop on robot manipulation: sensing and adapting to the real world
26. Wrotek P, Jenkins O, McGuire M (2006) Dynamo: dynamic data-driven character control with adjustable balance. In: ACM SIGGRAPH video game symposium, Boston
27. Zecca M, Micera S, Carrozza MC, Dario P (2002) Control of multifunctional prosthetic hands by processing the electromyographic signal. Crit Rev Biomed Eng 30(4–6):459–485
28. Zordan VB, van der Horst NC (2003) Mapping optical motion capture data to skeletal motion using a physical model. In: SCA '03: proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on computer animation. Eurographics Association, Aire-la-Ville, pp 245–250
