
G. Loffler and H. S. Orbach Vol. 20, No. 8 /August 2003 /J. Opt. Soc. Am. A 1461

Factors affecting motion integration

Gunter Loffler and Harry S. Orbach

Department of Vision Sciences, Glasgow Caledonian University, Cowcaddens Road, Glasgow G4 0BA, UK

Received May 14, 2002; revised manuscript received August 7, 2002; accepted March 18, 2003

The perceived direction of motion of a featureless contour inside a circular aperture is always perpendicular to the contour’s orientation, regardless of its true motion (the aperture problem). This study investigates the circumstances under which unambiguous feature motion (of line terminators, single dots, or truncations of a D6 pattern) in adjacent apertures can alter the perceived direction of such featureless contours. We find that integration mechanisms responsible for motion capture are fairly robust against misorientations and contrast manipulations of individual components, are sensitive to differences in spatial frequencies, and scale with pattern size. Motion capture is not diminished when a D6 profile is substituted for the square-pulse profile of a line and is independent of the visibility of the apertures, indicating that object interpretations and three-dimensional analyses of a scene are less important than has been postulated previously. These results have strong implications for the neuronal hardware underlying the integration of motion signals across space and provide a framework for global motion models. © 2003 Optical Society of America

OCIS codes: 330.4150, 330.5380, 330.7310.

1. INTRODUCTION

Neurons in the early stages of motion processing have spatially restricted receptive fields. Their locally obtained motion measurements will often be at variance with the true motion of an object. Hence the veridical perception of object motion requires the integration of such local signals over space.

It is therefore not surprising that substantial effort has been focused on elucidating the way in which the visual system computes the rigid1 motion of extended objects. An early proposal that the visual system may implement a mathematical rule combining two or more ambiguous signals (the intersection of constraints, or IOC rule2) has been shown to be too simplistic. There are many occasions when the IOC rule does not match human perception inside single apertures3–5 or across spatially separated parts of an object.6–8

An alternative approach bases the determination of object motion on features such as corners, terminators, and dots. Wallach9 was the first to note that only the velocity estimates obtained at these particular points of an object match the physical direction of the object’s motion. He proposed that these feature signals would subsequently be employed by the visual system to disambiguate information obtained at other sites.

There is abundant psychophysical evidence for the substantial role that such features play in the process of computing global object motion. The presence of features has been shown in many circumstances to determine motion perception.6,9–13 Other studies revealed that the salience of features is critical for motion capture to occur: diminishing salience by reducing contrast, decreasing presentation time, or adding noise abolishes motion capture.14,15

The last two studies, while pointing toward the critical role that features play, were concerned with single, continuous objects and could not therefore address a number of fundamental issues on how features constrain ambiguous signals. These issues can be addressed by presenting the stimuli in spatially separated apertures, where the motion of a line segment can be varied independently from that of the features. By doing so, one can assess the relative strength of feature signals compared with that of contours, whether or not motion integration relies on information from within a single spatial-frequency channel, and what role figural interpretation or a three-dimensional analysis of a scene plays in this process.

2. GENERAL METHODS

The stimulus configuration is indicated in Fig. 1. Stimuli were drawn on a midgray background (105 cd/m²), and a fixation point was always present at the center of the screen. The stimulus was presented in three spatially separated circular apertures. Such an arrangement could equally well be described as a single object moving behind a mask with three apertures or as three separate and independently moving objects. The luminance and color of the virtual mask were identical to the background of the pattern to simplify the theoretical analysis. Thus the aperture borders were invisible (indicated by dashed lines).

The central aperture always contained a one-dimensional stimulus (e.g., a line segment). Such a stimulus inside a circular aperture is ambiguous in the sense that only the motion component perpendicular to its orientation can be retrieved, while the component along its orientation cannot (the aperture problem). Despite its indeterminate nature, such a featureless, one-dimensional stimulus in isolation is always perceived as a rigid object translating perpendicular to its orientation. In contrast to the featureless pattern seen “behind” the central aperture, the peripheral apertures displayed unambiguous motion of two-dimensional object features: line terminators, truncations of sixth spatial derivatives of Gaussians (D6s), or single dots. The peripheral apertures were centered along the axis of the orientation of the central contour. All three apertures had diameters of 1.6° and were shifted by 0.93° perpendicular to the line’s orientation away from the fixation point.

The independent variable was the distance between the apertures (the gap). The three translating stimuli always moved in synchrony (identical speed and direction of motion) away from the fixation point. Given the fixed position of the fixation point, this resulted in different positions of the apertures for up versus down trials and a clockwise versus counterclockwise orientation of the line. The gap between apertures varied from 0.4° to 4.5°, with, typically, five different gap distances: 0.4°, 1.2°, 2.0°, 3.0°, and 4.5°.

Stimuli were presented on a monochrome Phillips Brightview monitor controlled by an Apple Macintosh 7500 computer. The frame refresh rate of the monitor was set to 66.7 Hz and the spatial resolution to 640 × 480 pixels (19.2 pixels/cm). Chin and forehead rests were used to maintain a constant viewing distance of 80 cm (with the exception of experiment 2, which was conducted at 240 cm). At 80 cm, the vertical and horizontal interpixel separations subtended 0.037°. Viewing was always binocular.

A. Stimuli

1. Lines

The contrast cross-section profiles of the lines could not be strict square-pulse functions: the pixelated nature of the monitor display would produce a jagged appearance for orientations away from the cardinal axes. To avoid such pixelation artefacts, exponential functions were used to smooth the edges and terminations of the line (anti-aliasing). For example, the contrast profile of a vertical line is given by

f(x, y) = C exp[−(x/σ_x)^N_x] exp[−(y/σ_y)^N_y],   (1)

where C is the pattern contrast. The space constants σ_x (0.125°) and σ_y (2.42°) were chosen to give the desired line width and length (see below), and the exponents N_x (8) and N_y (240) were chosen to give an equally smooth appearance for line edges and tips.

Fig. 1. General methods. The diagram shows a 45°-oriented line (width = 0.25°) behind a three-aperture mask (dashed lines indicate invisible 1.6°-diameter apertures). The two outer apertures include the line terminators. The permanent fixation point was 0.93° away from the center of the central aperture, which always contained a featureless contour (here a line segment). Two absolute directions of motion (up and down) and five different inter-aperture gaps were employed in the experiments.
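As a concrete illustration, Eq. (1) is easy to evaluate numerically. The sketch below is ours, not the authors' code (the function name and the use of NumPy are our assumptions); the parameter values are those quoted above:

```python
import numpy as np

def line_profile(x, y, C=-0.97, sx=0.125, sy=2.42, Nx=8, Ny=240):
    """Anti-aliased line profile of Eq. (1).

    x, y are in degrees; C is the pattern contrast (negative: dark line on a
    gray background). The large even exponents Nx, Ny give a nearly flat top
    with smooth flanks, approximating a square pulse without pixelation
    artefacts at oblique orientations.
    """
    return C * np.exp(-np.abs(x / sx) ** Nx) * np.exp(-np.abs(y / sy) ** Ny)

print(line_profile(0.0, 0.0))    # peak: equals C
print(line_profile(0.12, 0.0))   # near half-height
```

The half-height point falls near x = σ_x = 0.125°, consistent with the full width at half-height of 0.25° quoted in Subsection 2.A.4.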

2. Truncated D6 Patterns

The truncated D6 patterns were identical to the lines in all but one respect: the “squarish” profile of a line was replaced by a contrast cross-section profile given by the sixth spatial derivative of a Gaussian,16 mathematically defined as

D6(x) = (1/15) [15 − 90(x/σ)² + 60(x/σ)⁴ − 8(x/σ)⁶] exp(−x²/σ²).   (2)

A D6 profile is band limited in spatial frequency and has a full spatial-frequency bandwidth of one octave at half-amplitude. The peak spatial frequency of a D6 can be calculated as ν = √3/(πσ) and was set to 1.7 cycles per degree (cpd) (σ = 0.32°).
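A numerical sketch (ours, with assumed function names) of the D6 profile of Eq. (2), with a check of the quoted peak frequency and one-octave bandwidth:

```python
import numpy as np

def d6(x, sigma=0.32):
    """Sixth spatial derivative of a Gaussian, Eq. (2), scaled to peak 1 at x = 0."""
    u = (x / sigma) ** 2
    return (1.0 / 15.0) * (15 - 90 * u + 60 * u ** 2 - 8 * u ** 3) * np.exp(-u)

sigma = 0.32  # deg, as in the text
# Differentiating a Gaussian six times multiplies its amplitude spectrum by f^6,
# so |FT| ~ f^6 exp(-(pi f sigma)^2), which peaks at nu = sqrt(3)/(pi sigma).
nu = np.sqrt(3) / (np.pi * sigma)            # ~1.72 cpd for sigma = 0.32 deg

f = np.linspace(0.01, 10.0, 200000)          # cpd
spectrum = f ** 6 * np.exp(-(np.pi * f * sigma) ** 2)
nu_numeric = f[np.argmax(spectrum)]

# Full bandwidth at half-amplitude, in octaves:
above = f[spectrum >= 0.5 * spectrum.max()]
bandwidth_octaves = np.log2(above[-1] / above[0])
print(nu, nu_numeric, bandwidth_octaves)
```

Both the analytic and the numerical peak come out near 1.72 cpd, and the half-amplitude bandwidth is close to one octave, matching the properties stated above.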

3. Single Dots

The luminance profile of the dot stimuli was given by the circularly symmetric difference of two Gaussians (DOG):

DOG(x, y) = C {exp[−(x² + y²)/σ₁²] − (σ₁/σ₂)² exp[−(x² + y²)/σ₂²]}.   (3)

Such a DOG integrates to zero over space, resulting in zero average luminance intensity. The space constants were σ₁ = 0.15° and σ₂ = 0.225°.
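The zero-integral property of Eq. (3) can be verified numerically; this sketch is ours, using the parameter values quoted above:

```python
import numpy as np

def dog(x, y, C=-0.97, s1=0.15, s2=0.225):
    """Difference-of-Gaussians dot profile, Eq. (3). x, y in degrees."""
    r2 = x ** 2 + y ** 2
    return C * (np.exp(-r2 / s1 ** 2) - (s1 / s2) ** 2 * np.exp(-r2 / s2 ** 2))

# Each Gaussian contributes a volume of pi * C * s1^2 (the (s1/s2)^2 factor
# equalizes the two volumes), so the DOG integrates to zero over space:
g = np.linspace(-2.0, 2.0, 801)   # deg; the profile is negligible beyond ~1 deg
X, Y = np.meshgrid(g, g)
dx = g[1] - g[0]
print(dog(X, Y).sum() * dx * dx)  # ~0: zero average luminance
```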

4. Stimulus Parameters

The contrast, C, of each of the patterns was set to −97% (black on a gray background). The space constants for dots, lines, and D6s were chosen to give similar spatial extents. Thus the diameter of the central lobe of the DOG was ~0.36°, the width of the D6 central lobe was 0.28°, and the width of the lines (defined by full width at half-height) was 0.25°.

B. Procedure

A uniform midgray background appeared together with the fixation point at the center of the screen before each trial. Subjects initiated each trial by pressing the mouse button, which was followed by a 300-ms delay before the stimulus appeared. Subjects were asked to indicate the perceived direction of motion by using the method of adjustment. Two black dots appeared on the screen after each presentation. One dot was fixed at the center of the screen. The other dot appeared at a randomly determined position on the circumference of a circle of 3.7° radius. By moving the mouse, observers could adjust the position of the second dot so that the orientation given by the two dots was parallel to the perceived direction of motion. A mouse click recorded each judgment.

The subject’s task was to report the direction of motion for the part of the stimulus moving inside the central aperture (see Fig. 1). Short presentation times were used, and the fixation point was displayed throughout the duration of the experiment to minimize the effects of eye movements. Presentation time was 195 ms.

In a previous study17 we found perception of the direction of motion of translating lines to exhibit anisotropic behavior, with maximum and minimum biases for oblique and cardinal-axis motions, respectively. To avoid the potential influence of such anisotropic biases, we employed the two physical directions of motion that were perceived veridically in the previous study: along the vertical meridian, upward (90°) and downward (270°). (In our convention, 0° equals a rightward direction, and positive angles indicate clockwise rotations.)

Six line tilts (the angle between the direction of motion and the orientation of the stimuli) of ±30°, ±45°, and ±60° were randomly presented within an experimental session. (According to our convention, a −45° tilt and a 90° direction of motion represent an obliquely oriented line moving up; see Fig. 1.) This results in a total of twelve different conditions (six tilts and two absolute directions of motion) for each inter-aperture gap, which were repeated eight times within each session. These data were collapsed because performance was indistinguishable for upward versus downward motion and for clockwise versus counterclockwise line tilt. Thus there were a total of 32 repetitions for each absolute line tilt and gap size.
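The trial bookkeeping above can be sketched as follows (our illustration, not the authors' code):

```python
from collections import Counter
from itertools import product

tilts = [-60, -45, -30, 30, 45, 60]   # deg: six line tilts
directions = [90, 270]                # deg: upward and downward motion
reps = 8                              # repetitions per condition per session

# Twelve conditions (6 tilts x 2 directions), each repeated 8 times per gap:
trials = [(t, d) for t, d, _ in product(tilts, directions, range(reps))]
print(len(trials))                    # 96 trials per gap and session

# Collapsing up/down and clockwise/counterclockwise tilt gives 32 repetitions
# for each absolute tilt (30, 45, 60 deg):
per_abs_tilt = Counter(abs(t) for t, _ in trials)
print(per_abs_tilt)
```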

Stimuli always moved at 5°/s. Three subjects with corrected-to-normal vision (visual acuity of 6/6 or better) participated in the experiments. Two were naïve with respect to the purpose of the study. Feedback was not provided.

3. EXPERIMENTS

The experiments were designed to elucidate the circumstances under which the presence of a feature can cause the perception of contour motion to shift from its “normal” direction. The aim of the first set of experiments was to determine the properties of the mechanisms of motion integration. The second set of experiments investigated the effects of higher-level influences, such as object or scene interpretation, on this process.

A. Part I: Properties of the Motion Integration Mechanisms

1. Experiment 1: Effect of the Gap between Apertures

The first experiment aimed to quantify the spatial extent over which a line terminator can capture one-dimensional contour motion. The graphs in Fig. 2 plot perceived direction of motion as a function of the inter-aperture gap for three tilts: 45° (Fig. 2A), 30° (Fig. 2B), and 60° (Fig. 2C).

Perceived motion on the ordinate is plotted relative to the true, physical direction of motion.18 Consequently, any point falling along the y = 0 line represents a veridical perception. On the other hand, an isolated line segment (such as the one inside the central circular aperture) is always perceived as moving orthogonal to its orientation. In the case of a 45° tilt, as depicted by the icon on the right of Fig. 2A, this corresponds to y = +45°.

All three observers showed the same pattern when asked to report the perceived direction of motion for the line segment shown in the central aperture: the veridical direction of motion was perceived when the terminator apertures were close to the central segment (gap = 0.4°), perception was reported to be approximately perpendicular for the largest gap (4.5°), and intermediate directions were reported for gap sizes between these extremes. Because the direction of motion for a single, isolated contour was always perpendicular to its orientation, any direction different from the perpendicular indicates the influence of the terminators in the peripheral apertures. To permit quantitative comparisons between different conditions, we fitted the averaged performance with a Gaussian, shown in each graph by a solid curve (with space constant, amplitude, and asymptote as free parameters).
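The fit described above might be sketched as follows. The functional form, the synthetic data, and the use of scipy's curve_fit are our assumptions for illustration; the paper specifies only that a Gaussian with space constant, amplitude, and asymptote as free parameters was used:

```python
import numpy as np
from scipy.optimize import curve_fit

def capture_curve(gap, amplitude, space_const, asymptote):
    """Perceived direction (deg, relative to veridical) vs. inter-aperture gap.

    Small gaps: perception near veridical (asymptote - amplitude, close to 0).
    Large gaps: perception approaches the perpendicular (the asymptote).
    """
    return asymptote - amplitude * np.exp(-(gap / space_const) ** 2)

# Hypothetical observer data, loosely shaped like the 45-deg-tilt condition:
gaps = np.array([0.4, 1.2, 2.0, 3.0, 4.5])           # deg
perceived = np.array([4.0, 18.0, 35.0, 42.0, 44.0])  # deg from veridical

params, _ = curve_fit(capture_curve, gaps, perceived, p0=[45.0, 1.5, 45.0])
amplitude, space_const, asymptote = params
print(space_const)   # the space constant quantifies the integration range
```

The fitted space constant is the single number used below to compare conditions: it measures how far (in degrees of visual angle) terminator capture extends.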

The other tilts, 30° (Fig. 2B) and 60° (Fig. 2C), yielded the same pattern of responses. Note that, owing to their different orientations, the perpendicular perceptions are +60° and +30°, respectively.

Similar results on motion capture of a one-dimensional segment by nearby object features have been obtained by Ben-Av and Shiffrar10,19 despite the fact that their experimental design differed substantially from the one employed here. Ben-Av and Shiffrar outlined individual apertures and in addition provided binocular depth cues consistent with an object partly covered by an occluder. On the basis of previous experiments,20 they assumed that such cues were required for motion capture to occur. Contrary to this assumption, our results indicate that neither monocular nor binocular depth cues are necessary. This has important implications with respect to the influence of postulated higher-level mechanisms, such as a three-dimensional analysis of a scene, on motion integration.

2. Experiment 2: Scale Variance

The motivation for the second experiment was to determine whether lateral interactions are effective over the same inter-aperture gaps when the stimulus size is changed. Understanding whether or not integration is scale invariant allows one to draw conclusions about the underlying neuronal hardware responsible for motion capture across space.

To investigate scaling, we replicated the first experiment at an increased viewing distance. The results for the two viewing distances (80 cm and 240 cm), which scale the stimulus size by a factor of 3, are virtually indistinguishable when plotted on correspondingly scaled abscissa axes (Figs. 2 and 3). As the scales differ by this factor, one can conclude that integration lengths do not remain constant but scale with image magnification. A comparison of the space constants of the Gaussians that were fitted to the average data shows linear scaling: space constants are 2.1° and 0.7° for viewing distances of 80 cm and 240 cm, respectively.

Fig. 2. Perceived direction of motion for the line segment inside the central aperture as a function of inter-aperture gap for lines with 45° (A), 30° (B), and 60° (C) tilts. The arrows in the icons to the right of the graphs depict the perceived direction of motion of the line segments. Error bars are standard errors of the mean (N = 32). The bold solid curves represent averaged responses. Regardless of line tilt, observer perception for small gaps was determined by the veridical signal from the terminators in the outer apertures and shifts toward the line’s orthogonal when gap size was increased.

3. Experiment 3: Terminator Contrast

The influence of the contrast of terminators on motion disambiguation has been studied previously.15 In this earlier study, motion integration was measured for continuous lines of varying contrast. Observers required approximately 50% contrast to correctly judge the difference between the real direction of motion (provided by the terminators) and the perpendicular motion (determined by the line’s orientation). This is a surprisingly high value compared with that for pattern vision, where human observers exhibit contrast thresholds as low as 0.5%. Lorenceau et al.15 could not directly isolate the relative strength of the terminator contribution by changing the contrast of the whole line: reducing the contrast of the line, of course, affects the line segment (perpendicular) and the terminator (veridical) signals simultaneously. One way to assess the relative strengths of these two contributions is to simply reduce the contrast of the terminated segments in a three-aperture experiment, while keeping the center segment contrast high.

Comparison of the average data shown in Fig. 4 for 50% contrast and the results from the first experiment (Fig. 2) for 100% contrast terminators reveals no significant difference. This indicates that the signal from terminators at 50% contrast is strong enough to capture a 100% contrast line segment. However, reducing terminator contrast to 20% results in a diminished range of capture; further contrast reductions result in an almost complete loss of effective interactions. When terminators are at 5% contrast, the perceived direction of motion, even for the smallest aperture gaps, is shifted to within 10° of the perpendicular, one-dimensional line segment signal. Thus terminator signals do not have any significant influence on the line segments’ direction of motion if their contrast is low. This lack of terminator impact at low contrasts cannot be explained in terms of their reduced visibility, because at 5% contrast, subjects could see them clearly. These results are in qualitative agreement with those for continuous line stimuli inside single apertures15 but also permit estimation of the strength of a terminator signal relative to that of a line segment.

There is an additional interesting issue (pointed out by one reviewer): is it the absolute contrast of the terminators that matters, or is it the contrast similarity of the central patch and the terminators? To address this question, we repeated the experiment but lowered the contrast of the central segment from 100% to 20%.

Comparison of Fig. 5 with Fig. 2A shows that 100%-contrast terminators exert an even stronger capturing effect on a 20%-contrast segment than on a 100%-contrast segment. Moreover, capture between high-contrast terminators and low-contrast segments is stronger than between three contrast-matching (20%) stimuli. Hence it is indeed the absolute contrast, and not the contrast similarity, that is the crucial factor for this kind of motion integration. It is interesting to note that terminators of even the lowest contrast (5%) can capture low-contrast segments (20%; Fig. 5). The same terminators were unsuccessful at capturing a high-contrast (100%) segment even at our narrowest separation of 0.4° (Fig. 4). All these results are consistent with mechanisms that are driven mainly by absolute contrast, and they provide compelling evidence for low-level interactions (as discussed below).

Fig. 3. Effects of scaling the stimulus by a factor of 1/3 by presenting the stimuli at a threefold increased viewing distance (240 cm). Consequently, the angle subtended by the widths of the lines (0.083°), as well as the stimulus speed (1.67°/s), the aperture diameters (0.53°), and the gaps (0.13°, 0.4°, 0.67°, 1°, 1.5°), all changed. Note the similarity between the resulting curves for the data here and for experiment 1 (using appropriately scaled axes), indicating linear scaling.

Fig. 4. Effect of terminator contrasts on high-contrast (97%) central segments. The data are averages across the three observers for terminator contrasts of 50%, 20%, and 5%. Error bars here represent inter-subject variability expressed as the standard error of the mean. While capture appears to be unaffected when terminator contrast is decreased to 50%, it is noticeably weakened for intermediate (20%) and completely absent for low (5%) feature contrast.

B. Part II: Higher-Order Influences on Motion Integration

The second set of experiments addresses potential higher-order influences, such as object or scene interpretation, on motion integration. In the experiments discussed so far, the interpretation of a very simple object (a single straight line occluded by an invisible aperture mask) was always possible. The aim of the following experiments was to manipulate the stimulus so that the interpretation of a single object became increasingly harder. Altering the relative orientations of the central line segment and the terminators (experiment 4), employing stimuli that do not allow a straightforward interpretation of a single stimulus (experiment 5), and replacing line terminators with dots (experiment 6) all impair simple global object interpretations.

1. Experiment 4: Skewing Relative Orientation

The first of these experiments was designed to evaluate the importance of configuration properties such as collinearity. Computational,21 psychophysical,19 and neurophysiological22 studies provided evidence for mechanisms that are not purely distance dependent. Instead, they suggest that the organization of individual contours relative to each other, such as their collinearity, determines motion integration strength.

All these studies investigated the effects of collinearity by laterally displacing contours of the same orientation, but such experiments inevitably alter the relative position of the contours to each other. The method used here aimed to determine differences in integration for contours displayed at the same positions, in order to evaluate integration strengths between the same neuronal sites. To keep the absolute positions fixed and simultaneously test for figural organization, the relative orientations of the three parts of the line were varied independently.

Collinearity models such as that of Grossberg and Mingolla21 suggest that neurons with particular orientational preferences exhibit stronger interconnections with neighbors that fall along their axis of preference. In support of this, Ben-Av and Shiffrar19 did not find any motion integration taking place for noncollinear, displaced contours. Is capture equally impaired when contours are misoriented?

Fig. 5. Effect of terminator contrasts on low-contrast (20%) central segments. The data are for terminator contrasts of 100%, 20%, and 5%. Capture between high-contrast (100%) terminators and low-contrast (20%) segments is stronger than between three contrast-matching (20%) stimuli. Unlike the results of the condition of a high-contrast central segment portrayed in Fig. 4, here even the lowest-contrast (5%) terminators are able to capture a midcontrast line segment.

The inset in Fig. 6 shows a diagram of the stimulus. The orientation of the central line segment was always 45°. The terminators were oriented at 40°, 35°, 25°, or 0° from the horizontal to introduce misorientations (skews) of 5°, 10°, 20°, and 45°, respectively. Here, as in all other experiments, the location of the end point of the line terminator was centered within the outer apertures halfway through the presentation, regardless of orientation (and hence skew). Therefore the terminator end points always fell on the axis of orientation of the central line segment. Such a configuration is referred to as collinear to reflect this fact.

It is evident that misorientations of up to 10° did not dramatically alter judgment of the direction of motion. Motion capture was observed for small aperture gaps, with the usual shift of perception toward the perpendicular as gap size increased. A 20° difference between the central line segment and the terminators produced a clear impairment in integration over the entire range of aperture separations. Even then, subjects reported a direction biased toward the signal from the terminators. Surprisingly, given the results of misaligned lines in previous studies, a skew as large as 45° was needed to abolish the influence of the translating terminators on the central line.

Fig. 6. Effect of skewing terminator orientation. The graph shows the subject-averaged data for four skews (5°, solid circles; 10°, open circles; 20°, solid squares; and 45°, open squares). Subjects tolerate skews of up to 20° before integration is significantly affected.

Fig. 7. Effect of object interpretation on motion integration. The typical moving-line stimulus is replaced by a truncated stimulus exhibiting the profile of a D6 (peak spatial frequency = 1.7 cpd), where the central segment is contrast reversed with respect to the flanking segments. The subject-averaged data are very close to those obtained with lines (Fig. 2) regardless of stimulus tilt (30° and 60° tilt data are not shown).

The results suggest that lateral interactions between spatially isolated sites may effectively operate if segmented stimuli share similar orientation and direction, possibly up to ±20°, but are ineffective if orientations differ by 45° or more.

The results obtained here are not a consequence of any difficulty in the judgment of alignment versus misalignment. One can easily discriminate when the three disconnected parts of the line are aligned—or skewed by 10°. Hence the visual system apparently integrates signals whether they belong to aligned or somewhat misoriented segments. From a computational point of view, this is certainly a desirable feature. Most natural environments have few, if any, perfectly straight contours. Adjacent but slightly misoriented parts are likely to belong to a single object and should therefore be combined. These results provide a direct quantitative measure of how much misorientation is tolerated in this integration process. Moreover, complementing the results of previous experiments10,12,19 that showed that collinearity between individual parts matters for motion integration, we provide the further insight that there is an important difference between parallel but noncollinear lines on the one hand and collinear but nonparallel lines on the other. Capture can break down as a consequence of misorientation between stimuli even in circumstances where parts are collinear and gaps are small.

2. Experiment 5: Truncated D6 Patterns
A second issue, related to the role of object interpretation, concerns the question of whether motion integration requires the observer to perceive a uniform, continuous object. In real environments, where there may well be occlusions, it would be advantageous for motion integration to be facilitated if a consistent object interpretation were possible and inhibited in the absence of a consistent object interpretation.

To address this point, we changed the cross section of the moving stimulus from a square pulse to a D6. The appearance of a continuous object behind three apertures is then avoided when different contrast polarities are used for individual parts of the stimulus.
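A D6 here denotes the sixth derivative of a Gaussian, a standard band-limited luminance profile (cf. Ref. 16). The following sketch is illustrative and not part of the original study: it constructs a one-dimensional D6 analytically via the sixth Hermite polynomial and checks numerically that its amplitude spectrum peaks at √3/(πσ), so that σ ≈ 0.32° places the peak at the 1.7 cpd used for the central segment. The grid sizes are arbitrary choices.

```python
import numpy as np

def d6_profile(x, sigma):
    """Sixth derivative of exp(-(x/sigma)^2), up to sign and scale.
    Uses d^6/du^6 exp(-u^2) = H_6(u) exp(-u^2), H_6 the physicists' Hermite."""
    u = x / sigma
    h6 = 64 * u**6 - 480 * u**4 + 720 * u**2 - 120
    return h6 * np.exp(-(u**2)) / sigma**6

# |FT| of the D6 is proportional to f^6 * exp(-(pi*sigma*f)^2),
# which is maximal at f = sqrt(3) / (pi * sigma).
sigma = np.sqrt(3) / (np.pi * 1.7)       # deg; chosen so the peak lands at 1.7 cpd
x = np.linspace(-8, 8, 16384)            # deg (grid extent/resolution arbitrary)
spectrum = np.abs(np.fft.rfft(d6_profile(x, sigma)))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles per degree
peak = freqs[np.argmax(spectrum)]
print(f"peak spatial frequency = {peak:.2f} cpd")
```

Truncating such a profile (as in experiments 5 and 7) broadens this spectrum, but its center still tracks σ, which is what later allows the flanker frequency to be manipulated independently of the central segment.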

Data averaged across subjects for this condition (Fig. 7) are close23 to those obtained for lines in experiment 1 (Fig. 2), suggesting that motion integration is not greatly affected whether or not we interpret the stimulus as arising from a continuous, occluded object. This challenges explanations for motion integration based on the figural interpretation of a scene. Rather, it implies that motion integration occurs even in circumstances where different parts of a scene share a common attribute (similar orientation) but lack a straightforward interpretation of a single underlying object. In turn, this indicates that motion integration might be successfully implemented by a low-level process that does not utilize high-level, cognitive interpretations.24

G. Loffler and H. S. Orbach Vol. 20, No. 8 /August 2003 /J. Opt. Soc. Am. A 1467

3. Experiment 6: Effect of Dots on Line Segments
Does the statement at the end of the preceding section also apply in a more general case? Suppose that the stimuli in separate apertures are markedly different with respect to their form and profile and share only common spatial-frequency components. Can we make a stimulus with flanking elements that excite the same cells as terminated lines but where capture is not observed?

In this experiment, we kept the central line segment as in experiments 1–4 but replaced the terminators with dots (circular DOGs) that had a peak spatial frequency equal to the width of the line. For each gap size the center of the dots appeared at the same absolute position as the actual terminations of the lines in the first experiment.

Clearly, the unambiguously translating dots do not capture the central line segment (Fig. 8). The perceived direction was always close to orthogonal to the orientation of the line segment.25 This seems to argue against the fixed, low-level mechanism for motion integration suggested by experiments 4 and 5.

On the surface, the results of the last three experiments appear to support opposite conclusions. On the one hand, the skew and contrast-reversed D6 pattern results argue in favor of low-level mechanisms independent of object interpretation. On the other hand, the lack of capture in the dot experiment could be interpreted as the result of a cognitive, higher-level influence. However, there is a plausible explanation for the apparent paradox. Lateral interactions that are based on collinear facilitation could also explain why line terminators capture line segments but dots do not. This point will be considered further in Section 4.

Fig. 8. Influence of dots on a line segment. Results demonstrate the lack of strong interactions between such dissimilar stimuli (with one subject actually showing, for this 45° case, a weak repulsion from the physical direction of motion). This observation is independent of the tilt of the line segment (30° and 60° tilts are not shown).

4. Experiment 7: Varying the Terminator Spatial Frequency
It is well established that visual inputs are initially analyzed in parallel by filters selectively sensitive to a preferred orientation and spatial frequency of the stimulus (see, e.g., Ref. 26). The final experiment aimed to gain insight into whether and how different spatial-frequency channels contribute to motion capture. Interactions could be restricted within a small band of similar spatial frequencies or spread across widely different spatial-frequency channels. When a truncated stimulus with the profile of a line is used, it is impossible to distinguish between these two options because both the line terminator and the line profile exhibit broad spatial-frequency ranges. How can one distinguish between these alternatives?

Instead of the square-pulse profile of a line, derivatives of Gaussian profiles may be used. Such stimuli are commonly employed to isolate small spatial-frequency bands. However, as in the case of the termination of a line, the truncation of a D6 pattern exhibits a broad spatial-frequency spectrum. Although truncating the stimulus will create a broadband spatial-frequency distribution, changing the spatial frequency of the truncated stimulus will cause this distribution to shift. Hence, if motion integration is strongly dependent on interactions within the same spatial-frequency band, effects such as motion capture might be impaired when the frequency of the truncated parts is altered while the center is kept unchanged.

This is exactly what we found (Fig. 9). The results indicate clearly that altering the spatial frequency of the truncated D6 elements has a significant effect on the way the central D6 pattern is perceived. When the spatial frequencies are equal (1.7-cpd central and flanker frequency), observers reported the usual pattern of motion capture for small gaps shifting into perpendicular perceptions for large gaps.27 Increasing the frequency of the truncated D6 elements (2.8 cpd) results in a diminished bias at intermediate gaps. At a truncation spatial frequency of 5.0 cpd there is an effective loss of motion capture. The complete lack of capture for dissimilar spatial frequencies suggests that integration mechanisms across gaps do not utilize information from widely different spatial-frequency channels.

Fig. 9. Effect of truncation-segment spatial frequency on a 1.7-cpd central D6 segment. A small shift in frequency of the truncation (2.8 cpd) is sufficient to decrease integration noticeably. The influence of the truncations is virtually absent for a frequency of 5 cpd.

4. DISCUSSION
The aim of this study was to determine those circumstances under which the perceived direction of motion of a featureless contour is influenced and determined by adjacent features. In agreement with previous work, we find that the proximity between contour and features plays a crucial role. An additional factor is collinearity. The effect of features is identical whether individual components are perfectly aligned or misoriented by 10°. Stronger orientation differences reduce the effect of features, and even for the closest proximity, motion capture is absent when components are skewed by 45°. The contrast of the features is a third factor. Clearly visible terminators fail to capture a high-contrast line segment when their contrast is reduced to values below 20%. The fourth factor is spatial frequency. By replacing the profile of a line (wide spatial-frequency range) with that of a D6 (limited spatial-frequency bandwidth), we find that the capturing effect of the features decreases with increasing difference between the spatial frequencies of contour and features. A fifth factor is the size of the image. We find that integration scales perfectly (i.e., the range of feature effects decreases with increasing viewing distance), and feature effects decrease with decreasing pattern size. Finally, motion capture exhibits a dependency on the kind of the features. While a terminated line captures a line segment, a single dot does not.

Most of these observations can be explained on the basis of higher-level, cognitive processes. For example, it could be argued that the appearance of a single object covered by an aperture mask suffers when individual parts are of sufficiently different contrast, orientation, or width (in the case of D6 patterns with different spatial frequencies) or when they are of different kind (dots with line segment). Following this line of argument, whenever the stimulus lacks the appearance of a common underlying object, cognitive capture breaks down. However, we believe that such high-level explanations are not required and that, instead, low-level mechanisms can provide an alternative explanation for our data.

The following discussion is centered around the question of whether motion capture in our experiments is the result of low-level mechanisms or high-level cognitive processes. While we cannot and do not want to rule out the existence of high-level influences, we argue that under our experimental conditions, performance can be successfully and satisfactorily explained by low-level computations. These computations do not have to rely on object identification and three-dimensional-scene interpretation. It should be stressed, however, that the main observations in our study are equally important for the understanding of the factors affecting motion integration whether one attempts a low-level or a high-level explanation.

A. Effect of Three-Dimensional Interpretation on Motion Integration
It has long been considered that a crucial task for the visual system is to divide a scene into meaningful objects, especially in the presence of occlusion. In the specific case of object motion, the visual system needs to solve a serious computational problem: to correctly utilize features. Features are often created accidentally, as in the case of occlusion or at points of intersections of overlapping gratings. Cues from such accidental features are often inconsistent with an object’s true motion and are therefore misleading. An intrinsic/extrinsic rule has been proposed according to which the visual system ignores such accidental (extrinsic) features that are not physical properties of an object and relies only on real (intrinsic) terminators.20

It is clear that sorting real from accidental features is a nontrivial task, and this issue has inspired considerable research.21,28–31 While there is no doubt about the different role extrinsic and intrinsic features play in some situations, there is growing evidence against a strict implementation of the scheme proposed by Shimojo et al.20 In particular, there are cases when extrinsic terminators do show an influence on motion perception.28–30 This effect of extrinsic terminators might reflect the visual system’s difficulty in distinguishing between accidental and real features. To appreciate this problem, consider the situation of a static line shown behind an invisible aperture (Fig. 10).

The terminators created by the border of the invisible aperture are not a real feature of the line but are rather the result of occlusion (i.e., they are extrinsic). However, because they lack the typical T-junction appearance of monocular depth cues, they are indistinguishable from real features. A strict application of the intrinsic/extrinsic rule would suggest that such “pseudoreal” intrinsic features, just as real intrinsic terminators, would trigger strong signal propagation. What does this mean for a partly occluded line in a multi-aperture display? In the case of visible, outlined apertures, this rule straightforwardly predicts that the visual system bases its computations on the real-line terminators and ignores those extrinsic terminators accidentally created by the apertures. However, if individual apertures are invisible, the intrinsic/extrinsic rule suggests that the “pseudoreal” intrinsic terminators determine motion within individual apertures and reduce the interactions from real terminators.

Fig. 10. T-junction, “pseudoreal” intrinsic terminator, and real intrinsic terminator. A T-junction is accidentally generated by the intersection of an outlined (in this example, rectangular) aperture with a contour. If the same aperture has no visible border and is cut from a foreground that is identical to the line background inside the aperture, the intersection terminator is still accidental but is indistinguishable from a real terminator (shown on the right). Although a “pseudoreal” intrinsic and a real intrinsic terminator are visually identical, they are not identical physically, as one is the result of occlusion (albeit invisible) and the other an intrinsic part of a line. The terms “pseudoreal” intrinsic and real intrinsic aim to clarify the distinction.

But does a display really have to exhibit cues, such as outlined apertures, supporting the notion of an occluded object, for motion integration to occur? This question can be answered by comparing our results with those from a study that might, on the basis of the intrinsic/extrinsic rule, be expected to give very different results. By outlining individual apertures and also providing binocular depth cues consistent with a partly occluded object, Ben-Av and Shiffrar10 made the associated terminators explicitly extrinsic. In contrast, the design used in our study has six intrinsic terminators, two real-line endings and four “pseudoreal” terminators accidentally created by the intersections of the line with each of the three invisible aperture borders (Fig. 1).32 However, we still observed motion capture under these conditions. Moreover, a comparison of the results of Ben-Av and Shiffrar with those from our experiments reveals strong similarities between these two experimental designs. The space constant (0.7°, experiment 2) of the Gaussian that was fitted to our data matches the 0.7° value calculated from the data in Ben-Av and Shiffrar’s study.
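The exact parameterization of the fitted Gaussian is not reproduced in this section, so the following sketch only illustrates how such a space constant can be extracted. It assumes a hypothetical form in which the deviation from the terminator-defined direction grows from 0° (full capture) toward the 45° perpendicular asymptote with gap size; the data points are invented for illustration, not taken from Fig. 2.

```python
import numpy as np

def capture_model(gap, s, asymptote=45.0):
    """Assumed form: deviation(gap) = 45 * (1 - exp(-(gap/s)^2)),
    rising from full capture (0 deg) toward perpendicular (45 deg)."""
    return asymptote * (1.0 - np.exp(-((gap / s) ** 2)))

# Hypothetical perceived-deviation data (deg) at several aperture gaps (deg).
gaps = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
deviation = np.array([5.0, 17.0, 38.0, 43.5, 44.5, 45.0])

# Least-squares fit of the space constant s by a simple grid search.
s_grid = np.linspace(0.1, 3.0, 2000)
sse = [np.sum((capture_model(gaps, s) - deviation) ** 2) for s in s_grid]
s_fit = s_grid[int(np.argmin(sse))]
print(f"space constant s = {s_fit:.2f} deg")
```

With data shaped like the above, the recovered space constant falls near the 0.7° value quoted in the text; fixing the asymptote at 45° mirrors the procedure mentioned in note 23.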

If it can be assumed that the moving lines in our experiment are comparable to the moving diamond outline used by Ben-Av and Shiffrar, we can conclude that motion integration is just as strong without extrinsic terminators produced by monocular (and binocular) depth cues. Motion capture by real terminators also occurs with intervening “pseudoreal” intrinsic aperture terminators. Moreover, the spatial range for motion capture is similar. This suggests that motion integration mechanisms rely on depth cues to a much lesser extent than has previously been postulated.

It is suggestive that a similar conclusion can be drawn from an examination of the barber-pole illusion. It has been proposed20 that three-dimensional interpretations strongly affect this illusion. However, Mulligan33 has reproduced the basic properties of the illusion using patterns that appear entirely two-dimensional.

That three-dimensional information plays a minor role in motion integration receives further support from a motion model that predicts the psychophysical data presented here by using only the contrast values of the stimuli.34,35 Three-dimensional information in the form of a special treatment for occlusion cues (T-junctions) is not required for this model to make accurate predictions.

In conclusion, we believe that there is an experimentally decidable issue between occlusion theories that hold three-dimensional interpretations as the primary determinant of motion integration and those that consider three-dimensional cues, such as stereo, as minor perturbers of two-dimensional processes. We believe that the balance of experimental evidence supports the latter position.

B. Object Interpretations
A number of experimental manipulations were used to test whether the subject’s interpretation of the scene is crucial for motion capture. We found evidence challenging a decisive higher-order influence on perception. Truncated D6 patterns exhibited the same pattern of responses as did line segments, even though they manifestly lack the appearance of a continuous object covered by an aperture mask. Furthermore, even manipulating the polarity of the different parts of the D6 patterns did not affect capture. Taken together, these results suggest that figural interpretation plays a minor role in the process of motion integration. However, motion capture did not occur between highly dissimilar objects. Significant integration was not observed when single dots were substituted for line endings. There is evidence from other studies both for and against the lack of integration with single dots. Nakayama and Silverman6 reported that dots do not capture the nonrigid perception of a translating curve when they are displaced off the curve. On the other hand, Weiss and Adelson36 found dots capture the otherwise nonrigid perception of a rotating ellipse.

It therefore remains to be seen to what extent motion capture can be explained by a “dumb,” noncognitive integration of motion signals. Whatever the mechanisms, our experiments show that motion integration occurs for some object configurations but not for others, and this must be explained by any model dealing with the integration of motion signals across space. This challenge is addressed in Ref. 35 (this issue), in which we propose low-level mechanisms that predict the reported data without relying on knowledge about underlying objects or on a three-dimensional scene analysis based on monocular depth cues. The model employs lateral excitation between spatially separated sites that is facilitated by collinearity. This computational design enables the model to predict capture of a line segment by line terminators (exhibiting similar orientational preferences) but not by dots.

C. Implications for the Neuronal Hardware underlying Motion Integration
The experimental results can be used to make two important predictions about the detailed structure of a hypothesized low-level neuronal network underlying motion integration. First, the absence of capture for sufficient differences in spatial frequencies between contour and truncations argues for interactions within a small range of spatial-frequency channels. Second, based on this and the observed linear scaling, speculations can be made about the length of the neuronal connections responsible for motion integration across space. These may either scale with the spatial-frequency channel they are associated with or may be of fixed length, independent of spatial frequency. The scaling results support the notion that for different spatial-frequency channels, the arborizations made by cells are set in terms of the number of connecting neighboring neurons. Because of the inverse proportionality of cell spacing with spatial frequency, this would yield the result observed here: Integrations occur over different spatial ranges when the stimulus size is changed.
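The arborization argument above can be put in a loose numerical sketch (all numbers hypothetical): if each cell connects to a fixed count of nearest neighbors and cell spacing is inversely proportional to the channel’s preferred spatial frequency, then the anatomical integration range automatically scales as 1/frequency.

```python
N_NEIGHBORS = 10        # hypothetical fixed arborization per cell
SPACING_CONST = 0.25    # hypothetical: spacing (deg) = SPACING_CONST / f (cpd)

def integration_range(freq_cpd):
    """Spatial range covered by a fixed number of neighbor connections."""
    spacing_deg = SPACING_CONST / freq_cpd   # cell spacing shrinks as 1/frequency
    return N_NEIGHBORS * spacing_deg         # fixed connectivity -> range in deg

for f in (1.7, 2.8, 5.0):
    print(f"{f} cpd channel -> integration range {integration_range(f):.2f} deg")
```

Doubling the stimulus spatial frequency (e.g., halving pattern size or doubling viewing distance) halves this range, which is the scaling of capture with stimulus size reported in the experiments.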

In summary, our experiments suggest a picture of motion integration that is fairly robust with respect to contour orientations, contrasts, and spatial frequencies. We believe that, on balance, this suggests low-level mechanisms rather than higher-level effects in the form of object interpretation or three-dimensional analysis of a scene. There is, of course, the possibility that the stimuli themselves determine the processing level at which they are analyzed. In this vein, it is conceivable that low-level computations are observed with very simple objects such as the lines tested here. More complicated objects may well rely on higher-level processing in addition to the low-level interactions. In any case, the low-level computations observed here with simple lines would certainly serve as a reasonable first stage in the analysis of extended object motion.

We suggest that for motion integration, higher-level descriptions (e.g., Gestalt principles) may therefore be most profitably regarded as resulting from low-level mechanisms.35 We also suggest that, although the intrinsic/extrinsic distinction may well be a useful ecological principle, as a computational goal for the visual system it should not be invoked as a causative explanation, nor is it successfully predictive of a variety of experimental results.

ACKNOWLEDGMENTS
The paper benefited greatly from discussions with Mark Georgeson and Gael E. Gordon. We thank our subjects for participating in the experiments and two reviewers for helpful comments. Grants from the Visual Research Trust, from the Fördergemeinschaft Deutscher Augenoptiker (FDA), and from the Engineering and Physical Sciences Research Council (UK) to G. Loffler supported this research. Initial results of this research were first reported at the Annual Meeting of the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, May 2000.37

Corresponding author G. Loffler can be reached at Department of Vision Sciences, Glasgow Caledonian University, Cowcaddens Road, Glasgow G4 0BA, United Kingdom; e-mail, [email protected].

REFERENCES AND NOTES
1. This paper is concerned with motion in the two-dimensional frontoparallel plane; the term “rigidity” (or coherence) describes the perception of a single rigid object moving in this plane.

2. E. H. Adelson and J. A. Movshon, “Phenomenal coherence of moving visual patterns,” Nature 300, 523–525 (1982).

3. V. P. Ferrera and H. R. Wilson, “Perceived direction of moving two-dimensional patterns,” Vision Res. 30, 273–287 (1990).

4. L. S. Stone, A. B. Watson, and J. B. Mulligan, “Effect of contrast on the perceived direction of a moving plaid,” Vision Res. 30, 1049–1067 (1990).

5. C. Yo and H. R. Wilson, “Perceived direction of moving two-dimensional patterns depends on duration, contrast and eccentricity,” Vision Res. 32, 135–147 (1992).

6. K. Nakayama and G. H. Silverman, “The aperture problem I. Perception of nonrigidity and motion direction in translating sinusoidal lines,” Vision Res. 28, 739–746 (1988).

7. M. Shiffrar and M. Pavel, “Percepts of rigid motion within and across apertures,” J. Exp. Psychol. Hum. Percept. Perform. 17, 749–761 (1991).

8. J. Lorenceau and M. Shiffrar, “The influence of terminators on motion integration across space,” Vision Res. 32, 263–273 (1992).

9. H. Wallach, “Über visuell wahrgenommene Bewegungsrichtung,” Psychol. Forsch. 20, 325–380 (1935).

10. M. B. Ben-Av and M. Shiffrar, “When ambiguous becomes unambiguous,” Invest. Ophthalmol. Visual Sci. 34, 1028 (1993).

11. E. Mingolla, J. T. Todd, and J. F. Norman, “The perception of globally coherent motion,” Vision Res. 32, 1015–1031 (1992).

12. H. S. Orbach and H. R. Wilson, “Fourier and non-Fourier terminators in motion perception,” Invest. Ophthalmol. Visual Sci. 35, 1827 (1994).

13. M. Shiffrar, X. Li, and J. Lorenceau, “Motion integration across differing image features,” Vision Res. 35, 2137–2146 (1995).

14. F. L. Kooi, “Local direction of edge motion causes and abolishes the barberpole illusion,” Vision Res. 33, 2347–2351 (1993).

15. J. Lorenceau, M. Shiffrar, N. Wells, and E. Castet, “Different motion sensitive units are involved in recovering the direction of moving lines,” Vision Res. 33, 1207–1217 (1993).

16. W. H. Swanson, H. R. Wilson, and S. C. Giese, “Contrast matching data predicted from contrast increment thresholds,” Vision Res. 24, 63–75 (1984).

17. G. Loffler and H. S. Orbach, “Anisotropy in judging the absolute direction of motion,” Vision Res. 41, 3677–3692 (2001).

18. To avoid confusion, there is, of course, no “real” motion for a translating line presented on a monitor. Rather, the successive switching on and off of pixels creates the illusion of motion. So when we are talking about the real motion of a rigid line, this should be understood as the motion of the rigid object that could produce the stimulation.

19. M. B. Ben-Av and M. Shiffrar, “Disambiguating velocity estimates across image space,” Vision Res. 35, 2889–2895 (1995).

20. S. Shimojo, G. H. Silverman, and K. Nakayama, “Occlusion and the solution to the aperture problem for motion,” Vision Res. 29, 619–626 (1989).

21. S. Grossberg and E. Mingolla, “Neural dynamics of motion perception: direction fields, apertures, and resonant grouping,” Percept. Psychophys. 53, 248–278 (1993).

22. E. Peterhans and R. Von der Heydt, “Mechanisms of contour perception in monkey visual cortex. 2. Contours bridging gaps,” J. Neurosci. 9, 1749–1763 (1989).

23. The space constants of the fitted Gaussians are 2.8° for the D6 patterns and 2.1° for lines. The (albeit small) difference in space constants would be even further reduced by fixing the asymptotes of the Gaussians to 45° (the expected value of perpendicular motion for an isolated line segment).

24. Obviously, this does not prove that higher-order processes cannot influence motion integration in addition to the low-level processes indicated here.

25. One may argue that a quantitative comparison between this and previous experiments should take the distance between the dot and the closest part of the line segment as a substitute for the inter-aperture gap. However, even given this correction, the results in this experiment still show a bias that is significantly closer to the perpendicular than for line terminators.

26. R. L. DeValois, E. W. Yund, and N. Hepler, “The orientation and direction selectivity of cells in macaque visual cortex,” Vision Res. 22, 531–544 (1982).

27. Note that the 1.7-cpd flanker frequency condition here (Fig. 9) is not identical to the previous experiment on D6 patterns (Fig. 7). In the previous experiment, the contrasts for the central patch and the truncations were equal but had opposite signs. Here the contrasts of the three parts of the display were identical. This gives the impression of a set of aligned, but disconnected, black and white stripes, which might be predicted to appear more coherent than in the case of contrast-alternated stripes used before. The results show that this manipulation does not greatly affect observers’ judgments.

28. E. Castet and S. Wuerger, “Perception of moving lines: interactions between local perpendicular signals and 2D motion signals,” Vision Res. 37, 705–720 (1997).

29. L. Liden and E. Mingolla, “Monocular occlusion cues alter the influence of terminator motion in the barber pole phenomenon,” Vision Res. 38, 3883–3898 (1998).

30. N. Rubin and S. Hochstein, “Isolating the effect of one-dimensional motion signals on the perceived direction of moving 2-dimensional objects,” Vision Res. 33, 1385–1396 (1993).


31. G. Vallortigara and P. Bressan, “Occlusion and the perception of coherent motion,” Vision Res. 31, 1967–1978 (1991).

32. Note that, although the apertures in our experiments were invisible, such “pseudoreal” aperture terminators could conceivably be classified as extrinsic. This is because the apertures are physically absent in the sense of having zero contrast, but, as the line moves, the terminators trace out the shape of the aperture. (This possibility was raised, in conversation, by Mark Georgeson.) The visual system could use this information to classify the terminator as arising from a line occluded by a circular aperture and hence be extrinsic in an elaborated intrinsic/extrinsic classification. To test this, the circular apertures were replaced with invisible rectangles in a control condition. The orientation of the rectangular apertures was perpendicular to the line’s orientation. This eliminated any difference between real and pseudoreal terminators. Nonetheless, the pattern of response was indistinguishable from that presented in Fig. 3, invalidating such a hypothesized modification of the intrinsic/extrinsic rule.

33. J. B. Mulligan, “A continuous version of the barber-pole illusion,” Invest. Ophthalmol. Visual Sci. 32, 829 (1991).

34. G. Loffler, “The integration of motion signals across space,” Ph.D. thesis (Glasgow Caledonian University, Glasgow, UK, 1999).

35. G. Loffler and H. S. Orbach, “Modeling the integration of motion signals across space,” J. Opt. Soc. Am. A 20, 1472–1489 (2003).

36. Y. Weiss and E. H. Adelson, “Integration and segmentation of nonrigid motion,” Invest. Ophthalmol. Visual Sci. 36, S228 (1995).

37. H. S. Orbach and G. Loffler, “What determines motion integration across apertures?” Invest. Ophthalmol. Visual Sci. 41, 2889 (2000).