
www.elsevier.com/locate/visres

Vision Research 47 (2007) 2249–2258

Attentional effects on afterimages: Theory and data

Joshua Wede, Gregory Francis *

Purdue University, Department of Psychological Sciences, 703 Third Street, West Lafayette, IN 47907-2004, United States

Received 27 September 2006; received in revised form 19 March 2007

Abstract

We explore attentional effects on afterimages in the framework of the FACADE model of visual perception. We first show that the FACADE model can account for the experimental findings of Suzuki and Grabowecky [Suzuki, S., & Grabowecky, M. (2003). Attention during adaptation weakens negative afterimages. Journal of Experimental Psychology: Human Perception and Performance, 29, 793–807] that afterimages are weaker when the inducing stimulus is attended. We then analyze the model's behavior with attentional influences on a two-stimulus afterimage studied by Francis and Rothmayer [Francis, G., & Rothmayer, M. (2003). Interactions of afterimages for orientation and color: Experimental data and model simulations. Perception & Psychophysics, 65, 508–522]. The model predicts that attentional focus directed towards the first stimulus has little effect on afterimage strength. In contrast, the model predicts that attentional focus on the second stimulus should increase the strength of the afterimage compared to when attention is focused elsewhere. Moreover, the model predicts that the attentional effects on the second stimulus should vary with time after offset of the second inducing stimulus. All of the model predictions are validated in an experiment. The model and experimental results extend and clarify previous explanations of attentional effects and afterimages.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Afterimage; Attention; Neural network; Filling-in

1. Introduction

Negative afterimages are complementary to the inducing image in both color and brightness. Several studies have shown that these afterimages involve retinal luminance adaptation mechanisms. Loomis (1978) compared adaptation to steady and flickering light, equated for average retinal intensity, and found that flickering light produces a weaker negative afterimage, which rules out receptor bleaching as a general mechanism for negative afterimages. Burbeck (1986) and Burbeck and Kelly (1984) showed that local adaptation is responsible for the formation of afterimages. While the evidence supporting a retinal contribution to negative afterimages is very strong, there is also evidence suggesting an important cortical influence. Virsu and Laurinen (1977) showed that adaptation to a sinusoidal grating alternating in half steps causes a patterned afterimage, even though the temporal average of stimulation of the retina corresponds to a homogeneous field of light. Shimojo, Kamitani, and Nishida (2001) described an afterimage that arises at locations with no retinal stimulation, but instead results from filling-in of luminance or color within illusory contours formed by the inducing stimulus. Consistent with the idea that cortical mechanisms play an important role, Suzuki and Grabowecky (2003) described attentional effects on the strength and duration of negative afterimages.

0042-6989/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.visres.2007.04.024

* Corresponding author. E-mail address: [email protected] (G. Francis).

In two of the Suzuki and Grabowecky (2003) experiments, the inducing stimulus contained two overlapped figures, and subjects were instructed to attend one and ignore the other. In another experiment, the inducer was a single figure, and attention was directed to the inducer or to a central, rapid stream of digits. To direct observers' attention, subjects had to count the number of times a certain feature appeared on the attended stimulus. The strength of the afterimage was measured using subjective ratings and by recording reported onset and offset latencies of afterimages.


Suzuki and Grabowecky (2003) found that afterimages were weaker for attended inducing stimuli. For example, in their second experiment, observers reported that, on average, an unattended inducer produced a visible afterimage 1.5 s after stimulus offset and that this afterimage percept lasted for a total of 4.3 s. In contrast, when observers attended the inducing stimulus, the afterimage did not appear until 1.9 s after stimulus offset. The resulting afterimage also had a shorter duration of 3.5 s. Similar results were found for several different manipulations of attentional focus and measurements of afterimage strength. Although Lou and Chen (2005) reported an opposite effect of attention on afterimages, Tsuchiya (2006) replicated the Suzuki and Grabowecky (2003) finding. The different findings may be related to the stimuli and the experimental measures of afterimage strength.

To explain their results, Suzuki and Grabowecky (2003) proposed that each visual stimulus involves at least three types of adaptation: (a) luminance-based adaptation of retinal cells, (b) adaptation of polarity-selective cortical cells, and (c) adaptation of polarity-independent cortical cells. The first two of these types of adaptation increase the strength of afterimages because such adaptation leads to neural after-responses among cells that code differences in surface appearance. In contrast, adaptation of the polarity-independent cells reduces the strength of the afterimage, because polarity-independent cells cannot represent color or brightness. Suzuki and Grabowecky (2003) proposed that attention facilitated the adaptation of polarity-independent cells, thus minimizing any effect attention might have had on the adaptation of polarity-sensitive cells. Based on different effects of attention on afterimages, Lou (2001) also speculated that the effect of selective visual attention on negative afterimages must arise from polarity-independent mechanisms.

A similar hypothesis about the role of attention in cortical circuits of the visual system was proposed by Raizada and Grossberg (2001, 2003). They showed how attentional signals could be integrated in a laminar model (LAMINART) of visual cortex that deals primarily with polarity-independent orientation-sensitive cells. In this model, attention effects are mediated by a folded feedback mechanism that enhances attended stimuli and suppresses ignored stimuli. The folded feedback mechanism also ensures that attention alone does not generate hallucinatory representations of visual percepts. Raizada and Grossberg demonstrated that their model could account for a variety of psychophysical and neurophysiological data on the effect of attention.

These ideas are consistent with neurophysiological studies that show attention modulates responses of cells in early visual cortex. Single-cell recordings of orientation-selective cells show an increased response to stimuli when attention is directed towards objects within the cell's receptive field (Vidyasagar, 1998). Motter (1993) found a similar increase in activity of orientation-selective cells in areas V1 and V2. McAdams and Reid (2005) have shown that attention modulates the responses of orientation-specific simple cells in area V1.

In the next section we show how the basic ideas on attention proposed by Raizada and Grossberg (2001, 2003) and Suzuki and Grabowecky (2003) would actually work in the framework of a more general model that mechanistically generates afterimages. We then show that the model predicts that attention effects should produce a quite different result for a different type of afterimage. Finally, we test the model predictions.

2. Model analysis

The LAMINART model proposed by Raizada and Grossberg (2001, 2003) is part of a larger theory of visual perception known as FACADE (Form And Color And DEpth) (Grossberg, 1994, 1997). FACADE is a far-reaching theory that has been used to explain a wide variety of visual percepts, including figure-ground distinctions (Grossberg, 1997), brightness perception (Grossberg & Hong, 2006), metacontrast masking (Francis, 1997), neon color spreading (Grossberg & Mingolla, 1985a; Grossberg & Yazdanbakhsh, 2005), and three-dimensional surface perception (Grossberg & Howe, 2003). For the present discussion, we use only those parts of the model that are relevant to a consideration of attentional effects on afterimages. These include the basic distinction between polarity-independent boundary processing and polarity-dependent surface feature processing, the role of attention on boundary processing, the generation of after-responses in these systems, and the properties of a filling-in stage where boundary and feature information is combined to create a visible percept. These model parts are schematized in Fig. 1.

FACADE consists of two major processing streams (Grossberg & Mingolla, 1985a, 1985b). A boundary contour system (BCS) processes boundary or edge information, while a feature contour system (FCS) uses information from the BCS to allow diffusion of surface properties like color and brightness. The BCS detects oriented edges. The FCS uses the BCS information to determine where surface information spreads, and the distribution of activity across the filling-in stage corresponds to the visual percept. Attention from other areas of the brain can influence the activity of cells in the BCS, which can indirectly affect the activity of cells in the FCS.

Embedded within the FACADE architecture are gated dipole circuits (Grossberg, 1972). A gated dipole contains two pathways that compete as signals pass from lower to higher levels. A signal passing through one pathway inhibits a signal passing through the competing pathway. At offset of stimulation, a gated dipole circuit produces a reduction in cross-channel inhibition from the stimulated channel to the unstimulated channel. This reduction in inhibition leads to a rebound of activity in the unstimulated pathway.
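To make the rebound concrete, here is a small illustrative sketch (our own Python toy example, not the authors' simulation code) of a single-location gated dipole. It borrows the habituating-gate parameters listed for the color gated dipole in Appendix A.2; the function and variable names are ours. Driving one channel for five simulated seconds and then removing the input produces a transient response in the opponent channel at offset.

import numpy as np

# Illustrative toy gated dipole at a single location (not the published code).
# Parameters follow the color gated dipole values given in Appendix A.2.
A, B, C, D, J = 2.0, 0.9, 1.0, 0.01, 5.0   # gate dynamics
E, F = 100.0, 0.0004                        # output gain and threshold

def gate_step(g, drive, dt):
    """One Euler step of dg/dt = [A - B*g - C*g*(drive + J)] * D."""
    return g + dt * (A - B * g - C * g * (drive + J)) * D

dt = 0.01
g_on = g_off = A / (B + C * J)              # resting equilibrium of both gates
off_output = []
for t in np.arange(0.0, 8.0, dt):
    drive = 1.0 if t < 5.0 else 0.0         # stimulate the ON channel for 5 s
    g_on = gate_step(g_on, drive, dt)
    g_off = gate_step(g_off, 0.0, dt)
    # Opponent (OFF) channel output: its own gated input minus the gated ON input.
    off_output.append(E * max((0.0 + J) * g_off - (drive + J) * g_on - F, 0.0))

print("OFF output during stimulation:", max(off_output[:500]))      # remains 0
print("OFF rebound just after offset:", round(off_output[501], 2))  # transient after-response

In the full model the same circuit operates at every pixel location and, with different parameters, for orientation signals as well as for color signals.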

Fig. 1. A schematic of the main components of FACADE theory (input image, color gated dipoles, orientation gated dipoles, boundary grouping, attention, and filling-in). The input image feeds into a retinotopic representation of black and white, which compete in a gated dipole circuit. The gated dipole circuit produces complementary after-responses. The black and white information then feeds into edge detection in the Boundary Contour System (BCS), which also contains a gated dipole circuit whose after-responses code orthogonal orientations. The BCS signals are grouped together, and these edge groupings contain the spread of black and white (color and brightness) information at the filling-in stage of the Feature Contour System (FCS).

Fig. 2. Simulations of a negative afterimage that demonstrate the model's responses at three moments in time (Time = 4.75, 5.5, and 8.00; columns show the image, the color gated dipole output, the boundaries, and the filled-in percept). The dominant orientation at each pixel in the third column is indicated by black for vertical and white for horizontal. The first row shows the model's behavior just before offset of the inducing stimulus. The second row shows the model's behavior one-half second after offset of the inducing stimulus. The orientation after-responses do not support the color after-responses at the filling-in stage because the orientations are orthogonal to the arrangement of colors. The third row shows the model's behavior 3 s after offset of the inducing stimulus. The orientation after-responses have faded away, so the color after-responses can now generate their own boundary signals. These boundary signals support the representation of the color after-responses at the filling-in stage.


There are separate gated dipole circuits in the FACADE architecture that code for orientation (in the BCS) and color (in the FCS). Thus, at each pixel location there are two types of after-responses in the model. One codes the opposite color (black/white) and the other codes the opposite orientation (vertical/horizontal). These correspond to the polarity-sensitive and polarity-independent after-responses hypothesized by Suzuki and Grabowecky (2003).

One can better understand the model's dynamic behavior by considering how negative afterimages appear. Consider the five-second presentation and then removal of a simple inducing stimulus, black and white vertical bars on a gray background. Fig. 2 schematizes the model's behavior at different points in time for different rows. Column one of each row shows a representation of the image plane. Column two of each row shows a representation of the output of the gated dipoles that code the model's initial response to the image luminances. Column three of each row shows a representation of the output of the BCS. Here vertical orientations are coded as black/dark gray pixels and horizontal orientations are coded as white/light gray pixels. No boundary response is coded as middle gray. The last column shows the distribution of activities coding bright and dark across the filling-in stage. All of the model calculations are described in detail in Appendix A.

The first row shows the model's behavior just before the inducing stimulus disappeared (5 s). The output of the color gated dipole stage looks very similar to the inducing image. The BCS stage represents the edges of the inducing stimulus. It shows a dense array of vertical (black) edges. The filling-in stage uses the BCS signals to contain the spread of signals from the color gated dipole. In this case, the BCS boundaries combine with the color gated dipole signals to produce a veridical percept of the inducing stimulus.

The second row of Fig. 2 shows the model's behavior one-half second after offset of the inducing stimulus. The output of the color gated dipole stage is the color-negative of the inducing stimulus. The BCS stage has boundaries at the same locations as when the inducing stimulus was present, but of the opposite orientation. Although signals from the color gated dipoles feed into the orientation calculations, the drop in overall input leads to after-responses among the orientation gated dipoles. As a result, the boundaries are the opposite local orientation of the inducing stimulus edges. The orientation after-responses are among horizontally tuned (white) orientation-sensitive cells. At the filling-in stage, the horizontal boundaries allow color signals to spread horizontally but not vertically. As a result, the inputs from the color gated dipole spread horizontally across the filling-in plane. Within each horizontal row the dark and bright signals cancel each other out and there is no visible afterimage.

Fig. 3. Model calculations of afterimage strength as a function of delay (0–7 s) after inducing stimulus offset, with and without attention. Attentional focus on the inducing stimulus leads to a weaker afterimage with a longer delay and shorter duration than the unattended inducing stimulus. See the text and appendix for details of the afterimage strength calculation.

Fig. 4. Simulations of a two-inducer afterimage that demonstrate the model's responses at three moments in time (Time = 1, 2, and 3; columns show the image, the color gated dipole output, the boundaries, and the filled-in percept). The dominant orientation at each pixel in the third column is indicated by black for vertical and white for horizontal. The first row shows the model's behavior just before offset of S1. The second row shows the model's behavior just before offset of S2. The third row shows the model's behavior 1 s after offset of S2. The orientation after-responses from S2 support the representation of the color after-responses from S1 at the filling-in stage.

The third row of Fig. 2 shows the model's behavior 3 s after offset of the inducing stimulus. The output of the color gated dipole stage is again the color-negative of the inducing stimulus, although it is slowly fading in strength. The orientation gated dipoles adapt and re-adapt more quickly than the color gated dipoles (Wede & Francis, 2006), and as the adaptation effects disappear among the orientation gated dipoles, the polarity contrasts from the color gated dipoles are now able to establish their own boundaries. As a result, the boundaries are now the same orientations as the inducing stimulus edges. At the filling-in stage, the situation is similar to the inducing image, but now the color signals that are trapped by the boundaries are those generated by after-responses of the color gated dipoles. The pattern of activities at the filling-in stage corresponds to the visible negative afterimage.

Thus, the orientation after-responses at offset of the inducing stimulus are unable to support the color after-responses at the filling-in stage. The appearance of a negative color afterimage in the FACADE model requires that the color after-responses generate their own boundary signals. Because the color after-responses are weak, they cannot establish their own boundaries until the orientation after-responses have faded.

These properties explain why attentional focus on the inducing stimulus leads to a weaker and delayed afterimage. Attentional focus on the inducing stimulus generates stronger orientation signals, which leads to greater adaptation and stronger orientation after-responses. With greater adaptation, it takes longer for the orientation after-responses to fade and thus for the color after-responses to establish their own boundaries.

Fig. 3 plots the strength of afterimage signals at the filling-in stage of the model as a function of time after inducer offset. The two curves are for with and without attentional focus on the inducer. Consistent with the description given above, attentional focus on the inducing stimulus leads to a weaker model afterimage that appears later and lasts for a shorter duration. The model's explanation is conceptually similar to the explanation proposed by Suzuki and Grabowecky (2003). Importantly, the model elaborates on the previous explanation both by giving a precise description of the mechanisms that are involved and by describing how the various after-responses interact with each other at a filling-in stage to produce visible percepts and afterimages.

Moreover, the model makes a novel prediction of the effect of attention in the context of an afterimage that is produced by a sequence of inducing stimuli. Francis and Rothmayer (2003) and Vidyasagar et al. (1999) studied afterimages that appear after sequential viewing of two orthogonally related inducing stimuli. When observers viewed orthogonally related inducing bar gratings, they reported seeing an afterimage that looked similar to the inducer presented first. In contrast, if the inducers were of the same orientation, for example if both were horizontal gratings, observers reported few afterimages.

Francis and Rothmayer (2003) showed that the FACADE model could explain the appearance of this kind of afterimage. Fig. 4 shows the model's behavior in response to the sequence of inducing stimuli. The first row shows the model's response at the end of the first inducing stimulus (S1), a vertical bar grating presented for 1 s. The color gated dipoles accurately code the stimulus colors. The BCS boundaries are a mass of vertical (black) signals, corresponding to the dense vertical bars of the grating. At the filling-in stage, the vertical boundaries separate the black and white columns of the grating to produce a veridical percept.

Fig. 5. Effects of attentional focus on S1 and S2 as a function of response delay (1–9 s) after offset of S2. (a) Model predictions (afterimage strength); see the text and appendix for details of the afterimage strength calculation. (b) Results from the experiment (percentage of afterimage reports). Curves show the different combinations of attentional focus toward or away from S1 and S2.

Row 2 of Fig. 4 shows similar model behavior for the second inducing stimulus (S2), a horizontal bar grating presented for 1 s. Although there are also color and orientation after-responses due to offset of S1, such small effects are not visible in Fig. 4 when shown with the stronger responses to S2.

The most interesting behavior is at offset of S2, which is shown in row 3 of Fig. 4. The color gated dipoles now show a mix of after-responses from both S1 and S2. There is a mix of after-responses because the color gated dipoles operate at a relatively slow rate. In contrast, the orientation gated dipoles operate at a faster time scale (Francis, Grossberg, & Mingolla, 1994; Wede & Francis, 2006) and so show only after-responses from S2. Since S2 generated horizontal orientation signals, the orientation after-responses are among vertically tuned orientation cells. At the filling-in stage, the vertical boundary signals allow color signals to spread vertically, but prevent the signals from spreading horizontally. As a result, the color gated dipole after-responses spread vertically and generate a visible afterimage of a vertical bar grating. This result agrees with observers' reports.

An important characteristic of the two-sequence afterimage is that the orientation after-responses from S2 provide the boundary signals needed by the color after-responses from S1 to produce a filled-in afterimage percept. Unlike for a negative afterimage, the color after-responses do not need to overcome orientation after-responses and produce their own boundaries before producing a filled-in afterimage percept. Francis and Rothmayer (2003) further showed that if S1 and S2 have the same orientation or if S2 is a blank stimulus, then no afterimage is generated at the filling-in stage of the model. This behavior was consistent with experimental data on these afterimages.

We now explore the effect of attention to S1 and S2 on the appearance of these afterimages. If the FACADE explanation of the Suzuki and Grabowecky (2003) data is correct, and attention increases the strength of orientation after-responses, then the model predicts different effects for attentional focus on S1 and S2 for the two-sequence afterimages. In particular, attentional focus on the bar grating of S1 will lead to stronger orientation after-responses, but because the orientation gated dipoles adapt and re-adapt fairly quickly, any such effects will not last through the presentation of S2 to influence the visibility of the afterimage. On the other hand, attentional focus on the bar grating of S2 will increase the strength of the oriented after-responses to S2. With stronger boundaries, the filling-in stage will better separate the spread of color gated dipole signals into vertical columns. Thus, the afterimage percept should be more visible when attention is focused on S2.

Fig. 5a plots the strength of the model afterimage as a function of the duration between offset of S2 and report of the afterimage (response delay). The different curves are for different conditions of attentional focus. Attention could be focused on both S1 and S2, S1 but not S2, S2 but not S1, or neither S1 nor S2. For all conditions, increasing the delay between S2 and afterimage report leads to a decrease in afterimage strength: the afterimage fades with time. Wede and Francis (2006) provided empirical support for the dynamics of the model. More importantly, the model predicts different attentional effects for S1 and S2. When attention is focused on S2 (squares) the afterimage strength is stronger than when it was not (circles), regardless of whether attention was focused on S1 or not. The predicted magnitude of the attention effect on S2 varies with the response delay, with the biggest differences occurring between 5 and 8 s. Compared to the effects on S2, any attention effects on S1 are quite small and do not vary much with the response delay.

So, the FACADE model explains why attentional focus weakens afterimage percepts for inducing stimuli like those used by Suzuki and Grabowecky (2003), but it also predicts quite different attention effects for inducing stimuli like those used by Francis and Rothmayer (2003). The next section describes an experiment that tests the model's predictions.

Fig. 6. A schematic of one of the trials in the experiment: S1, then the flickering S2, then a response delay, and finally the mask.

3. Experimental test of model predictions

3.1. Method

3.1.1. Observers

One hundred and eighty naive undergraduates from the experimental subject pool at Purdue University participated for course credit. All reported normal or corrected-to-normal vision.

3.1.2. Stimuli

S1 was a vertical bar grating that consisted of 10 black (0.3 cd/m²) and 10 white (97 cd/m²) bars on a gray background (19 cd/m²) and was presented to the subject for 1 s. Each bar was 400 pixels long (approximately 6.7° of visual arc) and 20 pixels wide (approximately 20 min of visual arc). Attentional focus towards or away from S1 was manipulated by a superimposed attention task. For drawing attention away from S1, a sequence of digits ranging from 0 to 4 was presented at the center of the grating. The digit was changed every 125 ms, with the constraint that the same digit was not presented two times consecutively. The height of the digits was approximately 1.1° of visual arc and the width was approximately 54 min of visual arc. A total of eight digits was presented. To focus attention towards S1, one of the ten dark bars changed color every 250 ms, with the constraint that the same bar would not change color consecutively. The bar could be red (1.4 cd/m²), green (3.6 cd/m²), or blue (1.0 cd/m²), and the same color was not presented two times in a row. Four bars randomly changed color during the presentation of the first stimulus. These procedures are similar to the methods used by Suzuki and Grabowecky (2003) for drawing attention away from or toward an inducing stimulus.

S2 consisted of 10 black and 10 white horizontal bars. The grating flickered with its color complement every 125 ms for a total of 1 s (this was also true in the simulations that generated Figs. 4 and 5a). Digits ranging from 0 to 4 were presented in the middle of S2 and were randomly switched every 125 ms with the constraint that no digit was presented twice in a row. One of the dark bars randomly changed color every 250 ms. For example, if the fourth dark bar from the top changed to red, then when the grating flickered with its complement, which occurred every 125 ms, the fourth dark bar from the top would stay red, for a total display time of 250 ms. The bar could be red, green, or blue, with the same constraints as the first stimulus.
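As an illustration only, a stream with the no-immediate-repeat constraint used for both the digit task and the color-change task could be generated along the following lines (a hedged sketch; the function name is ours and not part of the experiment code, which was written in MATLAB):

import random

def constrained_stream(options, n, rng=random):
    """Sample n items, never repeating the same item twice in a row (the
    constraint described for the digit stream and the color-change sequence)."""
    seq = [rng.choice(options)]
    while len(seq) < n:
        seq.append(rng.choice([o for o in options if o != seq[-1]]))
    return seq

digits = constrained_stream([0, 1, 2, 3, 4], 8)           # one digit per 125 ms over 1 s
colors = constrained_stream(["red", "green", "blue"], 4)  # one bar-color change per 250 ms
print(digits, colors)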

After a response delay of 2, 4, or 6 s, the screen showed a mask of random dot noise that was the same size as S1 and S2. This mask was a cue for the observer to report any afterimages. Fig. 6 schematizes the stimulus sequence for one trial. The digit stream and color-changing bars were present on every trial, but observers were given instructions that focused attention toward the inducing stimuli (track the color-changing bars) or away from the inducing stimuli (track the digit stream).

Observers were seated 42 cm from the computer monitor. Each observer's head was placed in a headrest to minimize head movements. All stimuli were created and displayed with MATLAB, using the Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997), on a PC running Microsoft Windows XP with a monitor that refreshed at 85 Hz.

3.1.3. Procedure

Observers were instructed to pay attention to either the changing numbers or the changing colors, or neither. Attentional focus toward S1 or S2 was controlled by having observers count the number of times they saw a green bar on that stimulus. Attentional focus away from S1 or S2 was controlled by having observers count the number of times they saw a zero appear among the stream of digits for that stimulus. Each observer was assigned to one of five conditions that varied the attentional focus tasks. In one condition, the observer was given the tasks that focused attention on both S1 and S2. In the second condition, the observer was to focus attention toward S1 but away from S2. In the third condition, the observer was to focus attention away from S1 but toward S2. In the fourth condition, the observer was to focus attention away from both S1 and S2. In the fifth (control) condition, observers were instructed to ignore the changing digits and colors. In pilot work, observers found it difficult to switch their task assignments, so the attentional tasks condition was set up as a between-subjects variable (with 36 observers in each condition). An observer viewed ten trials with each of the three response delay durations for a total of thirty trials. The different trials were randomly ordered. Observers worked through the experiment at their own pace, with a key press starting each trial.

The appearance of the random dot mask marked the end of each trial. The observer then pressed a key to report on any seen afterimage. The observer was given three choices, "nothing", "vertical", or "horizontal", and was told to report what they saw at the moment right before the mask appeared. Observers were told to report "nothing" only if they saw no afterimage at all. If they saw any afterimage, they were to choose whether it was best described as vertical or horizontal.

After entering a description of any afterimage, the observer was asked to enter the number of times they saw zeros or green lines, according to their attention tasks. The observer was always to report on the task associated with S1 and then to report on the task associated with S2. In the control condition, these questions were not asked. A minimum 15 s delay was introduced between trials to reduce any cross-trial effects. The experiment was self-paced and took about 20 min to complete.

At the start of the experiment, the observer completed 6 practice trials, with feedback given on the actual number of zeros and/or green lines, as appropriate for the attention tasks. There was no feedback given in the control condition or for any of the experimental trials.

3.2. Results

Before describing the afterimage reports, we briefly summarize performance on the attention tasks. Across all conditions and stimuli, observers were 78% correct at reporting the number of zeros in the digit stream and 84% correct at reporting the number of green lines on the bar gratings. The results from an ANOVA show no differences in percentage correct between the attention task conditions, F(1, 140) = 1.84, p = N.S. Observers' counts were off by more than two on 3.2% and 2.4% of the trials for the digit sequence and color sequence tasks, respectively.

Overall, performance on the attention tasks suggests that the tasks were challenging but not so difficult as to be frustrating for observers to complete. It remains possible that attentional focus was misplaced on some trials, but this was apparently rare if it occurred at all. In the following summary of the results, we included all trials regardless of whether or not the observer was correct on the attention tasks. In a separate analysis, we excluded all trials where an observer's count was off by more than two on either of the attention tasks. This exclusion did not substantively change any of the results and did not change any of the statistical conclusions.

Reports of afterimages were almost always either "vertical" or "nothing". Reports of a horizontal afterimage were given on only 3.5% of the trials across all conditions. There were no significant differences in reports of seeing a horizontal afterimage across the attention task conditions or response delay durations.

Fig. 5b plots the percentage of trials where observers reported seeing a vertical afterimage against response delay duration. Error bars indicate plus and minus one standard error. The curves are labeled as in Fig. 5a, with conditions that directed attentional focus toward S2 being drawn with square icons and conditions that directed attentional focus away from S2 being drawn with circle icons. Likewise, conditions that directed attentional focus to S1 are drawn with open icons, while conditions that directed attentional focus away from S1 are drawn with filled icons. The control condition (with no explicit direction of attentional focus) is drawn with filled triangle icons.

All of the model predictions are validated by the empirical data. Reports of the afterimage decreased as response delay duration increased (F(2, 175) = 174.00, p ≤ 0.0001). There was also a significant difference across the attentional conditions (F(4, 175) = 3.933, p ≤ 0.001). The interaction between condition and response delay duration was also significant (F(4, 2, 173) = 3.680, p ≤ 0.002), as can be seen by the different slopes of the lines in Fig. 5b.

A 2 × 2 ANOVA investigating differences between attentional focus toward or away from S1 and toward or away from S2 showed a significant effect of attentional focus for S2 (F(1, 140) = 10.147, p ≤ 0.001) but not for S1 (F(1, 140) = .142, p = N.S.). The interaction was not significant (F(1, 1, 140) = .771, p = N.S.). All of these conclusions can be summarized as in agreement with the model predictions. Attentional focus on S2 increases the probability of observers seeing an afterimage compared to when attentional focus is away from S2. In contrast, attentional focus towards or away from S1 has little or no effect on the probability of seeing an afterimage.

A more global view of the relationship between the model and empirical data is evident by noting that across attention tasks and response delay durations, the correlation coefficient between the model and data is r = 0.97.

4. Conclusions

Our analysis of the FACADE model explains why attentional focus to an inducing stimulus weakens the resulting negative afterimage (Suzuki & Grabowecky, 2003). The model proposes that there are two kinds of after-responses in the visual system: one for color/brightness and one for orientation. A negative afterimage can appear only when the color after-responses are strong enough to generate boundary signals that support the afterimage representation at the model filling-in stage. The orientation after-responses cannot support the afterimage representation in the filling-in stage, so the afterimage can appear only after the orientation after-responses fade. Attentional focus on the inducing stimulus strengthens the orientation after-responses, so the afterimage appearance is delayed and weaker.

Our explanation elaborates and clarifies the explanation given by Suzuki and Grabowecky (2003). They also hypothesized several types of interacting neural after-responses, but they did not explain why or how one after-response would interfere with the other. Our explanation is also consistent with other modeling of attentional effects within the FACADE family of models (Raizada & Grossberg, 2001, 2003), where attention has its direct influence on the representation of orientationally tuned cells.

Further analysis of the model revealed that it predicted an opposite effect of attentional focus for the inducing stimuli used by Francis and Rothmayer (2003). Here the color after-responses of the first inducer can be supported by the orientation after-responses of the second inducer. Attentional focus on S2 increases the strength of these facilitatory orientation after-responses, thereby increasing the strength of the afterimage appearance. The results of the experimental study matched the predicted pattern of results quite well.

Overall, the model analysis and experimental results provide further evidence for the idea that a primary effect of attention is to strengthen representations of information in polarity-independent orientation-sensitive pathways. The analysis and experimental results also support the basic principles of FACADE theory: that the visual system involves separate pathways for color/brightness and orientation that interact at a filling-in stage.

Appendix A. Simulations

A.1. Input

Each pixel (i, j) had an input value $I_{ij}$. All images used intensities between −1 (black) and +1 (white), with 0 indicating middle gray. A 128 by 128 pixel plane was used in all simulated images. Each bar grating image was 88 by 88 pixels on a gray background. The thickness of each black or white bar was 2 pixels. The first (vertical) and second (horizontal) stimuli were presented for one simulated time unit (equivalent to 1 s) each. The second grating flickered (black and white values changed places) eight times.
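For concreteness, an input of this kind could be constructed as follows (an illustrative Python/NumPy sketch under the stated dimensions, not the authors' code; the helper name `grating` is ours):

import numpy as np

def grating(orientation="vertical", size=128, extent=88, bar=2):
    """Bar-grating input: alternating +1/-1 bars (2 px thick) on a 0 (gray) field."""
    img = np.zeros((size, size))
    start = (size - extent) // 2
    stripes = np.where((np.arange(extent) // bar) % 2 == 0, 1.0, -1.0)
    patch = np.tile(stripes, (extent, 1))        # varies along columns: vertical bars
    if orientation == "horizontal":
        patch = patch.T
    img[start:start + extent, start:start + extent] = patch
    return img

S1 = grating("vertical")      # first inducer
S2 = grating("horizontal")    # second inducer; its flicker amounts to multiplying by -1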

A.2. Color gated dipole

The input value $I_{ij}$ fed into the habituating gate of the color gated dipole. Black and white signals were sent through competing channels and habituation of the gate occurred. The signals were calculated pixel by pixel, and the calculation for the habituating gate of the white channel at pixel (i, j), $g_{ij}$, obeyed the differential equation:

$$\frac{dg_{ij}}{dt} = \left[ A - Bg_{ij} - Cg_{ij}\left( [I_{ij}]^{+} + J \right) \right] D \qquad (1)$$

The term $A - Bg_{ij}$ allows the gate to increase to the value $A/B$. The last term describes how the habituating gate is depleted by a tonic signal $J$ and by the white input signal, $[I_{ij}]^{+}$. The notation $[\,\cdot\,]^{+}$ indicates that any values below zero are set to zero. Parameter $D$ controls the overall rate of change. The black opponent pathway was identical except that $[I_{ij}]^{+}$ was changed to $[-I_{ij}]^{+}$ so only black signals would pass through the gate. The initial values of the gates, corresponding to the equilibrium of the gates with no outside input, are:

$$g_{ij}(0) = G_{ij}(0) = \frac{A}{B + CJ} \qquad (2)$$

where $g_{ij}$ denotes the white gate and $G_{ij}$ denotes the black gate. The parameters were set as A = 2.0, B = 0.9, C = 1, D = 0.01, and J = 5.0.

The output of the white color gated dipole was calculated by multiplying the total input into the white channel by the habituating gate and subtracting the same value computed for the black input. This difference was thresholded and multiplied by a scaling factor. The white output, $w_{ij}$, was computed as:

$$w_{ij} = E\left[ \left( [I_{ij}]^{+} + J \right) g_{ij} - \left( [-I_{ij}]^{+} + J \right) G_{ij} - F \right]^{+} \qquad (3)$$

Here, F = 0.0004 is a threshold. Any negative values are set to zero. After the difference is rectified, it is scaled by the multiplying term E = 100. The output of the black gated dipole, $b_{ij}$, was similar except that the middle terms trade excitatory and inhibitory roles. The value $w_{ij} - b_{ij}$ was plotted in the second column of Figs. 2 and 4.
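The following sketch shows one way Eqs. (1)–(3) could be realized on the image plane (illustrative Python/NumPy only, not the authors' code; an explicit Euler step is assumed, consistent with the integration described in Appendix A.3.3):

import numpy as np

# Parameters of the color gated dipole (Appendix A.2).
A, B, C, D, J = 2.0, 0.9, 1.0, 0.01, 5.0
E, F = 100.0, 0.0004

def rect(x):
    """The []+ operation: values below zero are set to zero."""
    return np.maximum(x, 0.0)

def gate_step(g, drive, dt):
    """Euler step of Eq. (1): dg/dt = [A - B*g - C*g*(drive + J)] * D."""
    return g + dt * (A - B * g - C * g * (drive + J)) * D

def color_dipole_outputs(I, g_white, g_black):
    """Eq. (3) for the white output and its black counterpart."""
    w = E * rect((rect(I) + J) * g_white - (rect(-I) + J) * g_black - F)
    b = E * rect((rect(-I) + J) * g_black - (rect(I) + J) * g_white - F)
    return w, b

# Example: start the gates at the Eq. (2) equilibrium and advance one time step.
I = np.zeros((128, 128))                      # input image plane (see A.1)
g_white = np.full_like(I, A / (B + C * J))
g_black = g_white.copy()
g_white = gate_step(g_white, rect(I), 0.01)
g_black = gate_step(g_black, rect(-I), 0.01)
w, b = color_dipole_outputs(I, g_white, g_black)   # w - b is what Figs. 2 and 4 display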

A.3. Boundary contour system

A.3.1. Edge detection

The outputs from the color gated dipoles, $w_{ij}$ and $b_{ij}$, were sent to the BCS for edge detection. Detectors looked for changes in luminance intensity in a vertical or horizontal direction. The response of the detector was defined as the absolute value of this change. Thus a boundary cell at position (i, j) tuned to a vertical edge had an activity:

$$y_{ij} = \left[ |w_{ij} - w_{i-1,j}| + |w_{ij} - w_{i+1,j}| + |b_{ij} - b_{i-1,j}| + |b_{ij} - b_{i+1,j}| - K \right]^{+} \qquad (4)$$

This receptive field looks to the left and right of the edge location for any differences in color between itself and its neighbor, indicating that a vertical edge is present. The term K = 4 indicates a threshold. Any values below K were set to zero. A similar value, $Y_{ij}$, was computed for the horizontally tuned boundary cells.
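A vectorized version of Eq. (4) and its horizontal counterpart might look like the following (illustrative only; the wrap-around border handling via np.roll is our simplification):

import numpy as np

K = 4.0  # edge-detection threshold

def edge_responses(w, b, K=K):
    """Vertical (y) and horizontal (Y) edge responses per Eq. (4)."""
    def d(a, axis, shift):
        return np.abs(a - np.roll(a, shift, axis=axis))
    # A vertical edge compares each pixel with its left and right neighbours (axis 1).
    y = np.maximum(d(w, 1, 1) + d(w, 1, -1) + d(b, 1, 1) + d(b, 1, -1) - K, 0.0)
    # A horizontal edge compares each pixel with the neighbours above and below (axis 0).
    Y = np.maximum(d(w, 0, 1) + d(w, 0, -1) + d(b, 0, 1) + d(b, 0, -1) - K, 0.0)
    # Attention (Appendix A.3.2) would simply multiply y and Y by 2 for an attended stimulus.
    return y, Y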

A.3.2. Attention

Attention was modeled by multiplying the output of the edge detectors. This increased signal fed into the oriented gated dipole of the BCS. When attention was focused toward a stimulus, the responses from the edge detectors were multiplied by 2.


A.3.3. Orientation gated dipole

The equations for the BCS oriented gated dipole had the same form as those used for the color gated dipole. The difference between the equations is that the input $[-I_{ij}]$ in the color gated dipole is replaced by the vertical edge output, $y_{ij}$. The parameter values for the orientation gated dipoles are A = 1.0, B = 5, D = 0.02, J = 10, E = 10, and F = 8. The equation for the habituating gate for the vertically tuned cell is:

$$\frac{dg_{ij}}{dt} = \left[ A - Bg_{ij} - Cg_{ij}\left( y_{ij} + J \right) \right] D \qquad (5)$$

The output for a vertically tuned orientation gated dipole followed the equation:

$$x_{ij} = E\left[ \left( y_{ij} + J \right) g_{ij} - \left( Y_{ij} + J \right) G_{ij} - F \right]^{+} \qquad (6)$$

Here, $x_{ij}$ refers to the output of the orientation gated dipole for a vertically tuned cell, $y_{ij}$ is the response from a vertical edge detector, and $Y_{ij}$ is the response from a horizontal edge detector.

The differential equations for the color and orientation habituating gates were solved simultaneously, since the outputs from the color gated dipole feed into the orientation detectors and the output from the orientation detectors feeds into the habituative gates for the orientation gated dipole. The equations were solved with Euler's method with a step size of 0.01 time units.
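To show how the stages are integrated together, the sketch below condenses the pieces above into a single Euler loop (illustrative only, not the authors' code; parameter values follow Appendices A.2 and A.3.3, and we assume C = 1 for the orientation dipole since no other value is stated):

import numpy as np

rect = lambda x: np.maximum(x, 0.0)

def gate_step(g, drive, dt, A, B, C, D, J):
    return g + dt * (A - B * g - C * g * (drive + J)) * D

def dipole_out(p, q, gp, gq, E, F, J):
    return E * rect((p + J) * gp - (q + J) * gq - F)

def edge(w, b, axis, K=4.0):
    d = lambda a, s: np.abs(a - np.roll(a, s, axis=axis))
    return rect(d(w, 1) + d(w, -1) + d(b, 1) + d(b, -1) - K)

size, dt = 128, 0.01
I = np.zeros((size, size))                                             # replace with the grating input
cA, cB, cC, cD, cJ, cE, cF = 2.0, 0.9, 1.0, 0.01, 5.0, 100.0, 0.0004   # color dipole (A.2)
oA, oB, oC, oD, oJ, oE, oF = 1.0, 5.0, 1.0, 0.02, 10.0, 10.0, 8.0      # orientation dipole (A.3.3)
gw = gb = np.full((size, size), cA / (cB + cC * cJ))                   # color gates at equilibrium
gv = gh = np.full((size, size), oA / (oB + oC * oJ))                   # orientation gates at equilibrium

for _ in range(int(1.0 / dt)):                               # one simulated second
    # 1. Color gated dipole: habituate the gates, then compute the w and b outputs.
    gw = gate_step(gw, rect(I), dt, cA, cB, cC, cD, cJ)
    gb = gate_step(gb, rect(-I), dt, cA, cB, cC, cD, cJ)
    w = dipole_out(rect(I), rect(-I), gw, gb, cE, cF, cJ)
    b = dipole_out(rect(-I), rect(I), gb, gw, cE, cF, cJ)
    # 2. Edge detection on the color outputs (attention would multiply these by 2).
    y, Y = edge(w, b, axis=1), edge(w, b, axis=0)
    # 3. Orientation gated dipole: habituate on the edge signals, then compute outputs.
    gv = gate_step(gv, y, dt, oA, oB, oC, oD, oJ)
    gh = gate_step(gh, Y, dt, oA, oB, oC, oD, oJ)
    x = dipole_out(y, Y, gv, gh, oE, oF, oJ)                 # vertical boundaries, Eq. (6)
    X = dipole_out(Y, y, gh, gv, oE, oF, oJ)                 # horizontal boundaries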

A.3.4. Boundary grouping

Signals in the BCS were grouped by bipole cells that receive excitation from cells with the same orientation and inhibition from cells with the orthogonal orientation. A vertically tuned bipole cell received excitation from vertically tuned gated dipole cells and received inhibition from horizontally tuned gated dipole cells. A vertical bipole cell had two sides (Up and Down) that summed information from locations above (Up) or below (Down) the bipole cell location. Intermediate terms are defined as follows:

$$\mathrm{Up}_{ij} = \left[ \sum_{k=0}^{M} \left( x_{i,j-k} - X_{i,j-k} \right) \right]^{+} \qquad (7)$$

and

$$\mathrm{Down}_{ij} = \left[ \sum_{k=0}^{M} \left( x_{i,j+k} - X_{i,j+k} \right) \right]^{+} \qquad (8)$$

where $x_{ij}$ and $X_{ij}$ refer to the output of the orientation gated dipole for vertically and horizontally tuned cells, respectively. The number of cells a bipole cell combines in each direction is M = 10. A bipole cell has positive activity as long as the two intermediate terms are greater than zero, or if the bottom-up edge detection information at the bipole location and one of the intermediate terms are non-zero. If two of the three inputs feeding into the bipole cell are zero, then the activity of the bipole cell is set equal to zero. If two of the three inputs have a positive value, then the activity of the vertical bipole cell at pixel (i, j) is:

$$B_{ij} = \mathrm{Up}_{ij} + \mathrm{Down}_{ij} + x_{ij} \qquad (9)$$

If at least two of the three inputs are not positive, the value $B_{ij}$ is set equal to zero. The equations for the horizontally tuned bipole cell are defined similarly. The horizontal bipole cell receives excitation from other horizontally tuned cells within its reach and inhibition from vertically tuned cells within its reach. Raizada and Grossberg (2001, 2003) describe how these sorts of calculations can be computed among laminar circuits of visual cortex.

A winner-take-all competition across orientation was included at each pixel location. The orientation bipole cell with the largest value remained positive while the bipole cell of the other orientation was set to zero. The value of each orientation bipole cell at each pixel location was the output of the BCS and was plotted in the third column of Figs. 2 and 4. A more elaborate version of this type of competition can be found in Grossberg, Mingolla, and Ross (1997).
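Eqs. (7)–(9) and the winner-take-all step could be sketched as follows (illustrative only; the wrap-around behavior of np.roll near the image border and the tie-breaking rule are our simplifications):

import numpy as np

M = 10  # bipole reach in each direction

def vertical_bipole(x, X, M=M):
    """Eqs. (7)-(9): same-orientation excitation minus cross-orientation inhibition,
    summed above (Up) and below (Down) each pixel; activity requires that at least
    two of {Up, Down, x} are positive (x is used as the bottom-up term)."""
    diff = x - X
    up = np.zeros_like(x)
    down = np.zeros_like(x)
    for k in range(M + 1):
        up += np.roll(diff, k, axis=0)      # contributions from locations above
        down += np.roll(diff, -k, axis=0)   # contributions from locations below
    up, down = np.maximum(up, 0.0), np.maximum(down, 0.0)
    support = (up > 0).astype(int) + (down > 0).astype(int) + (x > 0).astype(int)
    return np.where(support >= 2, up + down + x, 0.0)

def winner_take_all(Bv, Bh):
    """Keep only the stronger orientation at each pixel (ties broken toward vertical)."""
    return np.where(Bv >= Bh, Bv, 0.0), np.where(Bh > Bv, Bh, 0.0)

The horizontal bipole would be defined the same way, with the roles of x and X exchanged and the summation running along rows instead of columns.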

A.4. Brightness filling-in

Filling-in of brightness information used the method described by Francis and Rothmayer (2003). The regions that are fully connected by boundary signals were computed, and the brightness value of each pixel in the region, $S_{ij}$, was set to be the average of the $w_{ij} - b_{ij}$ inputs of the fully connected region. The strength of the filled-in afterimage (if any) was measured as the average absolute value of $S_{ij}$ across the entire image plane at the moment of the cue to report.

A variable threshold was introduced to the boundaries. The threshold was added in the same manner as in Wede and Francis (2006). The boundary values that were used to identify separate regions in the filling-in stage were:

$$\left[ V_{ij} - T \right]^{+} \qquad (10)$$

and

$$\left[ H_{ij} - T \right]^{+} \qquad (11)$$

for vertical and horizontal signals, respectively. The threshold parameter T was varied systematically from the value 2.0 to the value 6.0 in steps of 0.4. For each threshold value, the afterimage strength after filling-in was measured, and the average afterimage strength across all threshold values was computed. The average afterimage strength is reported in Figs. 3 and 5a.
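The region computation is not spelled out in full detail here, so the following sketch is only one plausible reading of it: pixels are joined horizontally unless a supra-threshold vertical boundary intervenes, joined vertically unless a horizontal boundary intervenes, and each connected region is set to the mean of w − b. The union-find bookkeeping and the function names are ours.

import numpy as np

def fill_in(w_minus_b, V, H, T):
    """Average w - b over each region left connected by the thresholded boundaries."""
    rows, cols = w_minus_b.shape
    vb = (V - T) > 0.0                      # Eq. (10): supra-threshold vertical boundaries
    hb = (H - T) > 0.0                      # Eq. (11): supra-threshold horizontal boundaries

    parent = np.arange(rows * cols)         # union-find bookkeeping
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols and not (vb[r, c] or vb[r, c + 1]):
                union(i, i + 1)             # horizontal spread is not blocked
            if r + 1 < rows and not (hb[r, c] or hb[r + 1, c]):
                union(i, i + cols)          # vertical spread is not blocked

    labels = np.array([find(i) for i in range(rows * cols)]).reshape(rows, cols)
    S = np.zeros_like(w_minus_b)
    for lab in np.unique(labels):
        region = labels == lab
        S[region] = w_minus_b[region].mean()
    return S

def afterimage_strength(w_minus_b, V, H):
    """Mean |S| over the image plane, averaged across the threshold sweep T = 2.0..6.0."""
    thresholds = np.arange(2.0, 6.0 + 1e-9, 0.4)
    return float(np.mean([np.abs(fill_in(w_minus_b, V, H, T)).mean() for T in thresholds]))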

References

Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.

Burbeck, C. A. (1986). Negative afterimages and photopic luminance adaptation in human vision. Journal of the Optical Society of America A, 3, 1159–1165.

Burbeck, C. A., & Kelly, D. H. (1984). Role of local adaptation in the fading of stabilized images. Journal of the Optical Society of America A, 1, 216–220.

Francis, G. (1997). Cortical dynamics of lateral inhibition: Metacontrast masking. Psychological Review, 104, 572–594.

Francis, G., Grossberg, S., & Mingolla, E. (1994). Cortical dynamics of feature binding and reset: Control of visual persistence. Vision Research, 34, 1089–1104.

Francis, G., & Rothmayer, M. (2003). Interactions of afterimages for orientation and color: Experimental data and model simulations. Perception & Psychophysics, 65, 508–522.

Grossberg, S. (1972). A neural theory of punishment and avoidance: II. Quantitative theory. Mathematical Biosciences, 15, 253–285.

Grossberg, S. (1994). 3-D vision and figure-ground separation by visual cortex. Perception & Psychophysics, 55, 48–120.

Grossberg, S. (1997). Cortical dynamics of three-dimensional figure-ground perception of two-dimensional figures. Psychological Review, 104, 618–658.

Grossberg, S., & Hong, S. (2006). A neural model of surface perception: Lightness, anchoring, and filling-in. Spatial Vision, 19, 263–321.

Grossberg, S., & Howe, P. D. L. (2003). A laminar cortical model of stereopsis and three-dimensional surface perception. Vision Research, 43, 801–829.

Grossberg, S., & Mingolla, E. (1985a). Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading. Psychological Review, 92, 173–211.

Grossberg, S., & Mingolla, E. (1985b). Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Perception & Psychophysics, 38, 141–171.

Grossberg, S., Mingolla, E., & Ross, W. D. (1997). Visual brain and visual perception: How does the cortex do perceptual grouping? Trends in Neurosciences, 20, 106–111.

Grossberg, S., & Yazdanbakhsh, A. (2005). Laminar cortical dynamics of 3D surface perception: Stratification, transparency, and neon color spreading. Vision Research, 45, 1725–1743.

Loomis, J. M. (1978). Complementary afterimages and the unequal adapting effects of steady and flickering light. Journal of the Optical Society of America, 68, 411–416.

Lou, L. (2001). Effects of voluntary attention on structured afterimages. Perception, 30, 1439–1448.

Lou, L., & Chen, J. (2005). Effects of voluntary attention on negative afterimages during and after stimulus adaptation. Perception, 34, 179.

McAdams, C. J., & Reid, C. (2005). Attention modulates the responses of simple cells in monkey primary visual cortex. Journal of Neuroscience, 25(47), 11023–11033.

Motter, B. (1993). Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. Journal of Neurophysiology, 70(3), 909–919.

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.

Raizada, R., & Grossberg, S. (2001). Context-sensitive bindings by the laminar circuits of V1 and V2: A unified model of perceptual grouping, attention, and orientation contrast. Visual Cognition, 8, 431–466.

Raizada, R., & Grossberg, S. (2003). Towards a theory of the laminar architecture of cerebral cortex: Computational clues from the visual system. Cerebral Cortex, 13, 100–113.

Shimojo, S., Kamitani, Y., & Nishida, S. (2001). Afterimage of perceptually filled-in surface. Science, 293, 1677–1680.

Suzuki, S., & Grabowecky, M. (2003). Attention during adaptation weakens negative afterimages. Journal of Experimental Psychology: Human Perception and Performance, 29, 793–807.

Tsuchiya, N. (2006). Attention and awareness: Visual psychophysics and aversive conditioning in humans. Ph.D. thesis, California Institute of Technology, Pasadena, California.

Vidyasagar, T. R. (1998). Gating of neuronal responses in macaque primary visual cortex by an attentional spotlight. NeuroReport, 9, 1947–1952.

Vidyasagar, T. R., Buzas, P., Kisvarday, Z. F., & Eysel, U. T. (1999). Release from inhibition reveals the visual past. Nature, 399, 422–423.

Virsu, V., & Laurinen, P. (1977). Long-lasting afterimages caused by neural adaptation. Vision Research, 17, 853–860.

Wede, J., & Francis, G. (2006). The time course of visual afterimages: Data and theory. Perception, 35, 1155–1170.