Shifts of Focus on Various Aspects of User Information Problems During Interactive Information Retrieval

David Robins
Louisiana State University, School of Library & Information Science, 267 Coates Hall, Baton Rouge, LA 70803. E-mail: [email protected]

The author presents the results of additional analyses of shifts of focus in IR interaction. Results indicate that users and search intermediaries work toward search goals in a nonlinear fashion. Twenty interactions between 20 different users and one of four different search intermediaries were examined. Analysis of discourse between the two parties during interactive information retrieval (IR) shows that changes in topic occur, on average, every seven utterances. These twenty interactions included some 9,858 utterances and 1,439 foci. Utterances are defined as any uninterrupted sound, statement, gesture, etc., made by a participant in the discourse dyad. These utterances are segmented by the researcher according to their intentional focus, i.e., the topic on which the conversation between the user and search intermediary focuses until the focus changes (i.e., shifts of focus). In all but two of the 20 interactions, the search intermediary initiated a majority of shifts of focus. Six focus categories were observed. These were foci dealing with: documents; evaluation of search results; search strategies; IR system; topic of the search; and information about the user.

Introduction

This study has a twofold purpose. First, it seeks to increase understanding of information problems as they are revealed in interactions with an information system. Second, it seeks to provide data on a distinct aspect of such interactions, namely, how users and human search intermediaries change (i.e., shift) their focus of conversation during interactions. Specifically, this study seeks to investigate: (a) how interaction between users and search intermediaries reveals aspects of user information problems; and (b) how users and search intermediaries focus on aspects of user information problems during the course of on-line searches.

Research Problem

This research was undertaken to further describe processes in information retrieval (IR) that involve human beings. It is generally agreed that, while some progress in understanding human interaction with information retrieval systems has been made, more research is needed (Saracevic, 1997b). Saracevic (1997a) notes the importance of interaction studies within the field of information retrieval with reference to user modeling (as derived from interaction studies) as:

(i) an interactive process that (ii) proceeds in a dynamic way at different levels trying (iii) to capture user's cognitive, situational, affective and possibly other elements (variables) that bear upon effectiveness of retrieval, (iv) with an influence of intermediary interface capabilities, and (v) with an interplay with "computer" levels. (p. 321)

The problems associated with interactive information retrieval are multifaceted and complex. They involve research fronts that may include communications, cognitive science, sociology, library science, information science, and others. The research presented in this article focuses on human interaction during the process of iterative attempts to retrieve information. The facets of interactive information retrieval studied here are: (1) information problems and their components, and (2) shifts of focus among information problem components during mediated IR interaction.

The first part of the research problem seeks a better understanding of the nature of user information problems. In other words, one of the problems associated with understanding interactive information retrieval specifically, and user behavior generally, is the problem of motivation. We need to better understand what people expect from a given interaction with an information retrieval system. If we can find patterns among user information problems (e.g., whether timeliness of sources is more important than other factors, or whether topicality is more important than how easily full text can be retrieved), then we know more about what motivates a user to select certain documents over others. Certainly, such knowledge might be meaningful when system designers are developing ranking algorithms for retrieval output.

© 2000 John Wiley & Sons, Inc.

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE. 51(10):913–928, 2000

The second item above contains the term dynamic, and the dynamism of interaction with IR systems is the primary focus of this study. Only a few studies (notably, Belkin, Brooks & Daniels, 1987; Brooks, 1986) have begun to look at the dynamic nature of interaction with IR systems, and fewer still have related it to user information problems. In other words, as users work through an interaction with an IR system, they generally do so using queries and reformulations of queries in an iterative fashion. There are many possible reasons for query reformulations. It may be that the list of retrieved documents was unsatisfactory with respect to topic. It may be that the scope of the documents was unsatisfactory. Perhaps the user is approaching the IR system without a firm understanding of his/her information problem, and cannot make judgments regarding the usefulness of the retrieved sets of documents. The problem, then, becomes, "How do we know why someone is reformulating queries?" or "What aspects of user information problems are people discussing as they work through an IR interaction?" This study investigates the problem of what people discuss during IR interaction in order to develop categories of topics of discussion, and to describe how participants (users and search intermediaries) shift focus among those categories. The knowledge provided by this aspect of the study should provide information for IR system design, in particular, for end-user expert systems aimed at stepping academic researchers through a research problem. The two items enumerated above are discussed in the following sections.

Importance of the Problem

An understanding of the ways users, search intermediaries, and information retrieval systems treat user information problems is critical to IR research. First, the structure of naturally occurring discourse provides clues to the cognitive activities of participants during interactive IR. As users and search intermediaries carry on a dialog during a search, they often tell us what they are thinking and what goals they hope to achieve in the search or at various stages during the search. They reveal information about what they consider important in a search. They share strategic information with each other relating to reasons behind certain actions. Such information can be used to construct knowledge bases for intelligent intermediaries. Systems for various applications have been developed that employ Bayesian probability to solve problems in specific domains. Similar systems could be developed to deliver information to users based on complex profiles of their activities. In such a scenario, information would be sent "upstream" without the need for active user queries.

In addition, this study represents basic research into phenomena related to interactive IR. It addresses the concepts of information problem and focus shift. Research of this nature assists our understanding of these basic conceptual issues.

Finally, this study begs the question of whether mediated information searching should be studied at a time when end-user searching is quite common in environments such as the World Wide Web. There are at least two reasons to study mediated searching. First is the reason already mentioned: that dialog between a user and a search intermediary reveals information about procedural and cognitive processes associated with information searches. This type of information is not as readily available when researchers observe a single end-user interacting with an IR system. Of course, there are advantages and disadvantages to research in either case, but the natural dialog that occurs in mediated searching is a definite advantage to understanding. In other words, modeling processes requires an understanding of the processes to be modeled. The second reason for studying mediated IR is that there remain, and in all likelihood will always remain, mediated searches. Not all end-users feel confident using modern IR systems, and reference librarians still assist users. Furthermore, we might argue that all searching is mediated by the assumptions of IR system designers, and by other constraints of cognitive and social origin that affect the outcome of searches. Human search intermediaries represent one type of mediator, one that takes an active and verbal role in searches, thus providing unique information about the search process.

In the following section, I present a discussion of research related to the present study, after which I describe the structure of this study and present the results and discussion before concluding.

Related Literature

Information retrieval is a dynamic, interactive process. It is reasonable to assume that users, interacting with search intermediaries, will focus on different dimensions of information problems during the course of any given interaction. Specifically, their interaction will focus on various aspects of the information problem at hand at various points in the interaction timeline. In short, participants' interaction may be characterized by shifting among dimensions of information problems. Belkin and others (e.g., Belkin et al., 1987) took up this problem in the 1980s, and this study seeks to extend such efforts, and to extend prior work by the author (Robins, 1997, 1998a, 1998b).

This study seeks to increase understanding of: (1) user information problems as a multifaceted phenomenon; (2) IR interaction; and (3) shifts of focus among facets of user information problems by participants in IR interaction. In this section, I provide background on these three points to act as a foundation for the research presented in later sections.

Information Problems and their Components

Very little is known or agreed upon regarding information problems and how they are expressed in interactive IR situations. Furthermore, the concept of information problem is not well defined with respect to its meaning (i.e., "What is an information problem?") and scope (i.e., "Should a notion of information problem be limited to the topical elements of searching, or should nontopical notions be included?").

User information problem has a very broad interpretation in this study. Rather than limit the definition to topical manifestations only, this study seeks to analyze all factors related to the user's motivations, constraints, present knowledge of the domain and topic, or uncertainties related to any aspect of the problem at hand. Researchers use different terms to describe users' reasons for approaching information systems, for example, "information need" (Ingwersen, 1996; Taylor, 1968), "information problem," or simply "problem" (Belkin, Seeger & Wersig, 1983; Wersig & Windel, 1985), an awareness of some sort of "gap" in one's life-path (Dervin, 1983a), an "anomalous state of knowledge" (Belkin, 1980; Belkin, Oddy & Brooks, 1982), "dynamic cognitive states" (Harter, 1992), and "goals" (Hert, 1996). This broader interpretation of information problem is also accounted for by Ingwersen (1992, 1996), Belkin and Vickery (1985), and in other studies regarding criteria for relevance judgments (Barry, 1994; Boyce, 1982; Schamber, Eisenberg & Nilan, 1991).

The main difficulty with a broader view of information problem is the complexity of factors influencing IR interaction. Conceivably, any life event experienced by the user since birth has an influence on his/her present situation. Therefore, a researcher must draw some boundaries around a user's sphere of influence. It is difficult to draw appropriate boundaries a priori. Each situation has unique properties that, when considered within a context, may determine appropriate boundaries. Certainly, it is difficult to categorize all potential user behaviors in advance. Inductive methods allow one to observe patterns of behavior that describe such topical and nontopical categories.

An example may help to clarify the problems associated with a holistic view of information problem. Reference librarians who interact with as many as 10 to 20 (or more) clients per hour make quick assessments of information need. If a client asks where she may find information about cultural norms related to doing business in Japan, the librarian might probe further to learn the context in which the client seeks such information. The librarian might use different search strategies depending on the social context of the search. For instance, if the client is to be transferred to Japan by her company, one type of strategy may be used. If, on the other hand, she will be presenting something on the topic at a local club meeting, another approach may be taken. One hypothesis might be that the former social environment of the user demands a more exhaustive search than the latter. In short, social context influences information seeking behavior by users and search intermediaries alike.

Social contexts are not the only nontopical influence on searching. Information systems and texts are two others. Information systems are the means by which users gain access to texts. Text, in this case, is defined as any artifact of intellectual activity (books, papers, photographs, paintings, data, Web pages, etc.), sought or unknown, that has the potential of being accessed by an information system. Information systems may be indexes or abstracts (in paper, CD-ROM, on-line, etc.), search engines, listservs, databases, etc. Both of these elements of information problem represent major parts of the milieu in which users seek information.

In this article, the term "information problem" is used so that a more complete view of the user may be included. "Information need" is too restrictive for the purposes of this study because it implies that there is specific information for which a user has a need, and for which s/he is seeking. Using the term "information need" would then restrict any given situation to one in which the information sought is known to exist within a well-defined context (Belkin & Vickery, 1985). By "information problem," a broader range of factors affecting search strategy, question negotiation, query formulation, relevance judgments, social factors, project environment, information use and seeking behavior, etc., is implied. This conception of information problem is similar to Borlund and Ingwersen's (1997) notion of "problem situation" and "work task" in problem solving activities.

However, information need is used in this study as a subset of the larger information problem. That is, users, or even search intermediaries, may be able to express concretely what types of information they wish to retrieve from an IR system. Certainly, this type of expression is one of "information need." All of the factors behind such an expression are still there exerting influence, but at that particular moment, an expression of information need takes place.

There is considerable conceptual overlap in some of the terminology used in information science in general, and with regard to representation in particular. Broadly defined, representation is any mental or physical token or description of mental or physical objects in the world. Another term that appears in the literature (e.g., Allen, 1991) with regard to cognitive structures is modeling. A model is any method or form of representing a problem or concept so that it may be more easily understood. Examples that can be found in the information science literature include mental models, user models, concept models, and process models (Allen). Mental models and user models are cognitively based, and are theorized tools used by people, expert systems, and artificial intelligence to reduce a complex problem to manageable size and minimal cognitive load. A discussion of mental models and user models is included in the next section. However, models may be thought of as specific manifestations of representation. In any case, for a researcher to sense a user's representation of an information problem, s/he must do so via inference from user statements regarding an information problem.

Users may state their problems by various means. In on-line searching, for example, a user may be asked to state his or her information need on a form. The amount of detail users are asked to provide on presearch forms varies. In some cases, users may be asked to provide details about the project on which they are working (e.g., dissertation, technical report, etc.). If the information provided is extensive, a search intermediary may be provided with a rich set of context cues about the user. In other situations, such as searching via World Wide Web search engines, the only statement of a user's information problem is provided by the query terms entered. In such a case, very little context about a user can be established. In other words, more information about a user's information problem, all other things being equal, gives an IR system (or search intermediary) a more complete representation of the user's information problem.

However, problem representation is more complex. A user's conception of his/her information problem may be more or less concrete. A user's ability to communicate his/her information problem directly to another will depend, largely, on how well formed the user's conception of that problem is (Belkin, 1980; Ingwersen, 1996). (Other factors, such as poor communication skills on the part of the user or search intermediary, may impede communication and understanding as well.) In other words, the quality of any information problem statement by a user will depend on the user's level of problem awareness. However, if a particular information problem is complex, explicit statements that fully describe it may be difficult or impossible. With such complex information problems, it may be necessary to enter into a more involved dialog with an IR system and/or a search intermediary. That is, the parties involved must learn more about each other to make the search more effective. The search intermediary must learn more about the user's problem. Likewise, the user must learn more about the IR system's capabilities and limitations. In short, complex problems may require movement away from a research program that assumes linear, rational processes.

Whatever we consider a valid construct of information problems, we might agree that information problems provide the basis for our interaction with information, texts, and information retrieval systems. That is, if we are aware of something that we would like to know, we might engage something or someone to find it. The importance of information problems to interactive IR is that information problems provide a guide by which we proceed in a search, and they provide a means by which we measure the success of our searches.

Interactive Information Retrieval

The purposes of studying interactive IR are, among other things: (1) to understand the ways people iteratively search IR systems; (2) to model user and search intermediary behaviors; (3) to use models of information behaviors to design automated intermediary devices; (4) to better train professional information intermediaries; and (5) to gain a fundamental understanding of IR processes.

Toward these ends, research on interactive IR processeshas proceeded for approximately 2 decades.

Belkin (1984; Belkin et al., 1987) studied a real (i.e., naturalistic) reference interview prior to on-line searches to build a model of interactive IR that could be automated. He and his colleagues derived the following categories of behaviors, or functions, exhibited by search intermediaries attempting to elicit information from users. These functions were intended to: (1) reveal information about a user's problem state; (2) determine how/where to obtain documents; (3) generate a general model of the user; (4) generate a description of the user's information problem; (5) determine how the IR system should carry on a dialog with the user; (6) develop search strategy; (7) develop responses to the user's query; (8) explain system features to the user; (9) analyze input to translate the user's request into equivalents usable by the system; and (10) appropriately convert system responses to a usable format for the user. Although Belkin did not study interaction after the point at which the actual search occurred, this work is a pioneering effort to understand and model intermediary functions.

Saracevic and Kantor (1988) studied the cognitive properties of information seeking as revealed in a naturalistic setting. They describe five variables in interactive IR: users, questions, searchers, searches, and retrieved items. They correlated (a) standardized tests of users' and searchers' cognitive styles/abilities with (b) search results in order to (c) predict the outcome of searches given certain baseline conditions. In other words, they wanted to know more about how people of various styles and abilities went about searching for and evaluating information.

Fidel (1985) described on-line searching in terms of "moves," thus recognizing its highly dynamic nature. She categorized these moves as operational and conceptual. Her study began with the assumption that most of the activity during on-line searching is centered around manipulating retrieved sets. These sets, once retrieved, are the building blocks for subsequent query reformulations, or moves. Operational moves result from the intent to modify retrieved sets without changing the meaning represented by the sets, for example, by reducing or enlarging the sets as needed. On the other hand, conceptual moves result from the attempt to modify retrieved sets by changing the language used to generate sets. This may be accomplished by introducing a different descriptor to a set, or by qualifying descriptors using role indicators, for example.

Other lines of research in interactive IR have focused on specific variables. For example, Spink, Goodrum, and Robins (1998) investigated the role of search intermediary elicitations (i.e., questions seeking responses from users for the purpose of "eliciting" information necessary to continue a search) in on-line searches. They found that intermediaries' elicitations focus mainly on search terms and search strategy, but very little on prior searches or domain knowledge of users. In other words, search intermediaries were mainly interested in progressing with the "nuts and bolts" of particular searches, but not as interested in user modeling. Wu (1993) pioneered research on elicitations in on-line searching by investigating user elicitations of search intermediaries. She found that in the presearch phase of interaction, over half of user elicitations concerned search terminology. Overall, however, most user elicitations occurred during the on-line phase of searches. Twenty-eight percent of user elicitations during this phase concerned search terminology, and 21% were questions regarding search strategy. The other half of all elicitations during the on-line phase concerned topics such as databases, other information services, social (off search topic) issues, etc. Finally, the OKAPI studies have produced a large volume of research on interfaces, interactive feedback, and other aspects of interactive IR as well (Beaulieu, 1997; Robertson, 1997).

Current Models of IR Interaction

Traditional models of information retrieval interaction describe only minimally the dynamic nature of this phenomenon. In this section, four models that attempt to describe further the ways in which IR interaction is dynamic are explored. These four models are: (i) Saracevic's (1997a) stratified model of interactive IR; (ii) Belkin's (1996) episodic model of IR interaction; (iii) Spink's (1997) interactive feedback and search process model; and (iv) Ingwersen's (1996) global model of polyrepresentation.

Each of these models presents an alternative view to the traditional model of information retrieval. The traditional model of IR is based on the notion that IR takes place on two separate tracks, that is, system and user. Essentially, the traditional model of IR holds that IR systems are comprised of texts that are represented and organized to facilitate retrieval. On the other side of the model, users approach IR systems with some information problem/need that they represent in the form of a research question, which must be further reduced to a query. These two tracks meet at the point where queries and organized files are compared. The results of the comparison (i.e., system output) are presented to the user (i.e., feedback) in the form of IR system output.

The inadequacies of this model have been exposed by IR interaction research. First, the traditional model does not account for the complexities of interaction: (i) among humans (e.g., users and search intermediaries); (ii) between humans and IR systems, including the iterative nature of such interaction; and (iii) with respect to feedback, as shown by Spink (1993, 1997). Similarly, the model does not account for complexities exhibited by indexers and organization schemes, as shown by past research (Leonard, 1975). To address these inadequacies, a small but growing number of researchers are beginning to directly address the problems of interaction in IR. The four models mentioned above are elucidated in the following sections.

The stratified model of IR interaction (Saracevic, 1997a) includes the two-track aspect of the traditional model, shown by two arrows indicating adaptation. The difference is that this model accounts for multiple dimensions of user involvement in IR processes. That is, Saracevic's model accounts for user environment and situation, in addition to user knowledge, goals, intent, beliefs, and tasks. This accounting for the broader environment is similar to, and perhaps based on, Ingwersen's (1992, 1996) notion of the work-task/interest domain. The model improves on the traditional model by showing the complexity of a user's environment. One potential weakness of the model lies in its lack of description of temporal effects. Saracevic notes that "during IR interaction, as it progresses, these deeper level cognitive and situational aspects in interaction can and often do change—problem or question is redefined, refocused, and the like" (p. 7). Notwithstanding, no mention of the effects of time and iteration is included in his model. One of the aims of this study is to observe shifts of focus by participants in IR interactions. This phenomenon of shifting foci occurs, necessarily, as a function of time. The following two models do bring the dimension of time into their treatment of interactive IR.

Belkin's notions of IR interaction are based on his commitment to his anomalous states of knowledge (ASK) hypothesis (Belkin, 1980; Belkin et al., 1982), discussed in a previous section. ASK was also the basis for MONSTRAT (Belkin et al., 1987). However, the episodic model represents a significant advance over MONSTRAT. Two major improvements over MONSTRAT are present in the episodic model. First, the model shows how many of the same events in IR interaction repeat themselves. In this way, the cyclic, temporal nature of IR interaction is displayed by the repeating frames. Second, the episodic model reorganizes the nine MONSTRAT functions into two groups. What Belkin's model lacks, however, is a treatment of the social/environmental facets of user information problems. Users' tasks and goals are mentioned, but there is no mention of the setting from which these tasks and goals are causally derived. Nevertheless, the episodic model represents a stride forward in providing a research framework for interactive IR.

Of the current models of IR interaction, Spink's (1997) interactive feedback and search process model offers the most comprehensive coverage of the complex, cyclical nature of IR interaction. Spink (1993, 1997) has studied the nature of feedback in information retrieval, and thus is concerned with iteration and periodicity in IR interaction. Her research on the concept of feedback has brought cybernetics and systems theory into the realm of information science. Spink's (1997) feedback model accounts for time as a factor in IR interaction, as well as the cycles that occur during searches. The ongoing element at the top of the model is search process and strategies, hence, time. These processes involve any number of cycles. Cycles are defined as processes completed between each search command; that is, the time and processes between a query (terms typed/combined and entered) and the next query reformulation (again, terms typed/combined and entered). During each cycle, any number of interactive feedback loops may occur. These feedback loops consist of discussions between user and search intermediary regarding either: (i) content relevance, (ii) term relevance, (iii) magnitude relevance, (iv) tactical review, or (v) term review. Therefore, an interactive feedback loop would be delineated when one of the participants gives feedback to the other regarding one of the five topics above, after which some judgment or action is taken. Spink's (1997) feedback model has the strength of suggesting the cyclical nature of IR interaction. A weakness of the model is its lack of accounting for cognitive changes, or processes. We see that tactics, moves, and judgments are included, but there is no means of connecting those processes to changes in the search, such as alternative tactics adopted as a result of a feedback loop.

Finally, Ingwersen's (1996) work synthesizes many of the models reviewed in this section so far. He attempts to model IR processes from a global perspective. A global perspective holds that all of the factors that influence and interact with a user, search intermediary, IR system, texts, etc., should be considered in IR research. The design variables put forth by Ingwersen show the wide-ranging influence of factors such as social environment, IR system, information objects, intermediary, and user. Ingwersen incorporates these design variables into the notion of polyrepresentation, which may be summarized as follows: (1) redundancy is inherent in information retrieval processes; (2) this redundancy may take the form of, for example, (a) identical documents retrieved from different search engines or databases, or (b) identical documents retrieved from different searches at different points in time; (3) cognitive overlap is the term for identical items retrieved in the above scenarios; and (4) this redundancy presents an opportunity to increase retrieval effectiveness. In other words, documents retrieved by multiple searches have a higher probability of usefulness to a user.

The present study is an attempt to better understand the components of user cognitive space in Ingwersen's model. User cognitive space is divided into four components: Information Need; Problem Space; Current Cognitive State; and Work-Task/Interest Domain. Information need is characterized by the ability of a user to state specifically what s/he would like to retrieve from an information system during a particular search. Problem space is defined in terms of a user's uncertainty with regard to his/her search; it may be thought of as the gap between what a user knows (current cognitive state) and his/her ability to express an information need (Belkin, 1980). Current cognitive state is defined as what a user knows (or at least thinks s/he knows) at a given point in time, and is characterized by certainty of such knowledge. Finally, the work-task/interest domain is the set of environmental and social constraints under which a user seeks information. These constraints are momentarily static in nature, according to Ingwersen (1996). Examples are projects such as dissertations or term papers that require background research, and social environments such as graduate school or a business setting.

Ingwersen (1992, 1996) has presented a reasonably complete synthesis of research and thought regarding IR interaction. The main problem with Ingwersen's (1996) approach is how to get input from user cognitive space into the request model builder. The differences among the four components of user cognitive space are subtle. For example, if a user expresses an information need from the standpoint of current knowledge, it is unclear whether the user is stating an information need or a current cognitive state. Therefore, constructing the proper request model builder may be difficult, if possible at all. Nevertheless, Ingwersen has presented a model that has "plausible validity." His models are based on solid ground, both conceptually and with empirical evidence. However, the empirical evidence on which his hypotheses are based represents a synthesis of many different studies, only one of which was done by Ingwersen. This is not necessarily a negative, but it is a caveat when studying Ingwersen.

Another View of the Dynamic Nature of Interactive IR: Focus Shifts

Belkin (1984; Belkin et al., 1987) and Brooks (1986) were the first to use discourse analysis to investigate focus shifts in interactive IR. In fact, Brooks describes a method by which certain words in dialog cued a shift of focus. Examples of these words are "ok," "well," or "all right, then." Focus shifts were not a major component of Brooks' dissertation, but she did break discourse into units indicated by focus shifts.

Xie (1997, 1998, in press) looked at the nature of shifts in interactive IR from the perspective of search tactics and methods. Her study sought to describe the extent to which users engage in "planned" or "situated" search activities. That is, she assumed that users had goals entering a search, but that the search itself was composed both of planned strategic activities and of strategies that had to be improvised during the search in order for users to achieve search goals. The latter, situated activities, occur because of problematic situations in the search environment. She found, however, that user behavior during a search was motivated by subgoals, i.e., intentions, and that for each sort of intention, a user employs a specific set of search strategies. Users shifted among different types of search strategy according to the following categories: planned, opportunistic, assisted, and alternative. These shifts are the result of factors such as search outcome, the user's immediate and long-term goals, the user's domain knowledge, IR system knowledge, knowledge of information seeking, and environmental/situational factors.

Robins' (1997, 1998a, 1998b) conception of shifts is from the perspective of the information problem, not specifically strategy in an information seeking environment. His research focuses on how discourse in interactive IR demonstrates how users/search intermediaries are "treating" the problem. This notion of information problem finds some similarity in Xie's (1997, 1998) "goals." However, information problem is a broader concept, including not only goals and objectives for information seeking, but also aspects of the user's social environment and anything else that might shape or impact the user's searching behavior.

The discussion so far has been about research that requires an understanding of discourse structure, because we are assuming that information may be gleaned from the discourse among participants in interactive IR. Grosz and Sidner (1986) present a comprehensive theory of discourse structure. They show that segmentation occurs naturally in any discourse. Such segments are bounded by "cue phrases" that indicate a change in a conversation's focus. These cue phrases vary from the subtle to the obvious, and therefore discourse must be studied carefully to perceive such cues. Grosz and Sidner argue for the existence of three components of discourse structure: (1) linguistic structure, (2) intentional structure, and (3) attentional state. Linguistic structure may be thought of as the naturally occurring segmentation of discourse (shifts of focus). Linguistic structure has syntactic qualities, as opposed to intentional structure, which concerns the meaning/purpose of each discourse segment. The attentional state is a device for displaying the properties and objects of discourse segments, and their relationships to other segments.

For the purposes of this study, focus shifts are any change in the focus of the interaction between a user and a search intermediary with respect to the user's information problem. Change of focus is denoted by a change in some topical aspect of the conversation (by shifting to a different aspect of the topic, or by broadening or narrowing the topic, etc.), or by some shift to a nontopical aspect of the information problem (again, by narrowing or changing). In other words, a shift may occur between or within topical or nontopical discussions, or shifts may occur laterally or horizontally within a topical or nontopical discussion. Brown and Yule (1983) characterize "topic-shifts" (p. 94) in the following way.

. . . between two contiguous pieces of discourse which are intuitively considered to have two different 'topics,' there should be a point at which the shift from one topic to the next is marked. If we can characterize this marking of topic-shift, then we shall have found a structural basis for dividing up stretches of discourse into a series of smaller units, each on a separate topic. (pp. 94-95)

This study attempts to find the points at which these shifts occur. In the next section of this report, the questions guiding the research are stated. The following section describes how the research will attempt to answer the questions.

Research Questions

The following questions guided the research in this study: (1) What is a focus shift in interactive IR, and how often do focus shifts occur? (2) What is the intensity of focus on any given aspect of the user information problem (UIP)? (3) Which participants in a mediated search dyad initiate focus shifts? (4) On what facets of user information problems, and for how long, do participants focus?

In the following section, the research design for this study is outlined and results are presented, followed by a discussion of the findings in light of this review of related literature.

Research Design

The aim of the research in this article is to describe the nature of interaction shifts between search intermediaries and users. The following research design is used to illuminate such shifts.

Data Corpus

Much interaction with information retrieval systems is now conducted by solitary end-users. However, there are advantages in observing intermediaries and users as they interact with an IR system. The main advantage is the fact that they speak to one another about the process in which they are engaged. Their conversation gives a researcher access to the participants' thoughts, and the opportunity to directly observe the behaviors of people interacting with information systems.

Saracevic and Su (1989) collected the data during a previous study. The study was funded by a grant from the Library Research and Demonstration Program, United States Department of Education (ref. no. R039A80026), with additional funding by DIALOG, and was entitled Nature and Improvement of Librarian–User Interaction and Online Searching for Information Delivery in Libraries. The data consist of transcribed discourse (originally videotaped) between real users and professional search intermediaries during authentic information retrieval interactions. All of the users were either graduate students or faculty in pursuit of some particular research goal. In summary, 40 searches were taped, amounting to over 46 hours of video. Each search averaged nearly 70 minutes (13 minutes presearch; 56 minutes on-line). Four search intermediaries participated in the study, each averaging over 8 years of search experience. The searches covered a wide variety of topics (46 different databases were used). In addition, a survey was administered to users and search intermediaries both prior to and after each search. Some of the results of these surveys are alluded to later in this report.
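(A quick arithmetic check shows these figures are internally consistent: 13 presearch minutes plus 56 on-line minutes is 69 minutes per search, and 40 searches at roughly 69 minutes each is about 2,760 minutes, or 46 hours of video.)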

This data corpus has been used in a number of studies since it was collected. At least three dissertations were based on these data (Robins, 1998b; Spink, 1993; Wu, 1993). It has provided the opportunity for researchers to study, and report, the nature of IR interaction in conference proceedings and journals (Saracevic, Mokros, & Su, 1990; Saracevic, Mokros, Su, & Spink, 1991; Spink & Saracevic, 1998; Spink et al., 1998). The fact that these data have generated so much research is a testament to their uniqueness, their richness, and their pertinence to interactive information retrieval studies. That is, there are no other data sets of this magnitude that give evidence of real interactive IR situations. These data have provided the basis for research on elicitations, focus shifts in interactive IR, feedback in IR, user modeling (Saracevic, Spink & Wu, 1997), and analyses of social roles in information discourse (Mokros, Mullins, & Saracevic, 1995). Furthermore, the research based on these data indicates the importance of studying the role of mediation in interactive IR. Regardless of whether there is a move toward "disintermediation" in information transfer, and regardless of whether end-users initiate the majority of searches, these data provide a "natural think-aloud" situation. Users and search intermediaries, in collaboration, discuss strategies, terminology, and anything related to their searches, all of which would not be discussed aloud if we were observing an end-user. In addition, observations of mediated interactive IR help researchers to model IR processes (Saracevic et al., 1997). This is not to deny the value of think-aloud protocols for the study of end-users, but the observation of mediated interactive IR provides a unique set of data.

Twenty of the 40 transcripts are analyzed in this study. Both user and search intermediary utterances are included in the analysis. The transcripts are drafted as utterances. That is, they are written in the form of:

Speaker A: speaks until interrupted . . .
Speaker B: interruption
Speaker A: . . . continued statement after interruption

The above interaction consists of three utterances.
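To make the unit of analysis concrete, the following minimal sketch (not part of the original study's procedure; the class and field names are illustrative) represents a transcript as a list of utterances tagged with speaker and focus segment, from which utterance counts per focus can be tallied:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # "A"/"B" in the schematic example above; "U"/"I" in the transcripts
    text: str
    focus_id: int  # index of the intentional focus this utterance belongs to

# The three-utterance exchange above, assuming both speakers stay on one focus.
transcript = [
    Utterance("A", "speaks until interrupted ...", focus_id=1),
    Utterance("B", "interruption", focus_id=1),
    Utterance("A", "... continued statement after interruption", focus_id=1),
]

utterances_per_focus = Counter(u.focus_id for u in transcript)
print(utterances_per_focus)  # Counter({1: 3})
```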

Methodology

Twenty of the above-mentioned interactions were analyzed in detail for this preliminary study. The goals prescribed by the research questions require that the methodology do the following: (1) identify shifts within each dialog, (2) classify each shift according to its type and function, and (3) quantify the number of shifts and utterances in each shift. Each of these facets of the methodology is discussed in the following sections.

Identification of Shifts

Interaction shifts are defined as any change in the focus of the conversation between the user and search intermediary with respect to the user's information problem. Change of focus is denoted by a change in some topical aspect of the conversation (by shifting to a different aspect of the topic, or by broadening or narrowing the topic, etc.), or by some shift to a nontopical aspect of the information problem (again, by narrowing or changing). In other words, a shift may occur between or within topical or nontopical discussions, or shifts may occur laterally or horizontally within a topical or nontopical discussion.

Belkin, Brooks, and Daniels (1987) identified "focus shifts" (p. 85) in their analysis of the user modeling functions of intermediaries in presearch interviews. They found that intermediaries initiated most focus shifts, a notion consistent with other discourse analysis literature, which suggests that shifts in dialog are initiated by participants with higher status (Grosz, 1981). Belkin et al. used, in part, certain dialog cues to identify the points at which shifts took place. These cues have been referred to as "frame words" (Sinclair & Coulthard, 1975). For example, such cues might be utterances which contain frame words such as "well," "now," "right," "ok," or "good." Such words in an utterance (particularly at its beginning) may indicate that the speaker has begun to think about changing the focus of the discussion.
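As an illustration only (frame-word cues were an aid to human segmentation, not an automated procedure in this study; the function below is a hypothetical sketch), a first pass over a transcript might flag utterances that open with a frame word as candidate shift points for manual review:

```python
import re

# Frame words treated as cues; the list is illustrative, not exhaustive.
FRAME_WORDS = {"well", "now", "right", "ok", "good"}

def candidate_shift_points(utterances):
    """Return indices of utterances whose opening word is a frame word."""
    candidates = []
    for i, text in enumerate(utterances):
        words = re.findall(r"[a-z]+", text.lower())
        # Treat "O.K." (extracted as ["o", "k"]) the same as "ok".
        first = "ok" if words[:2] == ["o", "k"] else (words[0] if words else "")
        if first in FRAME_WORDS:
            candidates.append(i)
    return candidates

dialog = [
    "Doesn't it have author's name in there?",
    "O.K., that will be when we actually get to the right combination ...",
    "Well, you are the one that has to tell me which items are good.",
]
print(candidate_shift_points(dialog))  # [1, 2]
```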

Classification of Shifts

Coding Scheme

The next step toward describing the focus of shifts in IR interaction is to develop a coding scheme that characterizes such foci. Because this study's objective is to understand dimensions of user information problems, it is necessary to derive the shift-level coding from the utterance-level coding described above. The reason for the derivation is that both coding schemes seek to describe the same phenomenon on two different units of analysis (utterances and shifts). After identifying and analyzing each discourse segment, 10 categories of focus were derived for the analysis of focus shifts. Table 1 presents the coding scheme for the foci analysis of transcripts.


TABLE 1. Shift level coding scheme.

Code      Description
DOC       Focus on documents to be retrieved as a result of the search; factors such as availability, cost, and format of such documents
EVAL      Focus on judgments regarding the relevance, magnitude, etc., of system output
I         Indiscernible passage
SNSR      Discussion of social issues NOT related to the search
ST        Discussion related to the experiment itself (e.g., videotaping)
STRAT     Concerned with the strategies, term selection, etc., leading to query formulation or reformulation
SYS       Focus on explanations, preparations, or problems with the IR system itself
TECH      Discussion of technical issues related to the equipment (computers, etc.) associated with the search (ranging from technical errors such as typographic errors, to printer paper jams)
TOPIC     Focus on the specific subject area and parameters (e.g., experiments on humans, not apes) guiding the search
USER      Focus on the user's background, including work, education, and other experience; the participant's stage/progress in the project at hand; the user's or search intermediary's social life that forms a context for the present search; the participant's prior searching on the topic at hand; domain/literature knowledge; impetus for the search


Quantification of Shifts and Utterances Within Shifts

To show a clearer picture of the nature of shifts in IR interaction, it is necessary to provide an account of the number of shifts that occur in each interaction, and in aggregate. Accordingly, shifts are tabulated by presearch phase, on-line phase, and total, for both types and functions of shifts.
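As a minimal sketch of this tabulation (the tuples below are invented for illustration and are not data from the study), each identified focus could be recorded with its phase, code, and initiator and then tallied:

```python
from collections import Counter

# Each identified focus segment, assumed to be recorded as (phase, code, initiator).
foci = [
    ("presearch", "TOPIC", "intermediary"),
    ("presearch", "USER", "user"),
    ("online", "STRAT", "intermediary"),
    ("online", "EVAL", "intermediary"),
    ("online", "STRAT", "user"),
]

by_phase = Counter(phase for phase, _, _ in foci)
by_code = Counter(code for _, code, _ in foci)
print(by_phase)  # Counter({'online': 3, 'presearch': 2})
print(by_code)   # Counter({'STRAT': 2, 'TOPIC': 1, 'USER': 1, 'EVAL': 1})
```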

Limitations

As mentioned in the previous section, the type of research represented by this study has advantages and disadvantages. One limitation of this study is that it was not possible to interview users and search intermediaries in order to triangulate what was found in the transcripts. The data were collected in 1989 in New Brunswick, NJ, and it is not possible to locate those involved. Although there were survey data to augment the transcripts, it would have been helpful and informative to obtain in-depth responses pertaining to certain issues. In particular, it would have helped to establish the degree to which information problems changed during each interaction. If the dynamics of information problems (i.e., changes in information problems) could have been studied, the scope of this project could have been increased.

Results

The results of this study are compiled to address the research questions stated earlier. This section, therefore, is organized around the four research questions that guided the study. The purposes of the research questions are summarized as follows: (1) identify and count the frequency of focus shifts; (2) classify the facets of user information problems among which participants shifted their focus; (3) measure the intensity of focus on any given aspect of the UIP; and (4) identify the participants in a mediated search dyad who initiated focus shifts.

Identification and Frequency of Focus Shifts

Identification of focus shifts required the researcher to establish: (a) that interaction participants had changed the intentional focus of their discourse; and (b) at what points during the interaction the focus began and ended (i.e., a starting and ending point for the focus). The passage in Table 2 exemplifies discourse segmentation as operationalized in this study. A horizontal line indicates the point of shift from one focus to another.

Each of the segments in Table 2 is separated because one of the participants changed the focus of the discourse. For example, the transition from Focus 7 to Focus 8 is a case in which the search intermediary shifts from talking about a specific search term to reformulating a query by extracting new terms from the results.

A total of 1,439 such shifts of focus were identified (presearch = 355, on-line = 1,084) (see Table 3). In all, roughly one-fourth of the shifts occurred in the presearch phase; three-fourths occurred in the on-line phase. Within cases, presearch shifts ranged from 8.33 to 40.00% of within-case totals (on-line shifts ranged from 60.00 to 91.67%).
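(As a check on the aggregate proportions: 355/1,439 ≈ 24.7% of shifts occurred in the presearch phase and 1,084/1,439 ≈ 75.3% in the on-line phase, consistent with the one-fourth/three-fourths split noted above.)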

As Table 3 shows, the majority of time in each interaction is spent after the participants go on-line. Therefore, it is not surprising to find in Table 3 that in all cases there are more focus shifts in the on-line phase of the interaction than in the presearch phase. This fact indicates: (a) that participants spend more time interacting with IR systems and texts than they do in presearch modeling; and (b) that the introduction of IR systems and texts increases the complexity and problematic nature of information seeking tasks, thus supporting Xie's (1998) notion that information seeking is, at least in part, situated.

After focus shifts had been identified and counted, the next step was to classify them according to their relationship with each user's information problem.

Classification of Focus Shifts

This section deals with the results of applying the coding scheme to the discourse segments (i.e., shifts of focus). For each category introduced in Table 1, observed occurrences and percentages are compiled. Table 4 shows that participants focused on search strategy and evaluation issues in over 60% of all foci. The remaining 40% of foci were spread across issues related to, most notably, the IR system itself, the overall topic of the search, user background issues, and document formats and availability. It is of interest to note that TOPIC and USER foci constituted only about 15% of all foci. These two categories might be indicators that participants, especially search intermediaries, are modeling users' information problems. However, with such a small representation, it may be said that either: (1) information problems are not being modeled extensively, or (2) modeling occurs in ways that cannot be determined through discourse. What is actually happening in these complex processes is probably a combination of these two possibilities.

Now that we have a picture of the types of focus shifts that take place in interactive IR, let us look at some of the dynamics of what happens within and among foci. In particular, it is of interest to see how long participants spend on any given focus, and how often a shift of focus occurs.

Intensity of Focus Shifts

To compare shifts of focus between cases, one must determine the rate at which shifts of focus occurred in all interactions, and then test to see whether there is significant variance among the cases. Rate is calculated as the number of utterances spent by the participants during each focus. Utterances per focus were averaged within each case and interaction phase (see Table 5). Overall, the number of utterances per focus in the presearch phase of interaction (6.67) was insignificantly lower than in the on-line phase (7.12) and the total (7.01). The presearch mean across cases was 6.99 utterances per focus, with a standard deviation of 2.43. On-line and total means were 7.28 and 7.17, respectively, with standard deviations of 1.98 and 1.90.
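A minimal sketch of the rate computation, assuming each case is recorded as a list of per-focus utterance counts (the numbers below are invented; note that the corpus-level totals of 9,858 utterances and 1,439 foci imply a pooled rate of 9,858/1,439, or about 6.9 utterances per focus):

```python
from statistics import mean, stdev

# Invented per-focus utterance counts for three hypothetical cases.
cases = {
    "case01": [5, 9, 7, 6],
    "case02": [8, 4, 10],
    "case03": [6, 7, 7, 8, 5],
}

# Within-case rate: average utterances per focus for each interaction.
case_rates = {case: mean(counts) for case, counts in cases.items()}

# Pooled rate: total utterances divided by total foci across all cases.
pooled_rate = sum(sum(c) for c in cases.values()) / sum(len(c) for c in cases.values())

print({c: round(r, 2) for c, r in case_rates.items()})  # {'case01': 6.75, 'case02': 7.33, 'case03': 6.6}
print(round(pooled_rate, 2))                            # 6.83
print(round(mean(case_rates.values()), 2),
      round(stdev(case_rates.values()), 2))             # 6.89 0.39
```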


These findings show that participants in interactive IR shift focus among dimensions of user information problems in rapid succession. Rarely do they spend any great length of time on any given topic, as evidenced by the relatively low standard deviations from the mean utterances per focus.

Initiators of Focus Shifts

Another pertinent characteristic of focus shifts is the initiator of shifts. That is, one might ask, "Which of the participants in the interactions were most responsible for initiating changes in focus?" The answer to this question may give clues concerning the active or passive nature of each participant. Tables 6 (expressed as frequency) and 7 (expressed as percentage) show that, in all but two interactions, search intermediaries initiated shifts of focus more often than users did. In fact, over the course of the 20 interactions studied here, search intermediaries initiated two-thirds of all focus shifts, and this ratio held constant over the presearch and on-line phases.

In the two cases in which user initiation of shifts of focus outnumbered those by the search intermediary (cases #6 and #12), most of the difference occurred during the on-line phase of the search. In case #6, the search intermediary initiated seven focus shifts to the user's four during the presearch phase.

TABLE 2. Example of discourse segmented into intentional foci.

Shift #  Speaker  Utterance

1   I:  You will see. You will see. (Printing) This is going to showing you the titles of the articles and what are the index terms they are put under (Printing . . .) and as soon as it stops we will go back. (Printing, Intermediary scrolled back the screen . . .) O.K. This is the first one, uh . . . this is the title, current status of fall armyworm host strains . . . and then these are the indexes, the index terms and key words and these are the different concept codes they put under . . . put in. Uh . . . can you tell by the title is this something that's relevant for you . . .
    U:  Uh . . . not from that particular title . . .
2   I:  for you? O.K. Let's look at the next one.
3   U:  Doesn't it have author's name in there?
    I:  O.K., that will be when we actually get to the right combination we will be printing out with the author, references and the abstracts . . . oh, this is just a test format. Just to see if we are in the right area.
    U:  Um . . .
4   I:  (Scrolling back the screen) How about this one?
    U:  It's hard to say (smiling).
    I:  O.K. Can you tell by the . . . See, these are different . . . oh here, they have something called phytopathology-parasitism and resistance . . . Will that . . . that's not? . . . that's not . . .
    U:  No, not necessarily what I am looking for.
5   I:  O.K. (keying in on computer) Oh, this one is pesticide resistance . . . Well, you are the one that has to tell me which items are good, which are not, you know, relevant, because that's how I can adjust it.
    U:  I am not . . . (laughing) I don't know.
6   I:  Do you . . . do you have a known paper? I mean, do you have a specific paper . . . that on that's like what you want to get? because I can call that up and see how it's indexed. (keying in . . .)
    U:  O.K. I am not sure. You can do an author search on . . .
    I:  O.K.
    U:  Brattsten, that should be at least one paper that's is in that area. I just don't remember, because that's more than 10 years ago.
7   I:  O.K. Let me try. What's the name of the author?
    U:  Brattsten.
    I:  How do you spell his name? B . . .
    U:  R A T T S T E N.
    I:  Do you have his first initial?
    U:  BRATT . . .
    I:  Oh, BRATTSTIN.
    U:  TEN.
    I:  ST? . . .
    U:  EN.
    I:  O.K. Do you have the first initial?
    U:  L.
8   I:  O.K. . . . O.K. Let me try that. (keying in . . . computer printing) 15 items, let me see if there is any . . . None of his papers are in that . . . this larger set, the 118. You said, you thought that he had something that has to do with cyanide? Let's try no. 5 . . . (typing, and computer printing . . .) O.K. Let's look at these two . . . O.K. This one . . . see the index terms are under cyanide, glycoside, cyanogenic glycoside . . . linamarin . . . These are the things that it's indexed under.
    U:  All right.
    I:  Concepts such as invertebrate, insect physiology . . . uh . . . animal ecology, general biochemistry, biochemical studies, proteins, peptides and amino acids . . .


In the on-line phase, however, the user initiated 39 focus shifts compared to only six by the search intermediary. In case #12, the differences were not profound. Initiation by the user and the search intermediary was nearly even in the presearch phase (23 and 24, respectively), and the user initiated only slightly more during the on-line phase (40 to the search intermediary's 34). Because these two cases appear to be aberrations, they are not pursued in this study. However, this type of interaction may be an area for future research.

Discussion

The research to this point has laid a foundation to explore shifts of focus among dimensions of user information problems in IR interaction, and any patterns that may emerge from that study. Few, if any, patterns of behavior were found in this study through traditional methods such as loglinear analysis. Certain patterns, such as the absence of evaluative foci in the presearch phase, were not unexpected and are therefore not discussed. However, the following four points (among many possible points) are suggested by the research in this project: (1) focus shifts occur rapidly, (2) participants concentrate on strategy and evaluation, (3) interactive IR appears to be chaotic, and (4) there is only moderate evidence of changes in information problem conception. Each of these points is discussed in the following sections.

Focus Shifts Occur Rapidly

As shown in Table 5, participants shifted focus, on average, every 7.01 utterances (every 6.67 utterances during the presearch phase and every 7.12 during the on-line phase). The difference between the presearch and on-line phases is not statistically significant; therefore, no patterns can be attributed to search phase.
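
The article does not state which significance test was applied; one plausible check, sketched below under that assumption, is a paired t-test on the per-case utterances-per-focus rates from Table 5 (only the first five cases are shown for brevity).

```python
# Hedged sketch: paired t-test of presearch vs. on-line utterances-per-focus
# rates across cases (the exact test used in the study is not specified).
from scipy import stats

pre = [6.45, 7.22, 10.67, 5.00, 6.64]      # presearch U/F for cases 2-6 (Table 5)
online = [4.73, 6.68, 8.75, 10.38, 5.18]   # on-line U/F for the same cases

t_stat, p_value = stats.ttest_rel(pre, online)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a large p suggests no reliable phase difference
```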

However, the fact that participants shift topics of discussion so rapidly is worth noting for two reasons. First, this finding is consistent with sensemaking notions of behavior and cognition (Dervin, in press; Weick, 1995). In other words, in complex, novel activities such as interactive, mediated information retrieval, people depend on environmental cues and stimuli to construct notions of information problems and strategies to work through those problems.

TABLE 3. Frequency and percentage of shifts for each interaction.

                  Frequency                     Percentage within case
Case         Pre   On-line   Total         Pre      On-line      Total
2             20        30      50       40.00%      60.00%    100.00%
3              9        99     108        8.33%      91.67%    100.00%
4             18        28      46       39.13%      60.87%    100.00%
5             17        29      46       36.96%      63.04%    100.00%
6             11        45      56       19.64%      80.36%    100.00%
7              6        15      21       28.57%      71.43%    100.00%
8             23        53      76       30.26%      69.74%    100.00%
9             12        55      67       17.91%      82.09%    100.00%
10            15        39      54       27.78%      72.22%    100.00%
11            16        55      71       22.54%      77.46%    100.00%
12            47        74     121       38.84%      61.16%    100.00%
14             9        23      32       28.13%      71.88%    100.00%
15            35        81     116       30.17%      69.83%    100.00%
16            14        79      93       15.05%      84.95%    100.00%
17            15        37      52       28.85%      71.15%    100.00%
18            17        69      86       19.77%      80.23%    100.00%
19            12        70      82       14.63%      85.37%    100.00%
20            26       102     128       20.31%      79.69%    100.00%
21            22        74      96       22.92%      77.08%    100.00%
22            11        27      38       28.95%      71.05%    100.00%
Total        355      1084    1439       24.67%      75.33%    100.00%
Averages   17.75     54.20   71.95

TABLE 4. Frequency and percentage of codes related to foci-level analysis.

               Frequency                      Percentage
Code      Pre   On-line   Total         Pre     On-line     Total
STRAT     188       449     637       13.06%     31.20%    44.27%
EVAL        0       227     227        0.00%     15.77%    15.77%
SYS        32       109     141        2.22%      7.57%     9.80%
TOPIC      67        44     111        4.66%      3.06%     7.71%
USER       41        54      95        2.85%      3.75%     6.60%
DOC         1        72      73        0.07%      5.00%     5.07%
SNSR       10        49      59        0.69%      3.41%     4.10%
TECH        1        35      36        0.07%      2.43%     2.50%
ST         11        25      36        0.76%      1.74%     2.50%
I           4        20      24        0.28%      1.39%     1.67%
Total     355      1084    1439       24.67%     75.33%   100.00%


Therefore, it is possible that participants in IR interaction do not rely exclusively on problem representations as causal agents in rational decision making; they may instead rely on active construction of such representations. If this is true, then these representations must be neither a priori nor ad hoc, because both would lack the dynamic character suggested by rapid shifts among problem dimensions (especially when there appears to be nearly as much problem space as current cognitive state in cognitive space). That is, the former condition suggests a static starting point for representation, and the latter suggests that representation is the culmination of activity. Therefore, because participants do not dwell for long periods on any one topic, and because there is no clear pattern of transition from one topic to the next, the findings suggest a chaotic approach to information problems by participants. At the same time, however, the fact that a majority of utterances were indications of participants' current cognitive state and problem space also suggests that participants were moving between uncertainty and certainty with respect to information problems. These findings strongly suggest that participants were in a continuous state of constructing knowledge states based on new information gained through focus on various aspects of user information problems.

Another reason for noting the rapid nature of focus shifts is the implications of this phenomenon for IR system design. Automated intermediary design requires that some type of knowledge base be present to assist end-users with a search. In addition, or as an alternative means of intelligence, some way of accumulating knowledge from end-users (e.g., neural nets) must be present. These two approaches address intelligent IR from the top down and the bottom up, respectively. Systems may also be developed that take advantage of both methods (e.g., Mizoguchi, Tijerino, & Ikeda, 1995). In any case, however, modeling users from verbal, interactive behavior is problematic.

TABLE 5. Utterances per focus by case and search phase (i.e., presearch, on-line, total).

             Presearch                On-line                  Total
Case      U     F    U/F         U      F    U/F         U      F    U/F
2       129    20   6.45       142     30   4.73       271     50   5.42
3        65     9   7.22       661     99   6.68       726    108   6.72
4       192    18  10.67       245     28   8.75       437     46   9.50
5        85    17   5.00       301     29  10.38       386     46   8.39
6        73    11   6.64       233     45   5.18       306     56   5.46
7        32     6   5.33       135     15   9.00       167     21   7.95
8       169    23   7.35       492     53   9.28       661     76   8.70
9       136    12  11.33       508     55   9.24       644     67   9.61
10      194    15  12.93       352     39   9.03       546     54  10.11
11      155    16   9.69       558     55  10.15       713     71  10.04
12      181    47   3.85       434     74   5.86       615    121   5.08
14       40     9   4.44       141     23   6.13       181     32   5.66
15      184    35   5.26       483     81   5.96       667    116   5.75
16      102    14   7.29       409     79   5.18       511     93   5.49
17       93    15   6.20       249     37   6.73       342     52   6.58
18      138    17   8.12       534     69   7.74       672     86   7.81
19       80    12   6.67       681     70   9.73       761     82   9.28
20      136    26   5.23       549    102   5.38       685    128   5.35
21      144    22   6.55       511     74   6.91       655     96   6.82
22       40    11   3.64        96     27   3.56       136     38   3.58
Total  2368   355   6.67      7714   1084   7.12     10082   1439   7.01

TABLE 6. Frequency of initiation of shifts by either user or search intermediary (SI) within case.

            Presearch          On-line        Total (presearch + on-line)   Total foci
Case        SI    User        SI    User           SI      User              per case
2           15       5        20      10           35        15                    50
3            6       3        60      36           66        39                   105
4           11       7        20       7           31        14                    45
5           16       1        19      10           35        11                    46
6*           7       4         6      39           13        43                    56
7            6       0        12       3           18         3                    21
8           12      11        37      16           49        27                    76
9            5       4        37      18           42        22                    64
10          12       3        36       3           48         6                    54
11          12       4        42      13           54        17                    71
12*         24      23        34      40           58        63                   121
14           8       1        21       2           29         3                    32
15          22      13        59      22           81        35                   116
16          12       2        45      34           57        36                    93
17          11       4        29       8           40        12                    52
18          11       6        46      23           57        29                    86
19           9       3        47      23           56        26                    82
20          13      13        70      32           83        45                   128
21          16       6        72      23           88        29                   117
22          11       0        17       8           28         8                    36
Total      239     113       729     370          968       483                  1451

An asterisk (*) indicates cases in which shifts of focus were initiated by the user more often than by the search intermediary.


One of the main problems is that conversation in interactive IR is choppy; that is, participants do not talk about any one topic for very long. If, in order to model users' knowledge, machines require patterns that must be identified by way of heuristics, then (i) heuristics are difficult to generate because of the lack of patterns, and (ii) even assuming heuristics can be developed, machines will find it difficult to match heuristic patterns to real conversation. Therefore, using information problem dimension foci as a means of building knowledge bases would appear to have limited application. However, the fact that a majority of interaction foci concentrated on search strategy and terms means that evaluation of output may be of some use to system designers.

Concentration on Strategy and Evaluation

Together, strategic and evaluative foci constitute 60.04% (44.27% and 15.77%, respectively) of all foci occurrences. This finding indicates that the majority of foci dealt with input to and output from the IR system, that is, the immediately practical aspects of information retrieval. In addition, the notion of the terminological determinant in interactive IR, as suggested by Saracevic, Mokros, and Su (1990), is supported by these findings. Essentially, the terminological determinant holds that the main purpose of any interaction in IR is directed toward generating terms to formulate queries. Strategic foci alone account for 44.27% of all foci (13.06% presearch; 31.20% on-line), the most frequent of all categories. One of the conditions that allows a focus to be coded STRAT is discussion of search terms.

One question raised by the concentration on strategy and evaluation regards user modeling. For example, Belkin (1984) suggests user modeling in IR interaction as a means of developing intelligent intermediaries. Focus categories that might be related to user modeling include TOPIC and USER. TOPIC and USER categories constitute only 14.31% (7.51% presearch; 6.81% on-line) of all foci in the study. The fact that these foci occur almost equally during the presearch and on-line phases of interaction indicates that, if user modeling is occurring, it is occurring as much during the on-line phase as in the presearch phase. Therefore, user modeling, if it occurs, is an ongoing process. This assertion suggests that user modeling could be discussed in terms of sensemaking, because one of the properties of sensemaking, according to Weick (1995), is that it is an ongoing process. In any case, however, the following questions may be asked about user modeling in interactive IR.

(1) To what extent is user modeling a function of the terminological determinant? In other words, a search intermediary may have neither the interest nor even the need to know much about a user to perform an effective search. If the search intermediary can (i) generate terms for IR system input and (ii) receive reasonable output, and (iii) the user is satisfied, then there is no reason to spend resources eliciting detailed information about the user. This notion is consistent with Weick's (1995) assertion that sensemaking requires plausibility rather than accuracy.

(2) To what extent can the quality of user modeling be judged as a function of the amount of visible effort to do so?

TABLE 7. Initiation of shifts by either user or search intermediary (SI) as percentage of total shifts within case.

             Presearch              On-line              Total (presearch + on-line)
Case        SI        User         SI        User         SI        User
2         30.00%     10.00%      40.00%     20.00%      70.00%     30.00%
3          5.71%      2.86%      57.14%     34.29%      62.86%     37.14%
4         24.44%     15.56%      44.44%     15.56%      68.89%     31.11%
5         34.78%      2.17%      41.30%     21.74%      76.09%     23.91%
6*        12.50%      7.14%      10.71%     69.64%      23.21%     76.79%
7         28.57%      0.00%      57.14%     14.29%      85.71%     14.29%
8         15.79%     14.47%      48.68%     21.05%      64.47%     35.53%
9          7.81%      6.25%      57.81%     28.13%      65.63%     34.38%
10        22.22%      5.56%      66.67%      5.56%      88.89%     11.11%
11        16.90%      5.63%      59.15%     18.31%      76.06%     23.94%
12*       19.83%     19.01%      28.10%     33.06%      47.93%     52.07%
14        25.00%      3.13%      65.63%      6.25%      90.63%      9.38%
15        18.97%     11.21%      50.86%     18.97%      69.83%     30.17%
16        12.90%      2.15%      48.39%     36.56%      61.29%     38.71%
17        21.15%      7.69%      55.77%     15.38%      76.92%     23.08%
18        12.79%      6.98%      53.49%     26.74%      66.28%     33.72%
19        10.98%      3.66%      57.32%     28.05%      68.29%     31.71%
20        10.16%     10.16%      54.69%     25.00%      64.84%     35.16%
21        13.68%      5.13%      61.54%     19.66%      75.21%     24.79%
22        30.56%      0.00%      47.22%     22.22%      77.78%     22.22%
Total     16.47%      7.79%      50.24%     25.50%      66.71%     33.29%

An asterisk (*) indicates cases in which shifts of focus were initiated by the user more often than by the search intermediary.


Because only 14.31% of all foci were attributed to categories that suggest user modeling, does this mean that only a small amount of user modeling occurs during IR interaction? Users filled out a question statement before the search, which could account for search intermediary knowledge of the topic and user background that was not discussed during the videotaped interaction. The question statement may have reduced the need for a high quantity of further user modeling. Another possible explanation for the apparently small amount of user modeling is that search intermediaries may not require a great deal of information about a user to adequately model the user. Finally, if we assume nonrational behavior on the part of search intermediaries, then the ideal of user modeling itself becomes moot. That is, under such an assumption, a search intermediary is acting without the time to benefit from representations such as user models. Each new situation presented in IR interaction requires a search intermediary to make a decision without the benefit of reflection (i.e., thrownness). There are times in interactions when participants do reflect (i.e., during periods of breakdown), but for the most part they appear to be acting without a great deal of reflection.

These two questions, however, need to be researched further.

Process without Pattern

That participants shifted rapidly among foci (as mentioned earlier) is one indication that shifting of focus is a chaotic process. Another indication of the chaotic nature of focus shifts is the fact that no patterns could be found in the loglinear analysis of focus shifts. Loglinear analysis seeks to find statistically significant shifts from one state to another, and none were found in this analysis. This finding suggests that IR interaction is a highly idiosyncratic process. Each interaction among user, search intermediary, and IR system presents a different set of complex conditions that brings about different behaviors. It would appear that predicting such behaviors is problematic, at best. Therefore, one question that needs to be addressed by future research regards the efficacy of intelligent intermediaries in situations in which search intermediaries must make subtle judgments. That is, given the chaotic, rapidly shifting nature of human-mediated IR interaction, to what degree of subtlety can intelligent intermediaries be expected to interpret user information problems?
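
As a simplified stand-in for this kind of transition analysis (it is not the loglinear procedure actually used, and the counts below are invented), one could tally shifts from one focus category to the next and test whether the destination category depends on the origin:

```python
# Illustrative stand-in: chi-square test of independence on a transition-count
# table among focus categories. Counts are invented for demonstration only.
import numpy as np
from scipy.stats import chi2_contingency

categories = ["STRAT", "EVAL", "SYS", "TOPIC"]
# rows = category shifted from, columns = category shifted to
transitions = np.array([
    [0, 40, 22, 10],
    [35, 0, 12, 8],
    [20, 15, 0, 5],
    [12, 9, 4, 0],
])

chi2, p, dof, expected = chi2_contingency(transitions)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3f}")  # a large p = no detectable pattern
```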

Moderate Evidence of Change in User Information Problem

One question that may be generated from this study deals with changes in user information problems that occur during IR interaction. The survey data show that users reported, on average, only moderate changes when they responded to the following questions regarding: (a) their perception of their information problem (2.3 on a scale of 5); (b) changes in their original search question (2.4 on a scale of 5); and (c) changes in their personal knowledge (2.56 on a scale of 5).

In addition, the inductive portion of the focus shift analysis (i.e., coding scheme development) uncovered no evidence of foci related to changes in user information problems. That is to say, there was not enough evidence to warrant a category; only very isolated instances were found. This finding is probably due to the stability and high definition of user information problems in this study. Users in this study rated their problem definition relatively high (an average of 3.95 on a scale of 5). Although, on average, users reported that they were in the early stages of their research (2.45 on a scale of 5), most were beginning research on a dissertation. This means that they had finished course work and, therefore, had some domain expertise. In addition, most had well-formed question statements such as the one found in Appendix A. Therefore, most seemed to have a good idea of (i) the nature of the literature to be found in a search, and (ii) the quantity of literature to be found on the topic. This finding supports, once again, Ingwersen's notion that stable, well-defined information problems result in searches with limited uncertainty.

However, there were exceptions to the norm. Seven users responded with higher average rankings to the three questions mentioned above. To be included in this group, a user must have responded 3 or higher (on a scale of 5) on two of the three questions. When those seven users' responses were isolated and averaged, the following rankings were found for the questions regarding: (a) their perception of their information problem (4.0 on a scale of 5); (b) changes in their original search question (3.4 on a scale of 5); and (c) changes in their personal knowledge (3.1 on a scale of 5).

When this group of seven was compared to the other 13 users studied, a t-test comparison of means showed a significant difference between groups on the first two questions (p < 0.001 and p < 0.008, respectively). An identical comparison on the third question revealed no significant difference. This may account, in part, for the level of uncertainty found in earlier phases of this study.
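
A minimal sketch of this group comparison, assuming an independent-samples t-test and using invented 5-point ratings rather than the study's data, might look like the following:

```python
# Sketch of the group comparison: independent-samples t-test between the seven
# "higher change" users and the remaining thirteen on one survey question.
# The ratings below are invented placeholders.
from scipy import stats

higher_change = [4, 4, 5, 4, 3, 4, 4]               # n = 7
others = [2, 2, 3, 2, 1, 2, 3, 2, 2, 3, 2, 2, 3]    # n = 13

t_stat, p_value = stats.ttest_ind(higher_change, others)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```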

The significance of this finding for focus shifts, however, is that only negligible evidence of focus on changes in problem conception was found by studying foci. Perhaps it would be more fruitful to study changes in information problems during the course of IR interaction in situations in which such problems are not stable or well formed. For example, Kuhlthau's (1991) subjects were much younger, and perhaps less skilled at formulating research questions. Such a population may increase the odds of finding changes in information problems. Then, if researchers can see how such changes occur profoundly, they may be able to identify changes when they occur subtly.


Conclusion: Implications and Recommendations for Research

This study found that users and search intermediaries focused roughly 60% of their foci on discussing search strategy and evaluating search output. This finding supports similar studies (Spink et al., 1998; Xie, 1998). In addition, it was discovered that the pace of shifting among foci was very fast, and that there was little evidence of changes in users' conceptions of their information problems during the retrieval process. Finally, no distinct pattern of information-seeking behavior emerged from this analysis. This study, however, offers at least two variables on which IR system designers may concentrate their efforts:

(1) A system might provide a means by which end-users may interact with an electronic "strategy assistant" with a knowledge base, or a learning mechanism that can infer appropriate strategy at any given point in an interaction. These systems, ideally, would be customizable to individuals. This variable is related to the fact that searchers spent a vast majority of their efforts on search strategy.

(2) An IR system might, similarly, provide users with customizable, learning modules related to the evaluation of documents. That is, present methods of ranking IR system output are based on static algorithms that rank output according to word placement, word frequency, or some combination of these and other variables. An IR system might, instead, provide users with the opportunity to describe why some documents are chosen as relevant, and not simply whether they were relevant, thereby aiding machine learning (a rough, hypothetical sketch of this idea follows).
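
One way to picture the second recommendation (an illustrative sketch only, not a design proposed in this article) is a simple term-frequency ranker combined with a Rocchio-style update that folds the user's relevance judgments back into the query; the function names, weights, and document strings below are hypothetical.

```python
# Illustrative sketch: rank documents by term-frequency overlap with a weighted
# query, then apply a Rocchio-style update using documents the user marks
# relevant. All data and parameter values are invented.
from collections import Counter

def score(query_weights, document):
    term_freq = Counter(document.split())
    return sum(weight * term_freq[term] for term, weight in query_weights.items())

def rocchio_update(query_weights, relevant_docs, alpha=1.0, beta=0.75):
    updated = {term: alpha * w for term, w in query_weights.items()}
    if not relevant_docs:
        return updated
    centroid = Counter()
    for doc in relevant_docs:
        centroid.update(doc.split())
    for term, count in centroid.items():
        updated[term] = updated.get(term, 0.0) + beta * count / len(relevant_docs)
    return updated

docs = ["fall armyworm host strains resistance", "pesticide resistance in insects"]
query = {"resistance": 1.0, "armyworm": 1.0}
ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
query = rocchio_update(query, relevant_docs=[ranked[0]])  # user marks the top hit relevant
```

The point of the sketch is simply that the user's stated judgments, rather than static term statistics alone, would reshape subsequent ranking.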

Both of these system variables will be aided by knowledge of IR interaction dynamics such as that shown in this article. For example, if we know that users shift among strategic elements very quickly, we know that whatever feedback mechanism we provide must be one that allows quick input from the user. It seems reasonable to assume that searchers think more quickly than they can act, and therefore, eliciting feedback from a searcher should not interfere with his/her thought processes.

Furthermore, this research suggests that information-seeking behavior may be more iterative and improvisational than previously thought. This notion is supported by the fact that participants in the information-seeking process shifted their focus of conversation very quickly, and in no recognizable pattern. Therefore, models of interactive IR should take into account the complexity of users' search patterns. Most of the models mentioned in the related literature section above do take such complexity into account, but not from the perspective of focus shifts. An analysis of focus shifts shows that individual tasks, or goals, can be broken down into very small facets, each represented by a focal point during discourse. Although each of these focal points may support overall goals with respect to a user's information problem, they also represent individual differences and idiosyncrasies of searchers. That is to say, a broader definition of the user information problem, as conceived in this article, may account for what appears to be a lack of pattern. Stated another way, these focal points illustrate the extent to which individuals deal with problematic situations in interactive IR in different ways. Therefore, an overall model of interactive IR may not adequately explain the process. Individuals are too distinct, as shown by the ways in which they "randomly" shift focus among facets of their information problems (perhaps similar to a berrypicking model of interaction) (Bates, 1989). One possible avenue for research, which would bypass such a problem, would be domain analysis. The advantage of such a research program would be that behaviors within specific domains might tend to reduce the effects of individual differences in interactive IR.

The four research questions regarding shifts of focus raised above represent a continuing effort to understand the dynamic nature of interactive IR. More research needs to be done to find patterns in interaction. The ways in which dimensions of information problems are represented in interactive IR need more study because of the complex nature of such interaction. Although it is possible to observe representations of information problem dimensions, it is more difficult to observe patterns. It is possible that patterns do not exist, but it is also possible that they do; it may be necessary to look outside interaction processes to find them. Identification of behavioral patterns is necessary if our findings are to be incorporated into IR systems.

More research is needed on the identification and understanding of user modeling in the interactive IR process. The present study is only an extension of prior work in a field still in its infancy. The findings here suggest that some modeling does take place, but it is uncertain how much and what kind. Similarly, more research is needed to identify instances in which users' conceptions of their information problems change during the course of IR interactions. All of these efforts will greatly enhance our knowledge of the search process and allow designers the opportunity to incorporate human factors into IR system design.

References

Allen, B.L. (1991). Cognitive research in information science: Implications for design. Annual Review of Information Science and Technology, 26, 3–37.

Barry, C.L. (1994). User-defined relevance criteria: An exploratory study. Journal of the American Society for Information Science, 45(3), 149–159.

Bates, M.J. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review, 13(5), 407–424.

Beaulieu, M. (1997). Experiments on interfaces to support query expansion. Journal of Documentation, 53(1), 8–19.

Belkin, N.J. (1980). Anomalous states of knowledge as a basis for information retrieval. Canadian Journal of Information Science, 5, 133–143.

Belkin, N.J. (1984). Cognitive models and information transfer. Social Science Information Studies, 4, 111–129.

Belkin, N.J. (1996). Intelligent information retrieval: Whose intelligence? In ISI '96: Proceedings of the Fifth International Symposium for Information Science (pp. 25–31). Konstanz: Universtaetsverlag Konstanz.

Belkin, N.J., Brooks, H.M., & Daniels, P.J. (1987). Knowledge elicitation using discourse analysis. International Journal of Man-Machine Studies, 27, 127–144.


Belkin, N.J., Oddy, R.N., & Brooks, H.M. (1982). ASK for information retrieval: Part I. Background and theory. Journal of Documentation, 38(2), 61–71.

Belkin, N.J., Seeger, T., & Wersig, G. (1983). Distributed expert problem treatment as a model for information system analysis and design. Journal of Information Science, 5, 153–167.

Belkin, N.J., & Vickery, A. (1985). Information in information retrieval systems: A review of research from document retrieval to knowledge-based systems (Library and Information Research Report 35). London: British Library.

Borlund, P., & Ingwersen, P. (1997). The development of a method for the evaluation of interactive information retrieval systems. Journal of Documentation, 53(3), 225–250.

Boyce, B. (1982). Beyond topicality: A two-stage view of relevance and the retrieval process. Information Processing & Management, 18, 105–109.

Brooks, H.M. (1986). An intelligent interface for document retrieval systems: Developing the problem description and retrieval strategy components. Unpublished doctoral dissertation, City University, London, United Kingdom.

Brown, G., & Yule, G. (1983). Discourse analysis. Cambridge, UK: Cambridge University Press.

Dervin, B. (1983, May). An overview of sense-making research: Concepts, methods, and results to date. International Communication Association annual meeting, Dallas.

Dervin, B. (in press). Chaos, order, and sense-making: A proposed theory for information design. In R. Jacobson (Ed.), Information design. Cambridge, MA: MIT Press.

Fidel, R. (1985). Moves in online searching. Online Review, 9(1), 61–74.

Glaser, B.G., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

Grosz, B.J. (1981). Focusing and description in natural language dialogs. In A. Joshi, B. Webber, & I. Sag (Eds.), Elements of discourse understanding (pp. 229–346). Cambridge, UK: Cambridge University Press.

Grosz, B.J., & Sidner, C.L. (1986). Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3), 175–203.

Harter, S.P. (1992). Psychological relevance and information science. Journal of the American Society for Information Science, 43, 602–615.

Hert, C.A. (1996). User goals on an online public access catalog. Journal of the American Society for Information Science, 47(7), 504–518.

Ingwersen, P. (1992). Information retrieval interaction. London: Taylor Graham.

Ingwersen, P. (1996). Cognitive perspectives of information retrieval. Journal of Documentation, 52(1), 3–50.

Kuhlthau, C.C. (1991). Inside the search process: Information seeking from the user's perspective. Journal of the American Society for Information Science, 42(5), 361–371.

Leonard, L.E. (1975). Inter-indexer consistency and retrieval effectiveness: Measurement of relationships. Unpublished doctoral dissertation, University of Illinois, Champaign–Urbana.

Mizoguchi, R., Tijerino, Y., & Ikeda, M. (1995). Task analysis interview based on task ontology. Expert Systems with Applications, 9(1), 15–25.

Mokros, H., Mullins, L.S., & Saracevic, T. (1995). Practice and personhood in professional interaction: Social identities and information needs. Library and Information Science Research, 17, 237–257.

Robertson, S.E. (1997). Overview of the OKAPI projects. Journal of Documentation, 53(1), 3–7.

Robins, D. (1997). Shifts of focus in information retrieval interaction. ASIS '97: Proceedings of the 60th ASIS Annual Conference, 34, 123–134.

Robins, D. (1998a). Dynamics and dimensions of user information problems as foci of interaction in information retrieval. ASIS '98: Proceedings of the 61st ASIS Annual Conference, 35, 327–341.

Robins, D. (1998b). Shifts of focus among dimensions of user information problems as represented during interactive information retrieval. Unpublished doctoral dissertation, University of North Texas.

Saracevic, T. (1997a). The stratified model of information retrieval interaction: Extension and application. Proceedings of the 60th Annual Meeting of the American Society for Information Science, 34, 313–327.

Saracevic, T. (1997b). Users lost: Reflections on the past, future, and limits of information science. SIGIR Forum, 31(2), 16–27.

Saracevic, T., & Kantor, P. (1988). A study of information seeking and retrieving. III. Searchers, searches, and overlap. Journal of the American Society for Information Science, 39(3), 197–216.

Saracevic, T., Mokros, H., & Su, L. (1990). Nature of interaction between users and intermediaries in online searching: A qualitative analysis. Proceedings of the 53rd Annual Meeting of the American Society for Information Science, 27, 47–54.

Saracevic, T., Mokros, H., Su, L., & Spink, A. (1991). Interaction between users and intermediaries in online searching. Proceedings of the 12th National Online Meeting (pp. 329–341).

Saracevic, T., Spink, A., & Wu, M.M. (1997). Users and intermediaries in information retrieval: What are they talking about? In A. Jameson, C. Paris, & C. Tasso (Eds.), User modeling: Proceedings of the Sixth International Conference (UM97) (pp. 43–54). New York: Springer Wien.

Saracevic, T., & Su, L. (1989). Modeling and measuring user–intermediary–computer interaction in online searching: Design of a study. Proceedings of the 52nd Annual Meeting of the American Society for Information Science, 26, 75–80.

Schamber, L., Eisenberg, M.B., & Nilan, M.S. (1990). A re-examination of relevance: Toward a dynamic, situational definition. Information Processing & Management, 26, 755–776.

Sinclair, J.M., & Coulthard, R.M. (1975). Toward an analysis of discourse: The English used by teachers and pupils. Oxford, UK: Oxford University Press.

Spink, A. (1993). Feedback in information retrieval. Unpublished doctoral dissertation, Rutgers University.

Spink, A. (1997). Study of interactive feedback during mediated information retrieval. Journal of the American Society for Information Science, 48(5), 382–394.

Spink, A., Goodrum, A., & Robins, D. (1998). Elicitation behavior during mediated information retrieval. Information Processing & Management, 34(2/3), 257–273.

Spink, A., & Saracevic, T. (1998). Human–computer interaction in information retrieval: Nature and manifestations of feedback. Interacting with Computers, 10, 249–267.

Taylor, R.S. (1968). Question-negotiation and information seeking. College and Research Libraries, 29, 178–194.

Weick, K.E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage.

Wersig, G., & Windel, G. (1985). Information science needs a theory of "information actions." Social Science Information Studies, 5, 11–23.

Wu, M.M. (1993). Information interaction dialog: A study of patron elicitation in the information retrieval interaction. Unpublished doctoral dissertation, Rutgers University.

Xie, H. (1997). Planned and situated aspects in interactive IR: Patterns of user interactive intentions and information seeking strategies. Proceedings of the 60th Annual Meeting of the American Society for Information Science, 34, 101–110.

Xie, H. (1998). Planned and situated aspects in interactive IR: Patterns of user interactive intentions and information seeking strategies. Unpublished doctoral dissertation, Rutgers University.

Xie, H. (in press). Shifts of interactive intentions and information seeking strategies in interactive information retrieval. Journal of the American Society for Information Science.
