nitesh-goyal
Welcome!
http://www.cs.cornell.edu/~ngoyal
Designing for Online Collaborative Sensemaking
Knowledge Generation from Complex & Private Data
Introduction — What is sensemaking? (Motivation | Contribution | Hypothesis | Experiment | Results | TakeAway)
Organization Science (Weick, 1995) | Education & Learning Science (Schoenfeld, 1992) | Intelligent Systems (Jacobson, 1991; Savolainen, 1993) | Information Systems (Griffith, 1999) | Communications (Dervin et al., 2003)
Sensemaking is the process of searching for a representation and encoding data in a representation to answer task-specific questions.
— The cost structure of sensemaking. INTERCHI '93. Russell, D.M., Stefik, M.J., Pirolli, P., & Card, S.K.
Sensemakers change representations either to reduce the time taken to perform the task or to improve a cost vs. quality tradeoff.
— The cost structure of sensemaking. INTERCHI '93. Russell, D.M., Stefik, M.J., Pirolli, P., & Card, S.K.
Introduction — Tools for sensemaking
Sensemakers change representations either to reduce the time taken to perform the task or to improve a cost vs. quality tradeoff. Tools do not afford such tradeoffs.
— Effects of Visualization & Note-taking on Sensemaking. Goyal, N., Leshed, G., Fussell, S.R. ACM CHI 2013.
which, in turn, may lead to a project and another name. Discovering, prioritizing, and keeping track of a growing set of entities is one of the difficulties of analysis. Entity Workspace helps by allowing the analyst to extract entities from text rapidly and to assemble these together into higher-level statements. Entities are a natural information unit to use in collaboration, so we have extended our entity-based tools, originally designed to support the individual analyst, to support collaborative situations as well.
As a result of this work, we have developed a set of design guidelines for collaborative analysis tools as follows:
1. Entities. Analysts think of people, places, things, and their relationships. Make it easy to view and manipulate information at the level of entities (people, places, and things) and their relationships rather than just text. Create features that reduce the cost of processing entities both for the writer and for the reader of them.
2. Collapsing Information. Analysts sometimes study a few broad topics and sometimes a single narrow topic. The space available for manipulating information is extremely limited physically, perceptually, and cognitively. Make it possible to collapse information so that it is still visible, but takes up less space. Make it easy to find and refind all mentions of a topic, even in collapsed information. Support both top-down and bottom-up processes.
3. Organizing by Stories. Analysts ultimately “tell stories” in their presentations. Provide a way to organize evidence by events and source documents so that the story behind the evidence can be represented.
4. Quiet Collaborative Entity Recommendations. Too much data on the screen are confusing. “Help” must always be available for the analyst but should only be salient when needed. When notifying analysts of the work of other experts, use subtle visual cues in the analytic tools themselves, so the analysts are not interrupted and see the cues at the time when they are most relevant.
5. Sharing. Reaching a consensus on an analytical task virtually always requires all the analysts involved to be confident that they have read all the supporting reports or documents. Support the sharing of document collections.
Based on these guidelines, we built a set of novel collaboration features into our prototype analytic tools. We tested the first prototype on a group of four intelligence analysts at a three-day formative evaluation at NIST in March 2007. The results of that evaluation were very positive, particularly for support of collaboration. Based on suggestions from that evaluation, we extended our collaboration facilities further. The resulting prototype is now being tested in a longer-term trial by a group of analysts at SAIC. Early results from that evaluation are promising.
In the remainder of the paper, we describe the features of CorpusView and Entity Workspace that support our design guidelines. Then, we describe what we have learned about collaboration tools for analysis from our two evaluations. Then, we discuss our design principles further and speculate on how they may support other forms of collaboration.
2 COLLABORATION TECHNOLOGIES
2.1 Entities in Notebooks
In this section, we describe our extended versions of CorpusView and Entity Workspace, focusing on the five design guidelines for collaboration that we described above.
Analysts think of people, places, things, and their relationships. Our first guideline relates to the granularity at which information is viewed and manipulated. Consider this sentence:
Oluf Zabani is the owner of the Persian Design Carpets shop in Meszi, Ugakostan.
If analyst A saw this sentence in a text-based notebook created by analyst B, the work of understanding and using the information in the sentence would fall almost exclusively on A. Based on capitalization and other cues, A could
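The capitalization cue mentioned here can be illustrated with a toy extractor. This is not Entity Workspace's actual algorithm — just a minimal sketch of how runs of capitalized words in the sample sentence become candidate entities:

```python
import re

def candidate_entities(sentence):
    """Toy extractor: treat runs of consecutive capitalized words as
    candidate entity mentions. A real system would also handle
    sentence-initial capitals, titles, and known-entity lists."""
    return re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", sentence)

sentence = ("Oluf Zabani is the owner of the Persian Design Carpets shop "
            "in Meszi, Ugakostan.")
print(candidate_entities(sentence))
# ['Oluf Zabani', 'Persian Design Carpets', 'Meszi', 'Ugakostan']
```

Even this crude heuristic recovers the person, shop, city, and country, which is why entity-level tooling can lift much of the reading burden off analyst A.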
BIER ET AL.: PRINCIPLES AND TOOLS FOR COLLABORATIVE ENTITY-BASED INTELLIGENCE ANALYSIS 179
Fig. 1. The Think Loop Model of all-source analysis.
Fig. 2. (a) CorpusView and (b) Entity Workspace.
Introduction — What is collaborative sensemaking?
Collaborative sensemaking extends beyond the creation of individual understandings of information to the creation of a shared understanding of information from the interaction between individuals.
Introduction — Unregulated Crowds Fail
Introduction — By Focusing on Local Maxima
Introduction — Leverage Partners' Insights: to find the global maxima
Introduction — Why research collaborative sensemaking?
Federal Agencies knew of the impending attack but did not communicate with each other, failing to connect the dots. (Collaboration Failure)
— 9/11 Investigation Report
Introduction — Sharing is Tricky
Ideally: Border Patrol Agent
Reality: Border Patrol Agent
Introduction — Designing for Implicit Sharing
No Implicit Sharing: Think & Write → Overcome challenges → Share
Implicit Sharing: Think & Write → Share
Introduction — Expectations from Implicit Sharing
H1. Implicit sharing of notes will improve task performance.
H2a. Implicit sharing of notes will be rated as more useful than non-implicit sharing.
H2b. Implicit sharing of notes will result in increased usage of collaborative features compared to non-implicit sharing.
Introduction — Research Questions
RQ1. How will implicit sharing of notes affect participants' cognitive workload?
RQ2. How will the availability of implicit sharing affect the amount of information exchanged via explicit channels?
Introduction — SAVANT Prototype
hypotheses [23]. These annotations are represented as digital Stickies (shaped as a Post-It note, a familiar metaphor preferred by analysts [7]), which as in other analysis tools [19, 23, 32, 40] are linked to the original document, map location, or timeline point where they were created. Stickies can also be created directly in the Analysis Space, unconnected to specific documents. Analysts can move Stickies around, connect Stickies together, or stack them in piles.
The Analysis Space supports collaboration through both explicit and implicit information sharing. For explicit sharing, a chat box at the bottom-left corner allows analysts to discuss their cases, data, and insights and to ask and answer questions. For implicit sharing, the Analysis Space shows other analysts' Stickies in real time as they are created and organized in the space. Stickies are color-coded by the analyst who created them, but anyone can move, connect, or pile anyone's Stickies. Mouse cursors are independent of each other, while dependencies between Stickies are handled by the server on a first-come-first-serve basis. The server updates the interface every second.
We created two versions of SAVANT for this study. In the implicit sharing condition, Stickies in the Analysis Space are automatically shared as described above: there is no private workspace for analysis, only a public one. In the no implicit sharing condition, partners only see their own Stickies in the Analysis Space: there is no public workspace, only private ones for each analyst. The chat box is available in both conditions to support explicit sharing.
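The sharing model just described — ownership-colored Stickies, anyone may reorganize anyone's notes, conflicts resolved in arrival order, visibility gated by condition — can be sketched as a small server-side data model. This is a minimal illustration, not SAVANT's actual implementation; all names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Sticky:
    owner: str                    # creating analyst (determines color)
    text: str
    pos: Tuple[int, int] = (0, 0)

@dataclass
class SharedWorkspace:
    """Server-side state. Operations are applied in the order they
    arrive (first-come-first-serve), mirroring the paper's description."""
    stickies: Dict[int, Sticky] = field(default_factory=dict)
    _next_id: int = 0

    def create(self, owner, text):
        sid = self._next_id
        self._next_id += 1
        self.stickies[sid] = Sticky(owner, text)
        return sid

    def move(self, sid, pos):
        # Anyone may move anyone's Sticky; ownership (color) is unchanged.
        self.stickies[sid].pos = pos

    def snapshot(self, viewer, implicit_sharing=True):
        """What `viewer` sees: everything under implicit sharing,
        only their own Stickies otherwise (private workspaces)."""
        return {sid: s for sid, s in self.stickies.items()
                if implicit_sharing or s.owner == viewer}

ws = SharedWorkspace()
a = ws.create("analyst_A", "Victim last seen near bus stop")
b = ws.create("analyst_B", "Suspect has no alibi")
ws.move(b, (10, 20))   # A reorganizes B's Sticky
print(len(ws.snapshot("analyst_A")))                          # 2
print(len(ws.snapshot("analyst_A", implicit_sharing=False)))  # 1
```

The two study conditions then differ only in the `implicit_sharing` flag passed when rendering each analyst's view.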
Participants
Participants consisted of 68 students at a large northeastern US university (22 female, 46 male; 85% U.S. born). Participants were assigned randomly into pairs, and each pair was randomly assigned to either the implicit sharing or the no implicit sharing condition.
Materials
The experimental materials were adapted from Balakrishnan et al. [1] and consisted of a set of practice materials and a primary task. A practice booklet with a set of practice crime case documents introduced participants to the process of crime analysis and highlighted the importance of looking for motive, opportunity, and the lack of alibi. The primary task was created to be reasonable, but difficult, for novice analysts to complete in a limited time. The main task materials were a set of fictional homicide cases. There were six cold (unresolved) cases and one current (active) case. Each of the cold cases included a single document with a summary of the crime: victim, time, method, and witness interviews. Four of these six cold cases were “serial killer” cases. These four had a similar crime pattern (e.g., killed by a blunt instrument). The active case consisted of nine documents: a cover sheet, coroner's report, and witness and suspect interviews. Additional documents included three bus route timetables and a police department organization chart.
The documents were available through the SAVANT document library and were split between the two participants such that each had access to 3 cold cases (2 serial killer
Figure 2. The Analysis Space showing Stickies that are implicitly shared between analysts (color-coded by user), connections between Stickies via arrows, and piles of multiple Stickies. Explicit sharing is supported via the chat box at the bottom left.
setting, increased common ground was associated with higher perceptions of the team process [12]. If implicit sharing mitigates barriers of collaborating on a task, then individuals should have a better collaboration experience, compared to when implicit sharing is not available:
H3. Participants using implicit sharing of notes will rate their team experience higher than participants without implicit sharing of notes.
By changing the amount and type of information available, implicit sharing may affect the mental demand of the crime-solving task. On the one hand, a shared workspace may reduce the time and effort put in the analysis task compared to working alone [15]. Further, implicit sharing helps establish common ground [39], and thus might reduce analysts' need to explicitly formulate messages and communicate information, thereby reducing their workload. On the other hand, shared workspaces might increase communication costs [19]. Seeing partners' activity might divert attention from one's own thoughts and increase the need for explicit discussion of process and data, especially when shared insights are connected to unshared data [13]. Since the direction of impact is unclear, we pose two research questions:
RQ1. How will implicit sharing of notes affect participants’ cognitive workload?
RQ2: How will the availability of implicit sharing affect the amount of information exchanged via explicit channels?
METHOD
We designed a laboratory experiment in which two-person teams attempted to solve a set of crimes in a simulated geographically distributed environment. The crime cases were distributed between the partners, with a hidden serial killer that had to be identified. Half of the pairs worked on the task using an interface that provided implicit sharing of notes. The other half worked on the task using an interface without implicit sharing. We collected data via post-task surveys, participant reports, and computer logs.
Research Prototype Tool
We adapted Goyal et al.'s SAVANT system [17] for use in the current study. SAVANT has two main components, the Document Space (Figure 1) and the Analysis Space (Figure 2). The Document Space has a number of features for data exploration and discovery. A document library and a reader pane are for viewing and reading crime case reports, witness reports, testimonials, and other documents. A network diagram visualizes connections between documents based on commonly identified entities like persons, locations, and weapon types. The Document Space also provides a map of the area where crimes and events were reported and a timeline to assist in tracking events over time. Users can highlight and create annotations in the text of documents, locations on the map, and events in the timeline.
Such annotations automatically appear in the Analysis Space, an area for analysts to iteratively make and reorganize their notes until they see emerging patterns that lead to
Figure 1. The Document Space showing (clockwise, from top-left) the directory of crime case documents, a tabbed reader pane for reading case documents, a visual graph of connections based on common entities in the dataset, a map to identify locations of crimes and events, and a timeline to track events.
Document Space | Analysis Space
Introduction — SAVANT Prototype
Directory
Timeline
Diagram
Map
Document
Sticky Note text: "I think this must be investigated further"
Document Space
Chat
Sticky Note text: "I think this must be investigated further"
Analysis Space
Chat | Sticky Note | Pile | Connection
Sticky Note text: "I think this must be investigated further"
Analysis Space
Explicit Sharing | Implicit Sharing
Sticky Note text: "I think this must be investigated further"
Introduction — Testing Implicit Sharing
Introduction — Design
68 Participants, 34 Teams
2 Conditions: 1. No Implicit Sharing; 2. With Implicit Sharing
Practice Session
Find the name of the serial killer in 60 minutes
Self-Written Report (Arrests, Clues)
Survey
Introduction — Materials
7 Crime Cases (adapted from Goyal et al., CHI 2013): Facts, Interviews
20 Case Documents: Partner 1 | Shared | Partner 2
40 Suspects
1 Serial Killer
Introduction — Does implicit sharing help collaborative analysis?
Introduction — H1. Task Performance
Team Experience
The post-task survey contained ten survey questions about the quality of the collaboration (e.g., “It was easy to discuss the cases with my partner,” “My partner and I agreed about how to solve the case”). These ten questions formed a reliable scale (Cronbach's α=0.84) and were averaged to create a team experience score, to answer H3. This measure is similar to [12, 13], who used a post-task questionnaire to assess quality of communication within the group.
Cognitive Load
In order to answer RQ1, the post-task survey contained five questions based on the NASA TLX [18] that asked participants to rate how mentally demanding, temporally demanding, effortful, and frustrating the task was, as well as their subjective performance. After inverting the performance question, these five responses formed a reliable scale (Cronbach's α=0.76). Participants' responses were averaged to create one measure of cognitive load.
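The scale construction used in these measures — averaging items after inverting the performance question, and checking internal consistency with Cronbach's α — can be sketched as follows. This is illustrative code, not the authors'; the 7-point scale maximum and the sample data are assumptions:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: one row per participant, each row holding that
    participant's ratings on the k items of a scale.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(responses[0])
    item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def cognitive_load(mental, temporal, effort, frustration, performance,
                   scale_max=7):
    """Composite workload: invert the performance item (higher perceived
    performance -> lower load), then average all five items.
    The 7-point scale maximum is an assumption, not stated in the text."""
    inverted = scale_max + 1 - performance
    return (mental + temporal + effort + frustration + inverted) / 5

print(cognitive_load(4, 5, 6, 3, 2))  # 4.8
print(cronbach_alpha([[1, 2, 1], [2, 3, 2], [3, 4, 3]]))  # 1.0
```

Perfectly covarying items (as in the toy data) give α = 1.0; the study's reported α values of 0.76–0.84 indicate acceptable but imperfect consistency.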
Explicit Sharing
SAVANT logged the chat transcripts for each session, which were then cleaned to remove extraneous information like participant identification and timestamps. To answer RQ2, explicit sharing was measured at the pair level as the number of words exchanged in the chat box during a session. This is similar to [19], who assessed the number of chat lines exchanged during the experimental session.
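The word-count measure might be computed along these lines. This is a sketch; the `[hh:mm] P1:` transcript format shown is an assumption for illustration, not SAVANT's actual log format:

```python
import re

def words_exchanged(transcript):
    """Pair-level explicit-sharing measure: total words in the chat log
    after stripping timestamps and participant IDs (format assumed)."""
    total = 0
    for line in transcript.splitlines():
        message = re.sub(r"^\[\d{2}:\d{2}\]\s*P\d+:\s*", "", line)
        total += len(message.split())
    return total

log = ("[12:03] P1: check the bus timetable\n"
       "[12:04] P2: same blunt weapon in two cases")
print(words_exchanged(log))  # 10
```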
RESULTS
We present our findings in six sections. First, we discuss the effects of implicit sharing on our three task performance measures. We then consider how it affected subjective ratings of SAVANT features, actual use of SAVANT features, perceptions of team experience, cognitive load, and explicit information sharing.
Task Performance
H1 proposed that pairs would perform better when implicit sharing was available than when it was not available. To test this hypothesis, we conducted mixed model ANOVAs, using clue recall and clue recognition as our dependent measures. In these models, participant nested within pair was a random factor and condition (implicit vs. no implicit sharing) was a fixed factor.
Clue recall. There was a significant effect of implicit vs. no implicit sharing on the number of clues participants recalled in the written report (F[1, 66]=6.54, p=0.01). As shown on the left side of Figure 3a, participants in the implicit sharing condition recalled more clues (M=3.47, SE=0.37) than those without implicit sharing (M=2.11, SE=0.37). The clue recall difference was also significant at the team level (t[32]=2.03, p=0.05), with teams with implicit sharing recalling more clues (M=4.71, SE=0.58) than teams without implicit sharing (M=3.11, SE=0.52). Given the large Cohen's d (3.68 for individuals, 2.90 for teams) and the fact that these clues were buried in 20 documents with many information pieces, we regard this as a meaningful increase in clue recall, paralleling other work that has found increases in task-relevant recall in shared workspaces [13, 26].
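The model structure for this analysis — condition as a fixed factor, pair as the grouping (random) factor — might be expressed with statsmodels' `MixedLM` as below. The data here are synthetic, with means near those reported; this is not the authors' code, which likely used different software:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data shaped like the study: 34 pairs, two members each,
# condition varying between pairs; cell means chosen near the reported ones.
rng = np.random.default_rng(0)
rows = []
for pair in range(34):
    condition = "implicit" if pair < 17 else "none"
    base = 3.47 if condition == "implicit" else 2.11
    for member in range(2):
        rows.append({"pair": pair, "condition": condition,
                     "clues": base + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Condition as fixed factor; pair as the random grouping factor
# (participants nested within pairs).
result = smf.mixedlm("clues ~ condition", df, groups=df["pair"]).fit()
print(result.params["condition[T.none]"])  # negative: fewer clues recalled
```

With `implicit` as the (alphabetical) reference level, a negative `condition[T.none]` coefficient mirrors the reported direction of the effect.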
Clue recognition. The right-hand side of Figure 3a shows participants' performance on the multiple-choice clue recognition questions in the post-task survey. There were no statistically significant differences in clue recognition (F[1, 66]=3.52, p=0.06) between individuals in the implicit sharing (M=3.20, SE=0.28) and no implicit sharing conditions (M=2.44, SE=0.28). This was also consistent at the team level (t[32]=0.80, p=0.42) between teams with implicit sharing (M=5.35, SE=0.41) and no implicit sharing (M=4.82, SE=0.51).
Solving the case. We also examined whether interface condition affected the likelihood that participants could solve the crime. Since solving the case was a binary dependent variable, we ran a binomial logistic regression with condition as the independent variable and pair as the random effect variable. There was no significant difference between the implicit sharing condition (M=0.62, SE=0.11) and the no implicit sharing condition (M=0.74, SE=0.01; Wald Chi-Square [1, 68]=0.57, p=0.45). Sharing a subset of knowledge manually did not improve answer accuracy in [13], but sharing knowledge implicitly in a small experiment did increase answer accuracy in [19].
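The binary-outcome analysis might be sketched as follows, again with synthetic data whose solve rates are near the reported 0.62 vs. 0.74. For simplicity this sketch omits the pair random effect the authors included (that would require a mixed GLM, e.g. statsmodels' `BinomialBayesMixedGLM`):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "condition": ["implicit"] * 34 + ["none"] * 34,
    # Bernoulli outcomes drawn near the reported solve rates
    "solved": np.concatenate([rng.binomial(1, 0.62, 34),
                              rng.binomial(1, 0.74, 34)]),
})
# Plain logistic regression of solved ~ condition (random effect omitted)
fit = smf.logit("solved ~ condition", df).fit(disp=0)
print(fit.pvalues["condition[T.none]"])
```

With effect sizes this small and n=68, the p-value typically lands well above 0.05, consistent with the null result reported.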
Perception of Usefulness of SAVANT Features
Figure 3. (a) Task performance, (b) Perceived usefulness of Stickies and Analysis Space, and (c) Number of connections and piles made in a session, each by interface condition. Error bars represent standard errors of the mean.
Clue Recall: p=0.01 | Clue Recognition: p=0.06 | No significant difference in Serial Killer identification
Introduction — H2. User Experience
Sticky utility, p<0.001
Analysis Space utility, p<0.001
Combination of channels rated higher
Team Experience The post-task survey contained ten survey questions about the quality of the collaboration (e.g., “It was easy to discuss the cases with my partner,” “My partner and I agreed about how to solve the case”). These ten questions formed a reliable scale (Cronbach’s α=0.84) and were averaged to create a team experience score, to answer H3. This measure is similar to [12,13], who used a post-task questionnaire to assess quality of communication within the group.
Cognitive Load In order to answer RQ1, the post-task survey contained five questions based on the NASA TLX [18] that asked participants to rate how mentally demanding, temporally demanding, effortful, and frustrating the task was, as well as their subjective performance. After inverting the performance question, these five responses formed a reliable scale (Cronbach’s α=0.76). Participants’ responses were averaged to create one measure of cognitive load.
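As a rough illustration of the reverse-coding and reliability check described above, the sketch below computes Cronbach's α and a composite score on invented ratings; the 1–7 scale and all the data are assumptions for illustration, not the study's materials.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(20, 1))          # latent "load" per participant
# Four demand-style items track the latent load; the performance item runs
# the opposite way (higher = better), as on the NASA TLX.
demand = np.clip(base + rng.integers(-1, 2, size=(20, 4)), 1, 7)
performance = np.clip(8 - base[:, 0] + rng.integers(-1, 2, size=20), 1, 7)
ratings = np.column_stack([demand, 8 - performance]).astype(float)  # invert performance
alpha = cronbach_alpha(ratings)
load = ratings.mean(axis=1)                      # one cognitive-load score each
```

Averaging only makes sense after the performance item is inverted; otherwise it would pull the composite in the wrong direction and depress α.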
Explicit Sharing SAVANT logged the chat transcripts for each session, which were then cleaned to remove extraneous information like participant identification and timestamps. To answer RQ2, explicit sharing was measured at the pair level as the number of words exchanged in the chat box during a session. This is similar to [19], who assessed the number of chat lines exchanged during the experimental session.
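The pair-level measure described above is a word count over cleaned chat lines. The log format below (`"<time> <id>: message"`) is an assumption for illustration, not SAVANT's actual logging format.

```python
import re

def explicit_sharing_words(raw_lines):
    """Count words typed into the chat after stripping metadata."""
    total = 0
    for line in raw_lines:
        message = line.split(": ", 1)[-1]        # drop timestamp + participant id
        total += len(re.findall(r"\S+", message))
    return total

log = [
    "00:01 P1a: did you see the hotel receipt",
    "00:02 P1b: yes, it links Smith to the second case",
]
words = explicit_sharing_words(log)              # 6 + 8 = 14 words of chat
```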
RESULTS We present our findings in six sections. First, we discuss the effects of implicit sharing on our three task performance measures. We then consider how it affected subjective ratings of SAVANT features, actual use of SAVANT features, perceptions of team experience, cognitive load, and explicit information sharing.
Task Performance H1 proposed that pairs would perform better when implicit sharing was available than when it was not available. To test this hypothesis, we conducted mixed model ANOVAs, using clue recall and clue recognition as our dependent measures. In these models, participant nested within pair
was a random factor and condition (implicit vs. no implicit sharing) was a fixed factor.
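The mixed-model setup just described can be approximated with statsmodels' `mixedlm`: with one recall score per participant, "participant nested within pair" reduces to a random intercept per pair. The data below are synthetic stand-ins with an assumed effect size, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_pairs = 34
pair = np.repeat(np.arange(n_pairs), 2)           # 2 participants per pair
condition = np.repeat(rng.permutation([0] * 17 + [1] * 17), 2)
pair_noise = rng.normal(0, 0.8, n_pairs)[pair]    # shared pair-level variation
recall = 2.1 + 1.4 * condition + pair_noise + rng.normal(0, 1.5, 2 * n_pairs)

df = pd.DataFrame({"recall": recall, "condition": condition, "pair": pair})
# Random intercept per pair; condition is the fixed factor.
model = smf.mixedlm("recall ~ condition", df, groups="pair").fit()
effect = model.params["condition"]                # estimated condition effect
```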
Clue recall. There was a significant effect of implicit vs. no implicit sharing on the number of clues participants recalled in the written report (F[1, 66]=6.54, p=0.01). As shown on the left side of Figure 3a, participants in the implicit sharing condition recalled more clues (M=3.47, SE=0.37) than those without implicit sharing (M=2.11, SE=0.37). Clue recall difference was also significant at the team level (t[32]=2.03, p=0.05), with teams with implicit sharing recalling more clues (M=4.71, SE=0.58) than teams without implicit sharing (M=3.11, SE=0.52). Given the large Cohen’s d (3.68 for individuals, 2.90 for teams) and the fact that these clues were buried in 20 documents with many information pieces, we regard this as a meaningful increase in clue recall, paralleling other work that has found increases in task-relevant recall in shared workspaces [13, 26].
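Cohen's d as cited above is a standardized mean difference: the gap between group means divided by a pooled standard deviation. The numbers below are toy values chosen for a clean result, not the study's data.

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of two groups."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Toy groups: means 3.8 vs 2.8, pooled SD sqrt(0.7), so d is about 1.2.
a = np.array([3.0, 4.0, 5.0, 4.0, 3.0])
b = np.array([2.0, 3.0, 4.0, 3.0, 2.0])
d = cohens_d(a, b)
```

By the usual convention, d around 0.8 or above is considered a large effect.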
Clue recognition. The right-hand side of Figure 3a shows participants’ performance on the multiple choice clue recognition questions in the post-task survey. There were no statistically significant differences in clue recognition (F[1, 66]=3.52, p=0.06) between individuals in the implicit sharing (M=3.20, SE=0.28) and no implicit sharing conditions (M=2.44, SE=0.28). This was also consistent at the team level (t[32]=0.80, p=0.42) between teams with implicit sharing (M=5.35, SE=0.41) and no implicit sharing (M=4.82, SE=0.51).
Solving the case. We also examined whether interface condition affected the likelihood that participants could solve the crime. Since solving the case was a binary dependent variable, we ran a binomial logistic regression with condition as the independent variable and pair as the random effect variable. There was no significant difference between the implicit sharing condition (M=0.62, SE=0.11) and the no implicit sharing condition (M=0.74, SE=0.01; Wald Chi Square [1, 68]=0.57, p=0.45). Sharing a subset of knowledge manually did not improve answer accuracy in [13], but sharing knowledge implicitly in a small experiment did increase answer accuracy in [19].
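The paper's model also includes pair as a random effect; the dependency-free sketch below fits only the plain fixed-effects logistic regression, by Newton-Raphson, on invented solve counts that mirror the reported rates (about 0.62 vs. 0.74).

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Logistic regression by Newton-Raphson; a simplification of the
    paper's model (the pair random effect is omitted here)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)                       # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Invented counts: 21/34 solved with implicit sharing, 25/34 without.
cond = np.array([1] * 34 + [0] * 34)
solved = np.concatenate([np.ones(21), np.zeros(13), np.ones(25), np.zeros(9)])
X = np.column_stack([np.ones(68), cond])      # intercept + condition
beta = logit_fit(X, solved)
odds_ratio = np.exp(beta[1])                  # equals the sample odds ratio
```

With a single binary predictor, the fitted odds ratio reproduces the 2x2 sample odds ratio exactly, which is a handy sanity check for the solver.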
Perception of Usefulness of SAVANT features
Figure 3. (a) Task performance, (b) Perceived usefulness of Stickies and Analysis Space, and (c) Number of connections and piles made in a session, each by interface condition. Error bars represent standard errors of the mean.
H2. User Experience
~2x connections, ~3x piles, ~2x manipulations
Visibility increased adoption
Research Questions
RQ1. Did cognitive load increase with implicit sharing?
Research Questions
RQ1. No increase in cognitive load
Balance: higher information, less effort to share
Research Questions
RQ2. Did explicit sharing decrease with implicit sharing?
Research Questions
RQ2. No decrease in explicit sharing
Primary vs. secondary channel: explicit sharing is the driving force; implicit sharing is an aid
Implicit Sharing & Explicit Sharing
“The chat was easily the most helpful because it allowed us to communicate and tell each other specifics about the case. The Stickies were very useful also because they allowed us to make connections between the information we both had independent of talking with each other. [Stickies] allowed us to work more efficiently than wasting both of our time.” (P8, Male)
Implicit Sharing & Explicit Sharing
“I used the Stickies as jumping off points for conversations with my partner - I would see her Sticky and then ask her to fill in some details that she may have skipped over since she had access to certain documents that I did not.” (P15, Female)
Stickies as Visual Metaphor
“The Stickies enabled a connection between my partner and I, we could see each other’s train of thoughts and methods of organization. I used the connecting lines for the Stickies to show myself and my partner the connections that I was seeing.” (P27, Female)
Appropriation of Stickies
“I simply piled them together and placed them in strategic positions. We used two stickies sometimes for the same case. Each sticky would have another side of the case like emotional and the other would be factual.” (P61, Female)
Designing for Implicit Sharing
Enable implicit sharing as a channel that supports the explicit communication channel, to overcome the limitations of explicit sharing.
Use technologies like natural language processing to parse implicitly shared content into categories when data scales up, to decide when and how to implicitly share what.
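The NLP suggestion above could start as simply as keyword bucketing of shared notes; the sketch below is entirely hypothetical (the categories, keywords, and sample note are invented for illustration, and a real system would use a proper NLP toolkit, e.g. named-entity recognition).

```python
# Hypothetical categories and keyword sets for bucketing shared notes.
CATEGORIES = {
    "people": {"suspect", "witness", "victim", "mr", "ms"},
    "places": {"hotel", "station", "airport", "office"},
    "times": {"monday", "noon", "midnight", "pm", "am"},
}

def categorize(note: str) -> list[str]:
    """Return every category whose keyword set overlaps the note's words."""
    words = {w.strip(".,").lower() for w in note.split()}
    return sorted(cat for cat, keys in CATEGORIES.items() if words & keys)

cats = categorize("Witness saw the suspect at the hotel around midnight.")
```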
UX Research Contribution
Instead of testing one whole black box, one should test each feature separately.
- Effects of Visualization & Note-taking on Sensemaking, Goyal, N., Leshed, G., Fussell, S.R. ACM CHI 2013
Sensemaking Contribution
NLP + user-generated insights in the cloud => learn connections + create recommendations
- Effects of Implicit Sharing on Collaborative Sensemaking, Goyal, N., Leshed, G., Cosley, D., Fussell, S.R. ACM CHI 2014
Theoretical Contribution
But beware of automated sharing and recommendations: tunnel vision.*
* Weick, K. 1995. Sensemaking in Organisations. London: Sage; Effects of Implicit Sharing on Collaborative Sensemaking, Goyal, N., Leshed, G., Cosley, D., Fussell, S.R. ACM CHI 2014
Future Designs
Visualization of tunnel vision + crowdsourcing for explicit disconfirmation
Research Interests
E-Waste: IEEE ICECE 2007
Mobile Collaboration: ECSCW 2009, INTERACT 2009
Language Learning: UNO/ACM ICTD 2010
Collaborative Sensemaking: ACM CHI 2013, CHI 2014; CSCW 2013, CSCW 2016
Thanks!
Research Assistants: Wei Xin Yuan, Eric Swidler, Andre Anderson
Associated Faculty: Gilly Leshed, Dan Cosley, Sue Fussell
Funding Agency: National Science Foundation #IIS-0968450
NiteshGoyal,GillyLeshed,DanCosley,SusanFussell:EffectsofImplicitSharingonCollabora+veAnalysis