Recent activity growth

Mar 18, 2020


Page 1

Recent activity growth

Page 2

From all over the world

Page 3

Slack activity is now sustained

(chart: Slack activity across the 1st, 2nd, and 3rd calls)

Page 4

community building

prototyping

design

science outreach

electronics

biology

biotechnology

neuroscience

python

research

chemistry

data science

data analysis

open science

citizen science

citizen participation

citizen engagement

modeling skills

writing skills

computational biology

synthetic biology

matlab

open data

software development

javascript

communication

social media

product owner

molecular & cellular biology

writing

mathematics education

mathematics

neuroimaging

cognitive neuroscience

c++

java

management

marketing

molecular genetics

web development

microfluidics

molecular virology

bioinformatics

molecular biology

cancer informatics

metagenomics

crowdsourcing

big data

project management

statistics

artificial intelligence

r

design thinking

back-end programming

machine learning

sql

python (beginner)

agronomy

data visualisation

c#

biochemistry

microbiology

science communication

education

scrum

project management

english

philosophy

epistemology

entrepreneurship

scientific writing

cloning

data engineering

entrepreneurial spirit

open source software

product development

product management

product design

sustainability

phylogenetics

spanish-speaking

english speaking

project management skills

node.js

english

french

bioart

html

software architecture

linux

photography

structural biology

genetics

microscopy

art

pharmacology

front-end web development

git

biotechnologies

iot

embedded systems

cad

3d printing

arduino

pandas

android

Page 5

A metastudy!

task (correct, error, inserted error, and corrected error) to allow typists to distinguish sources of errors and correct responses and, therefore, provide a stronger test of illusions of authorship. We asked 24 skilled typists (WPM = 70.7 ± 16.4) to type 600 words, each of which was followed by a four-alternative explicit report screen. Typists typed 91.8% of the words correctly. Mean interkeystroke intervals, plotted in Fig. 3A, show post-error slowing for incorrect responses (F(1,138) = 117.7, p < 0.01) and corrected errors (F(1,138) = 120.0, p < 0.01), but not for inserted errors (F < 1.0), indicating that inner-loop detection distinguishes between actual errors and correct responses.
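Post-error slowing of this kind is straightforward to compute from a keystroke log: average the interkeystroke intervals that follow each response type and compare them with the interval following correct responses. A minimal sketch; the trial records and the numbers in them are illustrative, not the study's data:

```python
import statistics

# Hypothetical trial log: response type of word n, paired with the mean
# interkeystroke interval (ms) observed on the following word.
trials = [
    ("correct", 141), ("correct", 138), ("error", 176),
    ("corrected_error", 181), ("inserted_error", 140), ("correct", 144),
]

def mean_iki_after(trials, response_type):
    """Mean interkeystroke interval following a given response type."""
    ikis = [iki for kind, iki in trials if kind == response_type]
    return statistics.mean(ikis)

baseline = mean_iki_after(trials, "correct")
for kind in ("error", "corrected_error", "inserted_error"):
    slowing = mean_iki_after(trials, kind) - baseline
    print(f"post-{kind} slowing: {slowing:+.1f} ms")
```

With real data, slowing after errors and corrected errors but not after inserted errors would reproduce the dissociation described above.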

Explicit detection probabilities, plotted in Fig. 3B, show good discrimination between correct and error responses. For correct responses, typists said “correct” more than “error” [t(23) = 97.29, p < 0.01]; for error responses, typists said “error” more than “correct” [t(23) = 8.22, p < 0.01]. Typists distinguished actual errors from inserted errors well, avoiding an illusion of authorship. They said “error” more than “inserted” for actual errors [t(23) = 7.06, p < 0.01] and “inserted” more than “error” for inserted errors [t(23) = 14.75, p < 0.01]. However, typists showed a strong illusion of authorship with corrected errors. They were just as likely to call them correct responses as corrected errors [t(23) = 1.38].
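The bracketed comparisons above are paired t-tests across typists: the statistic is the mean within-typist difference divided by its standard error. A self-contained sketch of that computation, on made-up per-typist report proportions (not the study's data):

```python
import math
import statistics

def paired_t(xs, ys):
    """Paired t statistic: mean within-pair difference over its standard error."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return statistics.mean(diffs) / se

# Hypothetical per-typist proportions of "correct" vs. "error" reports
# on trials where the response really was correct.
p_correct = [0.97, 0.95, 0.98, 0.96, 0.94, 0.97]
p_error   = [0.02, 0.03, 0.01, 0.02, 0.04, 0.02]

t = paired_t(p_correct, p_error)
print(f"t({len(p_correct) - 1}) = {t:.2f}")
```

A large positive t, as here, corresponds to typists saying “correct” far more often than “error” on genuinely correct trials.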

The post-error slowing and post-trial report data show a dissociation between inner- and outer-loop error detection. We assessed the dissociation further by comparing post-error slowing on trials in which typists did and did not experience illusions of authorship (21). The pattern of post-error slowing was the same for both sets of trials (fig. S6), suggesting that the pattern in Fig. 3A is representative of all trials.

The three experiments found strong dissociations between explicit error reports and post-error slowing. These dissociations are consistent with the hierarchical error-detection mechanism that we proposed, with an outer loop that mediates explicit reports and an inner loop that mediates post-error slowing. This nested-loop description of error detection is consistent with hierarchical models of cognitive control in typewriting (9, 10, 15–17) and with models of hierarchical control in other complex tasks (2, 8, 22). Speaking, playing music, and navigating through space may all involve inner loops that take care of the details of performance (e.g., uttering phonemes, playing notes, and walking) and outer loops that ensure that intentions are fulfilled (e.g., messages communicated, songs performed, and destinations reached). Hierarchical control may be prevalent in highly skilled performers who have had enough practice to develop an autonomous inner loop. Previous studies of error detection in simple tasks may describe inner-loop processing. The novel contribution of our research is to dissociate the outer loop from the inner loop.

The three experiments demonstrate cognitive illusions of authorship in skilled typewriting (11–14). Typists readily take credit for correct output on the screen, interpreting corrected errors as their own correct responses. They take the blame for inserted errors, as in the first and second experiments, but they also blame the computer, as in the third experiment. These illusions are consistent with the hierarchical model of error detection, with the outer loop assigning credit and blame and the inner loop doing the work of typing (10, 17). Thus, illusions of authorship may be a hallmark of hierarchical control systems (2, 11, 22, 23).

References and Notes
1. P. M. A. Rabbitt, J. Exp. Psychol. 71, 264 (1966).
2. D. A. Norman, Psychol. Rev. 88, 1 (1981).
3. C. B. Holroyd, M. G. H. Coles, Psychol. Rev. 109, 679 (2002).
4. N. Yeung, M. M. Botvinick, J. D. Cohen, Psychol. Rev. 111, 931 (2004).
5. W. J. Gehring, B. Goss, M. G. H. Coles, D. E. Meyer, E. Donchin, Psychol. Sci. 4, 385 (1993).
6. S. Dehaene, M. I. Posner, D. M. Tucker, Psychol. Sci. 5, 303 (1994).
7. C. S. Carter et al., Science 280, 747 (1998).
8. K. S. Lashley, in Cerebral Mechanisms in Behavior, L. A. Jeffress, Ed. (Wiley, New York, 1951), pp. 112–136.
9. T. A. Salthouse, Psychol. Bull. 99, 303 (1986).
10. G. D. Logan, M. J. C. Crump, Psychol. Sci. 20, 1296 (2009).
11. T. I. Nielsen, Scand. J. Psychol. 4, 225 (1963).
12. M. M. Botvinick, J. D. Cohen, Nature 391, 756 (1998).
13. D. M. Wegner, The Illusion of Conscious Will (MIT Press, Cambridge, MA, 2002).
14. G. Knoblich, T. T. J. Kircher, J. Exp. Psychol. Hum. Percept. Perform. 30, 657 (2004).
15. D. E. Rumelhart, D. A. Norman, Cogn. Sci. 6, 1 (1982).
16. L. H. Shaffer, Psychol. Rev. 83, 375 (1976).
17. X. Liu, M. J. C. Crump, G. D. Logan, Mem. Cognit. 38, 474 (2010).
18. A. M. Gordon, J. F. Soechting, Exp. Brain Res. 107, 281 (1995).
19. J. Long, Ergonomics 19, 93 (1976).
20. P. Rabbitt, Ergonomics 21, 945 (1978).
21. Materials and methods are available as supporting material on Science Online.
22. M. M. Botvinick, Trends Cogn. Sci. 12, 201 (2008).
23. R. Cooper, T. Shallice, Cogn. Neuropsychol. 17, 297 (2000).
24. We thank J. D. Schall for comments on the manuscript. This research was supported by grants BCS 0646588 and BCS 0957074 from the NSF.

Supporting Online Material
www.sciencemag.org/cgi/content/full/330/6004/683/DC1
Materials and Methods
SOM Text
Figs. S1 to S6
References

5 April 2010; accepted 13 September 2010
10.1126/science.1190483

Evidence for a Collective Intelligence Factor in the Performance of Human Groups

Anita Williams Woolley,1* Christopher F. Chabris,2,3 Alex Pentland,3,4 Nada Hashmi,3,5 Thomas W. Malone3,5

Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.

As research, management, and many other kinds of tasks are increasingly accomplished by groups—working both face-to-face and virtually (1–3)—it is becoming ever more important to understand the determinants of group performance. Over the past century, psychologists made considerable progress in defining and systematically measuring intelligence in individuals (4). We have used the statistical approach they developed for individual intelligence to systematically measure the intelligence of groups. Even though social psychologists and others have studied for decades how well groups perform specific tasks (5, 6), they have not attempted to measure group intelligence in the same way individual intelligence is measured—by assessing how well a single group can perform a wide range of different tasks and using that information to predict how that same group will perform other tasks in the future. The goal of the research reported here was to test the hypothesis that groups, like individuals, do have characteristic levels of intelligence, which can be measured and used to predict the groups’ performance on a wide variety of tasks.

Although controversy has surrounded it, the concept of measurable human intelligence is based on a fact that is still as remarkable as it was to Spearman when he first documented it in 1904 (7): People who do well on one mental task tend to do well on most others, despite large variations in the tests’ contents and methods of administration (4). In principle, performance on cognitive tasks could be largely uncorrelated, as one might expect if each relied on a specific set of capacities that was not used by other tasks (8). It could even be negatively correlated, if practicing to improve one task caused neglect of others (9). The empirical fact of general cognitive ability as first demonstrated by Spearman is now, arguably, the most replicated result in all of psychology (4).

1Carnegie Mellon University, Tepper School of Business, Pittsburgh, PA 15213, USA. 2Union College, Schenectady, NY 12308, USA. 3Massachusetts Institute of Technology (MIT) Center for Collective Intelligence, Cambridge, MA 02142, USA. 4MIT Media Lab, Cambridge, MA 02139, USA. 5MIT Sloan School of Management, Cambridge, MA 02142, USA.

*To whom correspondence should be addressed. E-mail: [email protected]

29 OCTOBER 2010 VOL 330 SCIENCE www.sciencemag.org 686

REPORTS

Science 2010

“it would seem to be much easier to raise the intelligence of a group than an individual. Could a group’s collective intelligence be increased by, for example, better electronic collaboration tools?”

Evidence of general intelligence comes from the observation that the average correlation among individuals’ performance scores on a relatively diverse set of cognitive tasks is positive, the first factor extracted in a factor analysis of these scores generally accounts for 30 to 50% of the variance, and subsequent factors extracted account for substantially less variance. This first factor extracted in an analysis of individual intelligence tests is referred to as general cognitive ability, or g, and it is the main factor that intelligence tests measure. What makes intelligence tests of substantial practical (not just theoretical) importance is that intelligence can be measured in an hour or less, and is a reliable predictor of a very wide range of important life outcomes over a long span of time, including grades in school, success in many occupations, and even life expectancy (4).
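The “30 to 50% of the variance” figure is the largest eigenvalue of the inter-task correlation matrix divided by its trace (which, for a correlation matrix, equals the number of tasks). A minimal sketch using power iteration on a made-up correlation matrix; the correlations are illustrative only:

```python
# Pure-Python power iteration for the largest eigenvalue of a small,
# symmetric positive-definite correlation matrix; its share of the trace
# is the variance explained by the first (general) factor.
def largest_eigenvalue(m, iters=200):
    n = len(m)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector
    mv = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * mv[i] for i in range(n))

# Hypothetical positive correlations among three cognitive tasks.
corr = [
    [1.0, 0.4, 0.5],
    [0.4, 1.0, 0.3],
    [0.5, 0.3, 1.0],
]

share = largest_eigenvalue(corr) / len(corr)
print(f"first factor explains {share:.0%} of the variance")
```

Uniformly positive correlations, as in the g literature, push the first eigenvalue well above the others, which is exactly the pattern the paragraph describes.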

By analogy with individual intelligence, we define a group’s collective intelligence (c) as the general ability of the group to perform a wide variety of tasks. Empirically, collective intelligence is the inference one draws when the ability of a group to perform one task is correlated with that group’s ability to perform a wide range of other tasks. This kind of collective intelligence is a property of the group itself, not just the individuals in it. Unlike previous work that examined the effect on group performance of the average intelligence of individual group members (10), one of our goals is to determine whether the collective intelligence of the group as a whole has predictive power above and beyond what can be explained by knowing the abilities of the individual group members.

The first question we examined was whether collective intelligence—in this sense—even exists. Is there a single factor for groups, a c factor, that functions in the same way for groups as general intelligence does for individuals? Or does group performance, instead, have some other correlational structure, such as several equally important but independent factors, as is typically found in research on individual personality (11)?

To answer this question, we randomly assigned individuals to groups and asked them to perform a variety of different tasks (12). In Study 1, 40 three-person groups worked together for up to 5 hours on a diverse set of simple group tasks plus a more complex criterion task. To guide our task sampling, we drew tasks from all quadrants of the McGrath Task Circumplex (6, 12), a well-established taxonomy of group tasks based on the coordination processes they require. Tasks included solving visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. At the beginning of each session, we measured team members’ individual intelligence. And, as a criterion task at the end of each session, each group played checkers against a standardized computer opponent.

The results support the hypothesis that a general collective intelligence factor (c) exists in groups. First, the average inter-item correlation for group scores on different tasks is positive (r = 0.28) (Table 1). Next, factor analysis of team scores yielded one factor with an initial eigenvalue accounting for more than 43% of the variance (in the middle of the 30 to 50% range typical in individual intelligence tests), whereas the next factor accounted for only 18%. Confirmatory factor analysis supported the fit of a single latent factor model with the data [χ2 = 1.66, P = 0.89, df = 5; comparative fit index (CFI) = 0.99, root mean square error of approximation (RMSEA) = 0.01]. Furthermore, when the factor loadings for different tasks on the first general factor are used to calculate a c score for each group, this score strongly predicts performance on the criterion task (r = 0.52, P = 0.01). Finally, the average and maximum intelligence scores of individual group members are not significantly correlated with c [r = 0.19, not significant (ns); r = 0.27, ns, respectively] and not predictive of criterion task performance (r = 0.18, ns; r = 0.13, ns, respectively). In a regression using both individual intelligence and c to predict performance on the criterion task, c has a significant effect (b = 0.51, P = 0.001), but average individual intelligence (b = 0.08, ns) and maximum individual intelligence (b = 0.01, ns) do not (Fig. 1).
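For a two-predictor regression like the one above, standardized weights follow directly from the pairwise correlations. Plugging in the correlations reported for Study 1 (criterion with c, 0.52; criterion with average intelligence, 0.18; c with average intelligence, 0.19) recovers coefficients close to the reported b = 0.51 and b = 0.08 (small differences come from rounding of the inputs):

```python
def standardized_betas(r_y1, r_y2, r_12):
    """Two-predictor standardized regression weights from pairwise correlations."""
    denom = 1 - r_12 ** 2
    b1 = (r_y1 - r_y2 * r_12) / denom
    b2 = (r_y2 - r_y1 * r_12) / denom
    return b1, b2

# Correlations reported for Study 1: criterion task with c, criterion task
# with average member intelligence, and c with average member intelligence.
b_c, b_avg = standardized_betas(0.52, 0.18, 0.19)
print(f"beta(c) = {b_c:.2f}, beta(avg intelligence) = {b_avg:.2f}")
```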

In Study 2, we used 152 groups ranging from two to five members. Our goal was to replicate these findings in groups of different sizes, using a broader sample of tasks and an alternative measure of individual intelligence. As expected, this study replicated the findings of Study 1, yielding a first factor explaining 44% of the variance and a second factor explaining only 20%. In addition, a confirmatory factor analysis suggests an excellent fit of the single-factor model with the data (χ2 = 5.85, P = 0.32, df = 5; CFI = 0.98, NFI = 0.89, RMSEA = 0.03).

In addition, for a subset of the groups in Study 2, we included five additional tasks, for a total of ten. The results from analyses incorporating all ten tasks were also consistent with the hypothesis that a general c factor exists (see Fig. 2). The scree test (13) clearly suggests that a one-factor model is the best fit for the data from both studies [Akaike Information Criterion (AIC) = 0.00 for single-factor solution]. Furthermore, parallel analysis (13) suggests that only factors with an eigenvalue above 1.38 should be retained, and there is only one such factor in each sample. These conclusions are supported by examining the eigenvalues both before and after principal axis extraction, which yields a first factor explaining 31% of
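Parallel analysis, as used above, retains a factor only if its observed eigenvalue exceeds the corresponding eigenvalue obtained from random data of the same shape. A compact sketch of the retention rule; the observed and random eigenvalues below are made up for illustration, except that the 1.38 first-factor threshold comes from the text:

```python
# Parallel analysis: keep factors whose observed eigenvalue beats the
# eigenvalue of the same rank computed from random (noise) data.
observed_eigs = [2.20, 0.90, 0.70, 0.60, 0.60]   # hypothetical, from real scores
random_eigs   = [1.38, 1.15, 1.00, 0.85, 0.62]   # hypothetical noise benchmark

retained = sum(obs > rnd for obs, rnd in zip(observed_eigs, random_eigs))
print(f"factors to retain: {retained}")
```

With only the first observed eigenvalue clearing its noise benchmark, exactly one factor survives, matching the single-c-factor conclusion.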

Table 1. Correlations among group tasks and descriptive statistics for Study 1. n = 40 groups; *P ≤ 0.05; **P ≤ 0.001.

                                1       2       3       4       5       6       7       8      9
1 Collective intelligence (c)
2 Brainstorming               0.38*
3 Group matrix reasoning      0.86**  0.30*
4 Group moral reasoning       0.42*   0.12    0.27
5 Plan shopping trip          0.66**  0.21    0.38*   0.18
6 Group typing                0.80**  0.13    0.50**  0.25*   0.43*
7 Avg member intelligence     0.19    0.11    0.19    0.12   –0.06    0.22
8 Max member intelligence     0.27    0.09    0.33*   0.05   –0.04    0.28    0.73**
9 Video game                  0.52*   0.17    0.38*   0.37*   0.39*   0.44*   0.18    0.13
Minimum                      –2.67    9       2      32     –10.80  148       4.00    8.00   26
Maximum                       1.56   55      17      81      82.40 1169      12.67   15.67   96
Mean                          0      28.33   11.05   57.35   46.92  596.13    8.92   11.67   61.80
SD                            1.00   11.36    3.02   10.96   19.64  263.74    1.82    1.69   17.56

Fig. 1. Standardized regression coefficients for collective intelligence (c) and average individual member intelligence when both are regressed together on criterion task performance in Studies 1 and 2 (controlling for group size in Study 2). Coefficient for maximum member intelligence is also shown for comparison, calculated in a separate regression because it is too highly correlated with individual member intelligence to incorporate both in a single analysis (r = 0.73 and 0.62 in Studies 1 and 2, respectively). Error bars, mean ± SE.


Page 6

What we need: mixed methods (qualitative/quantitative)

—> questionnaires ==> why do people come? ==> do people easily find how to engage? ==> what do they need?

—> gathering data ==> Slack API ==> JOGL API

—> create a dashboard for viz ==> Bokeh/React for data viz

—> improve recommendation/notification system ==> data science expert - Algolia

➡ UX design, data science, data visualisation
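The "gathering data ==> Slack API" step could start as small as pulling message timestamps from a channel and counting activity per week, which a Bokeh dashboard could then plot. A hedged sketch using Slack's `conversations.history` Web API method; the token, channel ID, and weekly bucketing are placeholder choices, not decisions from the slide:

```python
import json
import urllib.request
from collections import Counter
from datetime import datetime, timezone

def fetch_messages(token, channel, limit=200):
    """Fetch recent messages from one channel via conversations.history.

    Pagination via the response's cursor is omitted for brevity.
    """
    url = (f"https://slack.com/api/conversations.history"
           f"?channel={channel}&limit={limit}")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    if not payload.get("ok"):
        raise RuntimeError(payload.get("error", "unknown Slack API error"))
    return payload["messages"]

def messages_per_week(messages):
    """Bucket messages by ISO year-week, using each message's 'ts' epoch field."""
    weeks = Counter()
    for msg in messages:
        ts = datetime.fromtimestamp(float(msg["ts"]), tz=timezone.utc)
        year, week, _ = ts.isocalendar()
        weeks[f"{year}-W{week:02d}"] += 1
    return dict(weeks)
```

`messages_per_week(fetch_messages(token, channel))` yields a week-to-count mapping that could feed the sustained-activity chart on Page 3 or a Bokeh dashboard, as the slide suggests.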