
Open science and the individual researcher


Bram Zandbelt

[email protected]

@bbzandbelt

Download at: http://www.slideshare.net/bramzandbelt/open-science-and-the-individual-researcher

Open science and the individual researcher: one paper and three debates

Bram Zandbelt, 2017

POINT OF VIEW

How open science helps researchers succeed

Abstract
Open access, open data, open source and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices.

DOI: 10.7554/eLife.16800.001

ERIN C MCKIERNAN*, PHILIP E BOURNE, C TITUS BROWN, STUART BUCK, AMYE KENALL, JENNIFER LIN, DAMON MCDOUGALL, BRIAN A NOSEK, KARTHIK RAM, COURTNEY K SODERBERG, JEFFREY R SPIES, KAITLIN THANEY, ANDREW UPDEGROVE, KARA H WOO AND TAL YARKONI

Introduction

Recognition and adoption of open research practices is growing, including new policies that increase public access to the academic literature (open access; Bjork et al., 2014; Swan et al., 2015) and encourage sharing of data (open data; Heimstadt et al., 2014; Michener, 2015; Stodden et al., 2013) and code (open source; Stodden et al., 2013; Shamir et al., 2013). Such policies are often motivated by ethical, moral or utilitarian arguments (Suber, 2012; Willinsky, 2006), such as the right of taxpayers to access literature arising from publicly-funded research (Suber, 2003), or the importance of public software and data deposition for reproducibility (Poline et al., 2012; Stodden, 2011; Ince et al., 2012). Meritorious as such arguments may be, however, they do not address the practical barriers involved in changing researchers’ behavior, such as the common perception that open practices could present a risk to career advancement. In the present article, we address such concerns and suggest that the benefits of open practices outweigh the potential costs.

We take a researcher-centric approach in outlining the benefits of open research practices. Researchers can use open practices to their advantage to gain more citations, media attention, potential collaborators, job opportunities and funding opportunities. We address common myths about open research, such as concerns about the rigor of peer review at open access journals, risks to funding and career advancement, and forfeiture of author rights. We recognize the current pressures on researchers, and offer advice on how to practice open science within the existing framework of academic evaluations and incentives. We discuss these issues with regard to four areas – publishing, funding, resource management and sharing, and career advancement – and conclude with a discussion of open questions.

Publishing

Open publications get more citations

There is evidence that publishing openly is associated with higher citation rates (Hitchcock, 2016). For example, Eysenbach reported that articles published in the Proceedings of the National Academy of Sciences (PNAS) under their open access (OA) option were twice as likely to be cited within 4–10 months and nearly three times as likely to be cited 10–16 months after publication than non-OA articles published …

*For correspondence: [email protected]

Reviewing editor: Peter Rodgers, eLife, United Kingdom

Copyright McKiernan et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.



Overview lab meeting

One paper

Three debates

1. Engaging in open science will boost my career
2. Engaging in open science will accelerate my research
3. Engaging in open science will improve the quality of my research

Each statement will be debated by a proponent and an opponent.

Motivation for discussing this paper and having these debates


Four focus areas


Publishing | Funding | Code & data sharing | Career

Take home message: Open science has many benefits for individual researchers


Open publishing
… comes with greater visibility
… lets you retain author rights and control reuse of materials
… enables you to benefit from transparent peer review
… can be achieved in many ways
… can be low-cost and even free of charge

Practicing open science
… provides you with new funding opportunities
… enables you to comply with funder mandates

Code and data sharing
… facilitates reuse and responding to requests
… promotes good research practices and reduces errors
… is beginning to be acknowledged
… comes with greater visibility
… helps you increase your research output

Practicing open science
… provides new opportunities for scientific collaboration
… is increasingly valued and mandated by institutions


Open publishing comes with greater visibility

more citations

more page views

more social media posts

Sources: McKiernan et al., eLife, 2016; Wang et al., Scientometrics, 2015

Open publishing lets you retain author rights and control reuse of materials

On the Role of the Striatum in Response Inhibition
Bram B. Zandbelt*, Matthijs Vink

Rudolf Magnus Institute of Neuroscience, Department of Psychiatry, University Medical Center Utrecht, Utrecht, The Netherlands

Abstract

Background: Stopping a manual response requires suppression of the primary motor cortex (M1) and has been linked to activation of the striatum. Here, we test three hypotheses regarding the role of the striatum in stopping: striatum activation during successful stopping may reflect suppression of M1, anticipation of a stop-signal occurring, or a slower response build-up.

Methodology/Principal Findings: Twenty-four healthy volunteers underwent functional magnetic resonance imaging (fMRI) while performing a stop-signal paradigm, in which anticipation of stopping was manipulated using a visual cue indicating stop-signal probability, with their right hand. We observed activation of the striatum and deactivation of left M1 during successful versus unsuccessful stopping. In addition, striatum activation was proportional to the degree of left M1 deactivation during successful stopping, implicating the striatum in response suppression. Furthermore, striatum activation increased as a function of stop-signal probability and was linked to activation in the supplementary motor complex (SMC) and right inferior frontal cortex (rIFC) during successful stopping, suggesting a role in anticipation of stopping. Finally, trial-to-trial variations in response time did not affect striatum activation.

Conclusions/Significance: The results identify the striatum as a critical node in the neural network associated with stopping motor responses. As striatum activation was related to both suppression of M1 and anticipation of a stop-signal occurring, these findings suggest that the striatum is involved in proactive inhibitory control over M1, most likely in interaction with SMC and rIFC.

Citation: Zandbelt BB, Vink M (2010) On the Role of the Striatum in Response Inhibition. PLoS ONE 5(11): e13848. doi:10.1371/journal.pone.0013848

Editor: Antoni Rodriguez-Fornells, University of Barcelona, Spain

Received July 21, 2010; Accepted October 15, 2010; Published November 4, 2010

Copyright: © 2010 Zandbelt, Vink. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the Netherlands Organization for Scientific Research (Veni grant to M.V.). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

* E-mail: [email protected]

Introduction

The ability to stop a response is crucial in everyday life. The stop-signal paradigm [1] provides a framework for investigating the processes underlying stopping. In this paradigm, go-signals requiring a response are infrequently followed by a stop-signal, indicating that the planned response should be stopped. Stopping performance depends on the outcome of an interactive race between a Go process (activated by the go-signal) building up to response threshold and a Stop process (activated by the stop-signal) that can inhibit the Go process [2]. The neural correlates of these Go and Stop processes have been found in the higher motor centers for eye movements [3,4], and such Go and Stop units are thought to be present in the primary motor cortex (M1) as well [5].
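The race logic just described can be made concrete with a toy simulation. The sketch below is an illustration for this discussion, not the paper's model: it implements the simpler independent race, in which stopping succeeds whenever the stop process (stop-signal delay plus stop-signal reaction time) finishes before the go process, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_successful_stop(ssd, n_trials=100_000, mu_go=0.45, sd_go=0.08, ssrt=0.20):
    """Toy independent-race model of a stop trial.

    The go process finishes at a normally distributed time; the stop
    process finishes at SSD + SSRT. Stopping succeeds when the stop
    process wins the race. Times are in seconds; values are illustrative.
    """
    go_finish = rng.normal(mu_go, sd_go, n_trials)  # go-process finish times
    stop_finish = ssd + ssrt                        # stop-process finish time
    return np.mean(go_finish > stop_finish)         # fraction of races the stop process wins

# Longer stop-signal delays give the go process a head start,
# so the probability of stopping successfully drops:
for ssd in (0.10, 0.20, 0.30):
    print(f"SSD = {ssd:.2f} s -> P(successful stop) = {p_successful_stop(ssd):.2f}")
```

Even this stripped-down version reproduces the paradigm's core behavioral regularity: inhibition probability falls as stop-signal delay increases.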

Converging lines of evidence suggest that a fronto-basal ganglia network is involved in controlling such Go and Stop units [for review, see [6]]. The striatum, the main input station of the basal ganglia, is considered an important region for stopping. Specifically, functional neuroimaging studies observe increased striatum activation during successful versus unsuccessful stopping [7,8,9,10,11], when comparing short to long stop-signal reaction times [12], and with a parametric increase in stop-signal probability [7,13]. Meta-analyses of functional neuroimaging studies of response inhibition confirm that the striatum is commonly recruited during stopping [14,15,16]. Clinical populations characterized by striatum dysfunction have stopping impairments [13,17,18,19,20]. Finally, striatum lesions cause stopping impairments in rats [21].

Three hypotheses have been put forward regarding the meaning of stopping-related activation of the striatum. First, it may reflect suppression of response-related M1 activation, as striatum activation and M1 deactivation co-occur with successful stopping [7,8]. Second, it may indicate anticipation of a stop-signal occurring, given that striatum activation and response delaying in order to improve stopping performance co-occur with increasing stop-signal probability [7]. Third, it may reflect a slower build-up of the Go process to response threshold, which would allow the Stop process sufficient time to cancel the response [8]. We refer to these concepts as the response suppression, stop-signal anticipation, and response build-up hypotheses, respectively.

Here, we investigate the role of the striatum in stopping, testing the hypotheses outlined above with a novel stop-signal paradigm (Fig. 1), in which stop-signal probability was manipulated using a visual cue. This enabled the measurement of response strategy adjustments in anticipation of stop-signals. Furthermore, to constrain waiting strategies that may limit the validity of the stop-signal paradigm [22], subjects were required to make timed rather than speeded responses [23]. We tested the hypotheses outlined above, using fMRI subtraction and psychophysiological interaction (PPI) analyses (Table 1). Specifically, we predict that if …



Technical Note

Within-subject variation in BOLD-fMRI signal changes across repeated measurements: Quantification and implications for sample size

Bram B. Zandbelt,a,⁎ Thomas E. Gladwin,a Mathijs Raemaekers,c Mariët van Buuren,a Sebastiaan F. Neggers,a René S. Kahn,a Nick F. Ramsey,b and Matthijs Vinka

a Rudolf Magnus Institute of Neuroscience, Department of Psychiatry, University Medical Center Utrecht, Utrecht, The Netherlands
b Rudolf Magnus Institute of Neuroscience, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
c Helmholtz Institute, Department of Functional Neurobiology, University of Utrecht, Utrecht, The Netherlands

Received 5 September 2007; revised 8 February 2008; accepted 14 April 2008. Available online 24 April 2008.

Functional magnetic resonance imaging (fMRI) can be used to detect experimental effects on brain activity across measurements. The success of such studies depends on the size of the experimental effect, the reliability of the measurements, and the number of subjects. Here, we report on the stability of fMRI measurements and provide sample size estimations needed for repeated measurement studies. Stability was quantified in terms of the within-subject standard deviation (σw) of BOLD signal changes across measurements. In contrast to correlation measures of stability, this statistic does not depend on the between-subjects variance in the sampled group. Sample sizes required for repeated measurements of the same subjects were calculated using this σw. Ten healthy subjects performed a motor task on three occasions, separated by one week, while being scanned. In order to exclude training effects on fMRI stability, all subjects were trained extensively on the task. Task performance, spatial activation pattern, and group-wise BOLD signal changes were highly stable over sessions. In contrast, we found substantial fluctuations (up to half the size of the group mean activation level) in individual activation levels, both in ROIs and in voxels. Given this large degree of instability over sessions, and the fact that the amount of within-subject variation plays a crucial role in determining the success of an fMRI study with repeated measurements, improving stability is essential. In order to guide future studies, sample sizes are provided for a range of experimental effects and levels of stability. Obtaining estimates of these latter two variables is essential for selecting an appropriate number of subjects.
© 2008 Elsevier Inc. All rights reserved.

Keywords: fMRI; Reliability; Within-subject variation; Sample size; Motor

Introduction

The effect of an intervention, for example pharmacological treatment or repetitive transcranial magnetic stimulation (rTMS), can be investigated with repeated measurements on the same subjects. By administering experimental and control treatment in random order to the same group of subjects, the mean difference between treatment conditions can be calculated and tested for statistical significance. Recently, this type of study (i.e. crossover design) has been applied to functional MRI (fMRI). For instance, fMRI signal changes were observed in the motor cortex of patients recovering from stroke after treatment with fluoxetine (Pariente et al., 2001), in the amygdala following oxytocin administration (Kirsch et al., 2005), and in the prefrontal cortex in response to a catecholamine-O-methyltransferase inhibitor (Apud et al., 2007). The success of such a design depends on statistical power, which in turn depends on (a) the difference between experimental and control treatment, (b) measurement error, and (c) sample size. For single-session fMRI studies, the effect of measurement error on statistical power and sample size has been determined (Desmond and Glover, 2002). These findings may not be valid for fMRI studies with multiple sessions, however, as factors that are stable within a session (e.g. subject position in the scanner) can differ between sessions (Genovese et al., 1997). To obtain an estimate of this between-session measurement error, a test–retest reliability analysis measuring the same variable on the same sample of subjects should be performed, in absence of any between-measurement experimental manipulation.
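To make the dependence of sample size on effect size and measurement error concrete, here is a back-of-the-envelope sketch (an illustration for this discussion, not the authors' tabulated procedure): it assumes a two-sided paired z-test and independent session errors, so the within-subject difference score has variance 2σw².

```python
from math import ceil
from statistics import NormalDist

def crossover_sample_size(effect, sigma_w, alpha=0.05, power=0.80):
    """Illustrative sample size for a two-session crossover fMRI study:
    detect a mean signal-change difference `effect` with a two-sided
    paired z-test, given within-subject SD `sigma_w` per session.
    Assumes independent session errors (difference SD = sqrt(2)*sigma_w);
    a sketch only, not the paper's exact method."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = z.inv_cdf(power)          # power requirement
    return ceil((z_alpha + z_power) ** 2 * 2 * sigma_w ** 2 / effect ** 2)

# Hypothetical numbers: detecting a 0.2% BOLD signal-change difference
# when sigma_w = 0.4% requires on the order of 63 subjects:
print(crossover_sample_size(effect=0.2, sigma_w=0.4))
```

Halving σw cuts the required sample size by a factor of four, which is why the paper argues that improving stability is essential.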

A number of studies have investigated the test–retest reliability of fMRI, and reported almost perfect (Aron et al., 2006; Fernandez et al., 2003; Specht et al., 2003) to at best moderate reliability (Raemaekers et al., 2007; Wei et al., 2004). The majority of studies expressed test–retest reliability of fMRI signal changes in terms of …

NeuroImage 42 (2008) 196–206 | www.elsevier.com/locate/ynimg

⁎ Corresponding author. Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, Room A.01.126, P.O. Box 85500, NL-3508 GA Utrecht, The Netherlands. Fax: +31 88 7555443. E-mail address: [email protected] (B.B. Zandbelt).

1053-8119/$ - see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.neuroimage.2008.04.183

“© 2010 Zandbelt, Vink. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.”

“© 2008 Elsevier Inc. All rights reserved.”

Non-Open Access vs. Open Access

Open publishing enables you to benefit from transparent peer review

Sources: Dumas-Mallet et al., R Soc Open Sci, 2017; Kowalczuk et al., BMJ Open, 2015

© 2017 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited

Review History

RSOS-160254.R0 (Original submission)

Review form: Reviewer 1 (Malcolm Macleod)
Is the manuscript scientifically sound in its present form? Yes
Are the interpretations and conclusions justified by the results? Yes
Is the language acceptable? Yes
Is it clear how to access all supporting data? At this stage it is not clear. I would suggest they include details of all publications in a dataset reposited e.g. at FigShare.
Do you have any ethical concerns with this paper? No

Low statistical power in biomedical science: a review of three human research domains

Estelle Dumas-Mallet, Katherine S. Button, Thomas Boraud, Francois Gonon and Marcus R. Munafò

Article citation details: R. Soc. open sci. 4: 160254. http://dx.doi.org/10.1098/rsos.160254

Review timeline
Original submission: 12 April 2016
1st revised submission: 1 July 2016
2nd revised submission: 14 October 2016
3rd revised submission: 2 December 2016
Final acceptance: 4 January 2017

Note: Reports are unedited and appear as submitted by the referee. The review history appears in chronological order.


Some OA journals provide open peer review

Open peer review has slightly higher quality than single-blind peer review


Open publishing can be achieved in multiple ways

Source: http://www.sherpa.ac.uk/romeo/; data retrieved on Jan 20, 2017

1. Open Access journals


AIMS Neurosci, BMC Neurosci, BMC Psychol, BMC Psychiatry, eLife, Front Hum Neurosci, Front Neurosci, Front Psychol, Scientific Data, Scientific Reports, Translational Psychiatry, Nat Commun, Nat Hum Behav, PeerJ, PLoS Biol, PLoS Comp Biol, PLoS ONE, R Soc Open Sci, eNeuro


Open publishing can be achieved in multiple ways

2. Hybrid journals

Source: http://www.sherpa.ac.uk/romeo/; data retrieved on Jan 20, 2017


JEP General, JEP HPP, JEP LMC, Psych Rev, Psychol Bull, Current Biology, Neuron*, Trends Cogn Sci*, Trends Neurosci*, Biol Psychiatry, Cortex, Curr Opin Behav Sci, Curr Opin Neurosci, Curr Opin Psychol, Eur Neuropsychopharmacol, Neuroimage, Neuropsychologia, J Cogn Neurosci, Neuropsychopharmacol, Mol Psychiatry, Proc Natl Acad Sci, Brain, Cereb Cortex, Soc Cogn Affect Neurosci, Phil Trans R Soc B, J Neurosci, Att Percept Psychophys, Brain Struct Func, Cogn Affect Behav Neurosci, Exp Brain Res, Psychopharmacol, Eur J Neurosci, Hum Brain Mapp


Open publishing can be achieved in multiple ways

3. Self-archiving

Source: http://www.sherpa.ac.uk/romeo/; data retrieved on Jan 20, 2017


Science, AIMS Neurosci, JAMA Psychiatry, Ann Rev Neurosci, Ann Rev Psychol, J Neurophysiol, JEP General, JEP HPP, JEP LMC, Psych Rev, Psychol Bull, Am J Psychiatry, BMC Neurosci, BMC Psychol, BMC Psychiatry, Current Biology, Neuron, Trends Cogn Sci, Trends Neurosci, eLife, Biol Psychiatry, Cortex, Curr Opin Behav Sci, Curr Opin Neurosci, Curr Opin Psychol, Eur Neuropsychopharmacol, Neuroimage, Neuropsychologia, Front Hum Neurosci, Front Neurosci, Front Psychol, N Engl J Med, J Cogn Neurosci, Nat Rev Neurosci, Neuropsychopharmacol, Scientific Data, Scientific Reports, Translational Psychiatry, Mol Psychiatry, Nature, Nat Commun, Nat Hum Behav, Nat Neurosci, Proc Natl Acad Sci, Brain, Cereb Cortex, Soc Cogn Affect Neurosci, PeerJ, PLoS Biol, PLoS Comp Biol, PLoS ONE, Phil Trans R Soc B, R Soc Open Sci, eNeuro, J Neurosci, Eur J Neurosci, Hum Brain Mapp, Att Percept Psychophys, Brain Struct Func, Cogn Affect Behav Neurosci, Exp Brain Res, Psychopharmacol

3. Self-archiving

           | none allowed | preprint only | pre- & postprint | all allowed
pub PDF    |      ✘       |      ✘        |        ✘         |     ✔
postprint  |      ✘       |      ✘        |        ✔         |     ✔
preprint   |      ✘       |      ✔        |        ✔         |     ✔

Source: http://www.sherpa.ac.uk/romeo/; data retrieved on Jan 20, 2017

Open publishing can be low-cost and even free of charge

1. Free of charge
R Soc Open Sci, Judgm Decis Mak

2. Free of charge in NL
Att Percept Psychophys, Brain Struct Func, Cogn Affect Behav Neurosci, Psychon Bull Rev, Psychopharmacol, Cogn Sci, Eur J Neurosci, Hum Brain Mapp, Psychophysiol, Topics Cogn Sci

3. Low-cost
PeerJ ($399–499, lifetime), RIO Journal (€50–650), Glossa (£300), J Open Psychol Data (£100), J Open Res Software (£100), Cogent Biology (pay what you can), Cogent Psychology (pay what you can), Collabra: Psychology ($875), F1000 Research ($1000), SAGE Open ($395)



Practicing open science provides you with new funding opportunities

Individual fellowships

Open science awards

Replication awards and grants

Preregistration awards

Travel grants

Sources: McKiernan et al., eLife, 2016

Practicing open science enables you to comply with funder mandates

Funder publication policies (Gold OA; Green OA) and data archiving policies (whether, what, when, where):

NWO (NL): Gold OA required (OA/hybrid journals); Green OA encouraged (pub. PDF or postprint, a.s.a.p., any repository); data archiving encouraged.

Wellcome Trust (UK): Gold OA required (OA/hybrid journals); Green OA required (pub. PDF or postprint, < 6 mo. after publication, Europe PMC or PMC); data archiving required (data, at publication).

EC Horizon 2020 (EU): Gold OA encouraged (OA/hybrid journals); Green OA required (pub. PDF or postprint, < 12 mo. after publication, any repository); archiving of data and code encouraged (a.s.a.p., any repository).

ERC (EU): Gold OA encouraged; Green OA encouraged (pub. PDF or postprint, a.s.a.p., any repository); archiving of data and code encouraged (< 6 mo. after publication, any repository).

NIH (US): Green OA required (pub. PDF or postprint); data archiving required (data, a.s.a.p., any repository).

Sources: http://www.sherpa.ac.uk/romeo/ (data retrieved on Jan 20, 2017); McKiernan et al., eLife, 2016


Code and data sharing facilitates reuse and responding to requests

“Can I have a copy of your data?” “I am not sure where my data is.”

Sources: Hanson, Surkis, Yacobucci, NYU Health Sciences Libraries, 2012; https://bit.ly/data_management_snafu

Code and data sharing promotes good research practices and reduces errors

Good research practices: version control, documentation, code review, unit testing, file organization

Fewer reporting errors in OA papers

Sources: Wicherts et al., PLoS ONE, 2011

Code and data sharing comes with increased visibility

Papers with open code get more citations

Papers with open data get more citations

Sources: Vandewalle, Comput Sci Eng, 2012; Piwowar & Vision, PeerJ, 2013

Code and data sharing is beginning to be acknowledged

Sources: https://osf.io/tvyxzl; Kidwell et al., PLoS Biol, 2016

Code and data sharing helps you increase your research output

Standalone code/data, data papers, software papers


Practicing open science provides opportunities for scientific collaboration


RESEARCH ARTICLE SUMMARY

PSYCHOLOGY

Estimating the reproducibility of psychological science

Open Science Collaboration*

INTRODUCTION: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. Scientific claims should not gain credence because of the status or authority of their originator but by the replicability of their supporting evidence. Even research of exemplary quality may have irreproducible empirical findings because of random or systematic error.

RATIONALE: There is concern about the rate and predictors of reproducibility, but limited evidence. Potentially problematic practices include selective reporting, selective analysis, and insufficient specification of the conditions necessary or sufficient to obtain the results. Direct replication is the attempt to recreate the conditions believed sufficient for obtaining a previously observed finding and is the means of establishing reproducibility of a finding with new data. We conducted a large-scale, collaborative effort to obtain an initial estimate of the reproducibility of psychological science.

RESULTS: We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. There is no single standard for evaluating replication success. Here, we evaluated reproducibility using significance and P values, effect sizes, subjective assessments of replication teams, and meta-analysis of effect sizes. The mean effect size (r) of the replication effects (Mr = 0.197, SD = 0.257) was half the magnitude of the mean effect size of the original effects (Mr = 0.403, SD = 0.188), representing a substantial decline. Ninety-seven percent of original studies had significant results (P < .05). Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.

CONCLUSION: No single indicator sufficiently describes replication success, and the five indicators examined here are not the only ways to evaluate reproducibility. Nonetheless, collectively these results offer a clear conclusion: A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes. Moreover, correlational evidence is consistent with the conclusion that variation in the strength of initial evidence (such as original P value) was more predictive of replication success than variation in the characteristics of the teams conducting the research (such as experience and expertise). The latter factors certainly can influence replication success, but they did not appear to do so here. Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise. However, innovative ideas become old news fast. Journal reviewers and editors may dismiss a new test of a published idea as unoriginal. The claim that “we already know this” belies the uncertainty of scientific evidence. Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both. Replication can increase certainty when findings are reproduced and promote innovation when they are not. This project provides accumulating evidence for many findings in psychological research and suggests that there is still more work to do to verify whether we know what we think we know.


The list of author affiliations is available in the full article online.
*Corresponding author. E-mail: [email protected]
Cite this article as Open Science Collaboration, Science 349, aac4716 (2015). DOI: 10.1126/science.aac4716

Figure: Original study effect size versus replication effect size (correlation coefficients). Diagonal line represents replication effect size equal to original effect size. Dotted line represents replication effect size of 0. Points below the dotted line were effects in the opposite direction of the original. Density plots are separated by significant (blue) and nonsignificant (red) effects.

Read the full article at http://dx.doi.org/10.1126/science.aac4716



Practicing open science enables you to contribute to open source projects


(Logos of open source projects, each with between 49 and 776 contributors)

Practicing open science is increasingly valued and mandated by institutions

Sources: https://www.academictransfer.com; http://www.nicebread.de/open-science-hiring-practices/

Take home message: Open science has many benefits for individual researchers


Open publishing
… comes with greater visibility
… lets you retain author rights and control reuse of materials
… enables you to benefit from transparent peer review
… can be achieved in many ways
… can be low-cost and even free of charge

Practicing open science
… provides you with new funding opportunities
… enables you to comply with funder mandates

Practicing open science
… provides new opportunities for scientific collaboration
… is increasingly valued and mandated by institutions

Code and data sharing
… facilitates reuse and responding to requests
… promotes good research practices and reduces errors
… is beginning to be acknowledged
… comes with greater visibility
… helps you increase your research output

… and we haven’t even talked about the benefits of preregistration :-)

Sources: Wagenmakers & Dutilh (2016). APS Observer.


Now, let’s learn about your views

Three debates

1. Engaging in open science will boost my career
2. Engaging in open science will accelerate my research
3. Engaging in open science will improve the quality of my research

Each statement will be debated by a proponent and an opponent.