
EDITORIAL

doi: 10.3389/fncom.2012.00094

An emerging consensus for open evaluation: 18 visions for the future of scientific publishing

Nikolaus Kriegeskorte 1*, Alexander Walther 1 and Diana Deca 2

1 Medical Research Council Cognition and Brain Sciences Unit, Cambridge, UK
2 Institute of Neuroscience, Technische Universität München, Munich, Germany
*Correspondence: [email protected]

Edited by:

Misha Tsodyks, Weizmann Institute of Science, Israel

Reviewed by:

Misha Tsodyks, Weizmann Institute of Science, Israel

A scientific publication system needs to provide two basic services: access and evaluation. The traditional publication system restricts access to papers by requiring payment, and it restricts the evaluation of papers by relying on just 2–4 pre-publication peer reviews and by keeping the reviews secret. As a result, the current system suffers from a lack of quality and transparency of the peer review process, and the only immediately available indication of a new paper’s quality is the prestige of the journal it appeared in.

Open access (OA) is now widely accepted as desirable and is beginning to become a reality. However, the second essential element, evaluation, has received less attention. Open evaluation (OE), an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current system and bring scientific publishing into the twenty-first century.

Evaluation steers the attention of the scientific community, and thus the very course of science. For better or worse, the most visible papers determine the direction of each field, and guide funding and public policy decisions. Evaluation, therefore, is at the heart of the entire endeavor of science. As the number of scientific publications explodes, evaluation and selection will only gain importance. A grand challenge of our time, therefore, is to design the future system by which we evaluate papers and decide which ones deserve broad attention and deep reading. However, it is unclear how exactly OE and the future system for scientific publishing should work. This motivated us to edit the Research Topic “Beyond open access: visions for open evaluation of scientific papers by post-publication peer review” in Frontiers in Computational Neuroscience. The Research Topic includes 18 papers, each going beyond mere criticism of the status quo and laying out a detailed vision for the ideal future system. The authors are from a wide variety of disciplines, including neuroscience, psychology, computer science, artificial intelligence, medicine, molecular biology, chemistry, and economics.

The proposals could easily have turned out to contradict each other, with some authors favoring solutions that others advise against. However, our contributors’ visions are largely compatible. While each paper elaborates on particular challenges, the solutions proposed have much overlap, and where distinct solutions are proposed, these are generally compatible. This puts us in a position to present our synopsis here as a coherent blueprint for the future system that reflects the consensus among the contributors.[1] Each section heading below refers to a design feature of the future system that was a prevalent theme in the collection. If the feature was overwhelmingly endorsed, the section heading below is phrased as a statement. If at least two papers strongly advised against the feature, the section heading is phrased as a question. Figure 1 visualizes to what extent each paper encourages or discourages the inclusion of each design feature in the future system. The ratings used in Figure 1 have been agreed upon with the authors of the original papers.[2]

SYNOPSIS OF THE EMERGING CONSENSUS

THE EVALUATION PROCESS IS TOTALLY TRANSPARENT
Almost all of the 18 visions favor total transparency. Total transparency means that all reviews and ratings are instantly published. This is in contrast to current practice, where the community is excluded and reviews are initially only visible to editors and later on to the authors (and ratings are often only visible to editors). Such secrecy opens the door to self-serving reviewer behavior, especially when the judgments are inherently subjective, such as the judgment of the overall significance of a paper. In a secret reviewing system, the question of a paper’s significance may translate in some reviewers’ minds to the question “How comfortable am I with this paper gaining high visibility now?” In a transparent evaluation system, the reviews and reviewers are subject to public scrutiny, and reviewers are thus more likely to ask themselves the more appropriate question “How likely is it that this paper will ultimately turn out to be important?”

THE PUBLIC EVALUATIVE INFORMATION IS COMBINED INTO PAPER PRIORITY SCORES
In a totally transparent evaluation process, the evaluative information (including reviews and ratings) is publicly available.

[1] The consensus, of course, is only among the contributors to this collection. A consensus among the scientific community at large has yet to be established. Note that scientists critical of the general idea of OE would not have chosen to contribute here. Nevertheless, assuming OE is seen as desirable, the collection does suggest that independent minds will produce compatible visions for how to implement it.
[2] With the exception of Erik Sandewall, whom we could not reach before this piece went to press.


FIGURE 1 | Overview of key design features across the 18 visions.

The design features on the left capture major recurrent themes that were addressed (positively or negatively) in the Research Topic on OE. The columns indicate to what extent each design feature is a key element (red), actively endorsed (light red), not elaborated upon (white), discouraged (light blue), or strongly discouraged (blue) in each of the 18 visions. Overall, there is wide agreement on the usefulness of most of the features (prevalence of light red and red) and limited controversy (red and blue cells in the same row), indicating an emerging consensus. The 18 visions are indicated by their first author in alphabetical order at the top. The papers are Bachmann (2011); Birukou et al. (2011); Florian (2012); Ghosh et al. (2012); Hartshorne and Schachner (2012); Hunter (2012); Ietto-Gillies (2012); Kravitz and Baker (2011); Kreiman and Maunsell (2011); Kriegeskorte (2012); Lee (2012); Pöschl (2012); Priem and Hemminger (2012); Sandewall (2012); Walther and van den Bosch (2012); Wicherts et al. (2012); Yarkoni (2012), and Zimmermann et al. (2012).

Most of the authors suggest the use of functions that combine the evaluative evidence into an overall paper priority score that produces a ranking of all papers. Such a score could be computed as an average of the ratings. The individual ratings could be weighted in the average, so as to control the relative influence of different rating scales (e.g., reliability vs. novelty vs. importance of the claims) and to give greater weight to raters that are either highly regarded in the field (by some quantitative measure, such as the h-index) or have proved to be reliable raters in the past.
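
To make the weighting concrete, the following is a minimal Python sketch of such a paper priority function. The rating scales, the weights, and the Rating structure are illustrative assumptions for this editorial, not elements of any particular proposal in the collection.

    # Minimal sketch of a paper priority score computed as a weighted average
    # of ratings: rating scales are weighted by SCALE_WEIGHTS, and each rater
    # contributes in proportion to an individual rater_weight. All names and
    # numbers are illustrative, not taken from any of the 18 papers.

    from dataclasses import dataclass

    @dataclass
    class Rating:
        rater_weight: float  # e.g., derived from standing or reviewing track record
        scores: dict         # rating scale -> score on that scale (e.g., 0-10)

    # Relative influence of the different rating scales (illustrative values).
    SCALE_WEIGHTS = {"reliability": 0.5, "novelty": 0.2, "importance": 0.3}

    def priority_score(ratings: list) -> float:
        """Weighted average of all available scores for one paper."""
        num, den = 0.0, 0.0
        for r in ratings:
            for scale, w in SCALE_WEIGHTS.items():
                if scale in r.scores:
                    num += r.rater_weight * w * r.scores[scale]
                    den += r.rater_weight * w
        return num / den if den else 0.0

    ratings = [
        Rating(rater_weight=2.0, scores={"reliability": 8, "novelty": 6, "importance": 7}),
        Rating(rater_weight=1.0, scores={"reliability": 5, "importance": 9}),
    ]
    print(round(priority_score(ratings), 2))  # 7.07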

ANY GROUP OR INDIVIDUAL CAN DEFINE A FORMULA FOR PRIORITIZING PAPERS, FOSTERING A PLURALITY OF EVALUATIVE PERSPECTIVES
Most authors support the idea that a plurality of evaluative perspectives on the literature is desirable. Rather than creating a centralized black-box system that ranks the entire literature, any group or individual should be enabled to access the evaluative information and combine it by an arbitrary formula to prioritize the literature. A constant evolution of competing priority scores will also make it harder to manipulate the perceived importance of a paper.
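
As a toy illustration of this plurality, the sketch below applies two hypothetical priority formulas to the same openly available evaluative data. The field names and weightings are invented for the example; the point is only that different formulas can yield different rankings of the same papers.

    # Sketch: two hypothetical evaluation formulas applied to the same open
    # evaluative data produce different rankings. Fields and weights invented.

    papers = {
        "paper-A": {"mean_rating": 7.1, "n_reviews": 12, "downloads": 300},
        "paper-B": {"mean_rating": 8.4, "n_reviews": 3,  "downloads": 2500},
    }

    def conservative_priority(p):
        # Trust explicit expert ratings; discount thinly reviewed papers.
        return p["mean_rating"] * min(p["n_reviews"], 10) / 10

    def attention_weighted_priority(p):
        # Mix ratings with usage statistics (risking a bias toward buzz).
        return 0.7 * p["mean_rating"] + 0.3 * (p["downloads"] / 1000)

    for formula in (conservative_priority, attention_weighted_priority):
        ranking = sorted(papers, key=lambda name: formula(papers[name]), reverse=True)
        print(formula.__name__, ranking)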

SHOULD EVALUATION BEGIN WITH A CLOSED, PRE-PUBLICATION STAGE?
Whether a closed, pre-publication stage of evaluation (such as the current system’s secret peer review) is desirable is controversial. On the one hand, the absence of any pre-publication filtering may open the gates to a flood of low-quality publications. On the other hand, providing permanent public access to a wide range of papers, including those that do not initially meet enthusiasm, may be a strength rather than a weakness. Much brilliant science was initially misunderstood. Pre-publication filtering comes at the cost of a permanent loss of value through errors in the initial evaluations. The benefit of publishing all papers may, thus, outweigh the cost of providing the necessary storage and access. “Publish, then filter” is one of the central principles that lend the web its power (Shirky, 2008). It might work equally well in science as it does in other domains, with post-publication filtering preventing the flood from cluttering our view of the literature.

SHOULD THE OPEN EVALUATION BEGIN WITH A DISTINCT STAGE, IN WHICH THE PAPER IS NOT YET CONSIDERED “APPROVED”?
Instead of a closed, pre-publication evaluation, we could define a distinct initial stage of the post-publication open evaluation that determines whether a paper receives an “approved” label. Whether this is desirable is controversial among the 18 visions. One argument in favor of an “approved” label is that it could serve the function of the current notion of “peer reviewed science,” suggesting that the claims made are somewhat reliable. However, the strength of post-publication OE is ongoing and continuous evaluation. An “approved” label would create an artificial dichotomy based on an arbitrary threshold (on some paper evaluation function). It might make it more difficult for the system to correct its errors as more evaluative evidence comes in (unless papers can cross back over to the “unapproved” state). Another argument in favor of an initial distinct stage of OE is that it could serve to incorporate an early round of review and revision. The authors could choose either to accept the initial evaluation or to revise the paper and trigger re-evaluation. However, revision and re-evaluation would be possible at any point of an open evaluation process anyway. Moreover, authors can always seek informal feedback (either privately among trusted associates or publicly via blogs) prior to formal publication.

THE EVALUATION PROCESS INCLUDES WRITTEN REVIEWS, NUMERICAL RATINGS, USAGE STATISTICS, SOCIAL-WEB INFORMATION, AND CITATIONS
There is a strong consensus that the OE process should include written reviews and numerical ratings. These classical elements of peer review continue to be useful. They represent explicit expert judgments and serve an important function that is distinct from the function of usage statistics and social-web information, which are also seen as useful by some of the authors. In contrast to explicit expert judgments, usage statistics and social-web information may highlight anything that receives attention (of the positive or negative variety), thus potentially valuing buzz and controversy over high-quality science. Finally, citations provide a slow signal of paper quality, emerging years after publication. Because citations are slow to emerge, they cannot replace the other signals. However, they arguably provide the ultimately definitive signal of a paper’s de facto importance.

THE SYSTEM UTILIZES SIGNED (ALONG WITH UNSIGNED) EVALUATIONS
Signed evaluations are a key element of five of the visions; only one vision strongly discourages heavy reliance on signed evaluations.

When an evaluation is signed, it affects the evaluator’s reputation. High-quality signed evaluations can help build a scientist’s reputation (thus motivating scientists to contribute). Conversely, low-quality signed evaluations can hurt a scientist’s reputation (thus motivating high standards in rating and reviewing). Signing creates an incentive for objectivity and a disincentive for self-serving judgments. But as signing adds weight to the act of evaluation, it might also create hesitation. Hesitation to provide a rash judgment may be desirable, but the system does require sufficient participation. Moreover, signing may create a disincentive to present critical arguments, as evaluators may fear potential social consequences of their criticism. The OE system should therefore collect both signed and unsigned evaluations, and combine the advantages of these two types of evaluation.

EVALUATORS’ IDENTITIES ARE AUTHENTICATED
Authentication of evaluator identities is a key element of five of the visions; one vision strongly discourages it. Authentication could be achieved by requiring login with a password before submitting evaluations. Authenticating the evaluator’s identity does not mean that the evaluator has to publicly sign the evaluation, but would enable the system to exclude lay people from the evaluation process and to relate multiple reviews and ratings provided by the same person. This could be useful for assessing biases and estimating the predictive power of the evaluations. Arguments against authenticating evaluator identities (unless the evaluator chooses to sign) are that it creates a barrier to participation and compromises transparency (the “system,” but not the public, knows the identity). However, authentication could use public aliases, allowing virtual evaluator identities (similar to blogger identities) to be tracked without any secret identity tracking. Note that (1) anonymous, (2) authenticated-unsigned, and (3) authenticated-signed evaluations each have different strengths and weaknesses and could all be collected in the same system. It would then fall to the designers of paper evaluation functions to decide how to optimally combine the different qualities of evaluative evidence.
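
The following sketch illustrates how these three identity classes might be represented, with a paper evaluation function free to weight them differently. The class weights are arbitrary placeholders, not values endorsed by any of the visions.

    # Sketch: evaluations tagged by identity class, weighted differently by a
    # paper evaluation function. Weights are illustrative placeholders.

    from enum import Enum

    class IdentityClass(Enum):
        ANONYMOUS = "anonymous"
        AUTHENTICATED_UNSIGNED = "authenticated-unsigned"
        AUTHENTICATED_SIGNED = "authenticated-signed"

    CLASS_WEIGHTS = {
        IdentityClass.ANONYMOUS: 0.3,
        IdentityClass.AUTHENTICATED_UNSIGNED: 0.7,
        IdentityClass.AUTHENTICATED_SIGNED: 1.0,
    }

    def weighted_mean(ratings):
        """ratings: list of (identity_class, score) pairs."""
        total = sum(CLASS_WEIGHTS[c] * s for c, s in ratings)
        weight = sum(CLASS_WEIGHTS[c] for c, _ in ratings)
        return total / weight if weight else 0.0

    print(weighted_mean([(IdentityClass.ANONYMOUS, 6),
                         (IdentityClass.AUTHENTICATED_SIGNED, 8)]))  # approx. 7.54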

REVIEWS AND RATINGS ARE META-EVALUATED
Most authors suggest meta-evaluation of individual evaluations. One model for meta-evaluation is to treat reviews and ratings like papers, such that paper evaluations and meta-evaluations can utilize the same system. Paper evaluation functions could retrieve meta-evaluations recursively and use this information for weighting the primary evaluations of each paper. None of the contributors to the Research Topic object to meta-evaluation.
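
One recursion step of such weighting could look like the sketch below, in which a review's weight in a paper's score is derived from the ratings the review itself has received. The shrinkage toward a prior is an assumption added for numerical stability, not part of any specific proposal.

    # Sketch of one recursion step of meta-evaluation: a review's weight in a
    # paper's score is derived from the ratings the review itself received.

    def review_weight(meta_ratings, prior=5.0, prior_strength=2.0):
        """Mean meta-rating of a review, shrunk toward a prior when the
        review has received few meta-evaluations."""
        n = len(meta_ratings)
        mean = sum(meta_ratings) / n if n else prior
        return (prior_strength * prior + n * mean) / (prior_strength + n)

    def paper_score(reviews):
        """reviews: dicts with a 'rating' for the paper and the
        'meta_ratings' that the review itself received."""
        weights = [review_weight(r["meta_ratings"]) for r in reviews]
        return sum(w * r["rating"] for w, r in zip(weights, reviews)) / sum(weights)

    reviews = [
        {"rating": 9, "meta_ratings": [2, 3]},     # poorly regarded review
        {"rating": 6, "meta_ratings": [8, 9, 9]},  # well regarded review
    ]
    print(round(paper_score(reviews), 2))  # 7.03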

PARTICIPATING SCIENTISTS ARE EVALUATED IN TERMS OF SCIENTIFIC OR REVIEWING PERFORMANCE IN ORDER TO WEIGHT PAPER EVALUATIONS
Almost all authors suggest that the system evaluate the evaluators. Evaluations of evaluators would be useful for weighting the multiple evaluations a given new paper receives. Note that this will require some form of authentication of the evaluators’ identities. Scientists could be evaluated by combining the evaluations of their publications. A citation-based example of this is the h-index, but the more rapidly available paper evaluations provided by the new system could also be used to evaluate an individual’s scientific performance. Moreover, the predictive power of a scientist’s previous evaluations could be estimated as an index of reviewing performance. An evaluation might be considered predictive to the extent that it deviates from previous evaluations, but matches later aggregate opinion.
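
A minimal sketch of this predictive-power idea follows, assuming a simple invented scoring rule: an early rating earns credit for agreeing with the later aggregate opinion, with a bonus for having dissented from the consensus at the time it was given.

    # Sketch of the 'predictive power' idea. The scoring rule is invented for
    # illustration; none of the 18 papers specifies this formula.

    def predictiveness(early_rating, consensus_at_time, later_consensus, scale=10.0):
        deviation = abs(early_rating - consensus_at_time)       # willingness to dissent
        accuracy = scale - abs(early_rating - later_consensus)  # agreement with hindsight
        return max(accuracy, 0.0) * (1.0 + deviation / scale)

    # A rater who dissented early and turned out to be right scores higher
    # than one who simply echoed the early consensus.
    print(predictiveness(9, consensus_at_time=5, later_consensus=9))  # 14.0
    print(predictiveness(5, consensus_at_time=5, later_consensus=9))  # 6.0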

THE OPEN EVALUATION PROCESS IS PERPETUALLY ONGOING, SUCH THAT PROMISING PAPERS ARE MORE DEEPLY EVALUATED
Almost all authors suggest a perpetually ongoing OE process. Ongoing evaluation means that there is no time limit on the evaluation process for a given paper. This enables the OE process to accumulate deeper and broader evaluative evidence for promising papers, and to self-correct when necessary, even if the error is only discovered long after publication. Initially exciting papers that turn out to be incorrect could be debunked. Conversely, initially misunderstood papers could receive their due respect when the field comes to appreciate their contribution. None of the authors objects to perpetually ongoing evaluation.

FORMAL STATISTICAL INFERENCE IS A KEY COMPONENT OF THE EVALUATION PROCESS
Many of the authors suggest a role for formal statistical inference in the evaluation process. Confidence intervals on evaluations would improve the way we allocate our attention, preventing us from preferring papers that are not significantly preferable and enabling us to appreciate the full range of excellent contributions, rather than only those that find their way onto a stage of limited size, such as the pages of Science and Nature. To the extent that excellent papers do not significantly differ in their evaluations, the necessary selection would rely on content relevance.
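
As a simple illustration, the sketch below places a normal-approximation confidence interval on each paper's mean rating and checks whether the intervals overlap. The procedure and the data are invented for the example; the collection does not prescribe a particular inferential method.

    # Sketch: approximate 95% confidence intervals on two papers' mean ratings,
    # with a rough overlap check as a stand-in for a formal comparison.

    import statistics

    def rating_ci(ratings, z=1.96):
        """Approximate 95% confidence interval for the mean rating."""
        n = len(ratings)
        mean = statistics.mean(ratings)
        sem = statistics.stdev(ratings) / n ** 0.5
        return mean - z * sem, mean + z * sem

    paper_x = [7, 8, 6, 9, 7, 8, 7, 8]
    paper_y = [6, 9, 8, 7, 9, 6, 8, 7]

    lo_x, hi_x = rating_ci(paper_x)
    lo_y, hi_y = rating_ci(paper_y)
    overlap = lo_x <= hi_y and lo_y <= hi_x
    print(f"X: [{lo_x:.2f}, {hi_x:.2f}]  Y: [{lo_y:.2f}, {hi_y:.2f}]  "
          f"not clearly distinguishable: {overlap}")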

THE NEW SYSTEM CAN EVOLVE FROM THE PRESENT ONE, REQUIRING NO SUDDEN REVOLUTIONARY CHANGE
Almost all authors suggest that the ideal system for scientific publishing can evolve from the present one, requiring no sudden revolutionary change. The key missing element is a powerful general OE system. An OE system could initially serve to more broadly and deeply evaluate papers published in the current system. Once OE has proven its power and its evaluations are widely trusted, traditional pre-publication peer review will no longer be needed to establish a paper as part of the literature. Although the ideal system can evolve, it might take a major public investment (comparable to the establishment of PubMed) to provide a truly transparent, widely trusted OE system that is independent of the for-profit publishing industry.

CONCLUDING REMARKS
OA and OE are the two complementary elements that will bring scientific publishing into the twenty-first century. So far scientists have left the design of the evaluation process to journals and publishing companies. However, the steering mechanism of science should be designed by scientists. The cognitive, computational, and brain sciences are best prepared to take on this task, which will involve social and psychological considerations, software design, modeling of the network of scientific papers and their interrelationships, and inference on the reliability and importance of scientific claims. Ideally, the future system will derive its authority from a scientific literature on OE and on methods for inference from the public evaluative evidence. We hope that the largely converging and compatible arguments in the papers of the present collection will provide a starting point.

REFERENCES
Bachmann, T. (2011). Fair and open evaluation may call for temporarily hidden authorship, caution when counting the votes, and transparency of the full pre-publication procedure. Front. Comput. Neurosci. 5:61. doi: 10.3389/fncom.2011.00061
Birukou, A., Wakeling, J. R., Bartolini, C., Casati, F., Marchese, M., Mirylenka, K., et al. (2011). Alternatives to peer review: novel approaches for research evaluation. Front. Comput. Neurosci. 5:56. doi: 10.3389/fncom.2011.00056
Florian, R. V. (2012). Aggregating post-publication peer reviews and ratings. Front. Comput. Neurosci. 6:31. doi: 10.3389/fncom.2012.00031
Ghosh, S. S., Klein, A., Avants, B., and Millman, K. J. (2012). Learning from open source software projects to improve scientific review. Front. Comput. Neurosci. 6:18. doi: 10.3389/fncom.2012.00018
Hartshorne, J. K., and Schachner, A. (2012). Tracking replicability as a method of post-publication open evaluation. Front. Comput. Neurosci. 6:8. doi: 10.3389/fncom.2012.00008
Hunter, J. (2012). Post-publication peer review: opening up scientific conversation. Front. Comput. Neurosci. 6:63. doi: 10.3389/fncom.2012.00063
Ietto-Gillies, G. (2012). The evaluation of research papers in the XXI century. The Open Peer Discussion system of the World Economics Association. Front. Comput. Neurosci. 6:54. doi: 10.3389/fncom.2012.00054
Kravitz, D. J., and Baker, C. I. (2011). Toward a new model of scientific publishing: discussion and a proposal. Front. Comput. Neurosci. 5:55. doi: 10.3389/fncom.2011.00055
Kreiman, G., and Maunsell, J. (2011). Nine criteria for a measure of scientific output. Front. Comput. Neurosci. 5:48. doi: 10.3389/fncom.2011.00048
Kriegeskorte, N. (2012). Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Front. Comput. Neurosci. 6:79. doi: 10.3389/fncom.2012.00079
Lee, C. (2012). Open peer review by a selected-papers network. Front. Comput. Neurosci. 6:1. doi: 10.3389/fncom.2012.00001
Pöschl, U. (2012). Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Front. Comput. Neurosci. 6:33. doi: 10.3389/fncom.2012.00033
Priem, J., and Hemminger, B. M. (2012). Decoupling the scholarly journal. Front. Comput. Neurosci. 6:19. doi: 10.3389/fncom.2012.00019
Sandewall, E. (2012). Maintaining live discussion in two-stage open peer review. Front. Comput. Neurosci. 6:9. doi: 10.3389/fncom.2012.00009
Shirky, C. (2008). Here Comes Everybody: The Power of Organizing Without Organizations. New York, NY: Penguin Press.
Walther, A., and van den Bosch, J. F. (2012). FOSE: a framework for open science evaluation. Front. Comput. Neurosci. 6:32. doi: 10.3389/fncom.2012.00032
Wicherts, J. M., Kievit, R. A., Bakker, M., and Borsboom, D. (2012). Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci. 6:20. doi: 10.3389/fncom.2012.00020
Yarkoni, T. (2012). Designing next-generation platforms for evaluating scientific output: what scientists can learn from the social web. Front. Comput. Neurosci. 6:72. doi: 10.3389/fncom.2012.00072
Zimmermann, J., Roebroeck, A., Uludag, K., Sack, A. T., Formisano, E., Jansma, B., et al. (2012). Network-based statistics for a community driven transparent publication process. Front. Comput. Neurosci. 6:11. doi: 10.3389/fncom.2012.00011

Received: 23 October 2012; accepted: 24 October 2012; published online: 15 November 2012.
Citation: Kriegeskorte N, Walther A and Deca D (2012) An emerging consensus for open evaluation: 18 visions for the future of scientific publishing. Front. Comput. Neurosci. 6:94. doi: 10.3389/fncom.2012.00094

Copyright © 2012 Kriegeskorte, Walther and Deca. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.
